diff -Nru gnocchi-3.1.2/AUTHORS gnocchi-3.1.9/AUTHORS --- gnocchi-3.1.2/AUTHORS 2017-03-20 12:31:21.000000000 +0000 +++ gnocchi-3.1.9/AUTHORS 2017-07-24 16:25:26.000000000 +0000 @@ -64,12 +64,14 @@ jizilian liu-sheng liusheng +liyi lvdongbing melissaml nellysmitt root shengping zhang shengping zhang +sum12 twm2016 xialinjuan xiaozhuangqing diff -Nru gnocchi-3.1.2/ChangeLog gnocchi-3.1.9/ChangeLog --- gnocchi-3.1.2/ChangeLog 2017-03-20 12:31:21.000000000 +0000 +++ gnocchi-3.1.9/ChangeLog 2017-07-24 16:25:25.000000000 +0000 @@ -1,6 +1,62 @@ CHANGES ======= +3.1.9 +----- + +* Make compatible with tenacity<4.0.0 +* Only retry connection to external components in metricd + +3.1.8 +----- + +* tests: fix gabbi aggregation tests +* tests: don't use wait without pid +* ceph: Remove the async read of incoming measures + +3.1.7 +----- + +* ceph: change default timeout +* ceph: Allow to configure timeout +* rest: return 404 when granularity does not exist +* Install gnocchi flavor of pifpaf +* tests: Update travis/tox configuration +* travis: remove install step +* travis: fetch all refs for docs +* Install pifpaf with ceph extra + +3.1.6 +----- + +* storage: fix resample on empty metric +* Fix doc build +* archive\_policy: Raise Error if calculated points is < 0 + +3.1.5 +----- + +* storage: introduce add\_measures\_batch for Ceph +* Limit Sphinx to <1.6.0 +* Add travis file for Gnocchi 3.1 + +3.1.4 +----- + +* swift: make sure to retry if the client cannot find Swift +* retry to get block on error + +3.1.3 +----- + +* Use NULL as creator in ResourceUUID conversion +* Revert "block pbr 2.1.0" +* block pbr 2.1.0 +* default\_aggregation\_methods configuration worked in gnocchi-upgrade +* devstack: Allow to change the processing delay +* Correct bad use response\_strings in live.yaml +* tempest: don't use Ceilometer's resource type + 3.1.2 ----- @@ -10,11 +66,11 @@ 3.1.1 ----- -* s3: set maximum length for s3_bucket_prefix option +* s3: set maximum length for s3\_bucket\_prefix 
option * s3: use a different bucket prefix for each test * s3: fix new metric listing * s3: fix minimum botocore version -* ensure original_resource_id is not none +* ensure original\_resource\_id is not none * fix bad slash migration * Update .gitreview for stable/3.1 @@ -23,7 +79,7 @@ * tools: make measure injector works without gnocchiclient * paginate ceph report generation -* indexer: make sure original_resource_id is never NULL +* indexer: make sure original\_resource\_id is never NULL * rest: string → UUID conversion for resource.id to be unique per user * tests: specify columns to compare in resource list * modernise gabbi tests @@ -48,7 +104,7 @@ * Revert "indexer: fix datetime with mysql >= 5.7.17" * Update Pandas requirements to 0.18 * Add a release note about storage/incoming split -* drop gnocchi__container object +* drop gnocchi\_\_container object * indexer: catch another mysql exception * stop validating aggregation on init * increase timeout @@ -59,7 +115,7 @@ * carbonara: add resample() benchmark * utils: use proper timedelta conversion * tests: increase benchmark timeout for Carbonara -* sqlalchemy: fix compat search on created_by_project_id +* sqlalchemy: fix compat search on created\_by\_project\_id * serialise: simplify array format * carbonara: add merge() benchmark * carbonara: don't use groupby for split @@ -69,28 +125,28 @@ * carbonara: Add benchmark for split() * carbonara: use numpy for unserialization * carbonara: use numpy for serialization -* carbonara: Don't use clean_ts() +* carbonara: Don't use clean\_ts() * carbonara: handle timestamps from struct with numpy * carbonara: remove a pandas.iteritems() -* carbonara: prepare datetime for pandas.to_datetime() +* carbonara: prepare datetime for pandas.to\_datetime() * add metricd tester for profiling * rest: returns orignal resource id -* indexer: fix migration script "no_more_slash" +* indexer: fix migration script "no\_more\_slash" * Remove workaround to upgrade from 2.2.0 * utils: 
allow ResourceUUID to convert UUID * cleanup invalid upgrade errors -* archive_policy: provide a boolean storage archive policy by default +* archive\_policy: provide a boolean storage archive policy by default * storage: add more debug information to trace behaviour -* opts: list entry points with pkg_resources rather than stevedore +* opts: list entry points with pkg\_resources rather than stevedore * mysql: fix timestamp upgrade * rest: reject / as resource id and metric name * tests: Fix upgrade script * ceph: enhance the documentation * Introduce "basic" authentication mechanism * don't override logging when loading alembic module -* upgrade: fix OS_AUTH_TYPE variable name +* upgrade: fix OS\_AUTH\_TYPE variable name * Add sem-ver flag so pbr generates correct version -* ceph: fix data compression when oldest_mutable_timestamp == next(key) +* ceph: fix data compression when oldest\_mutable\_timestamp == next(key) * Add gnocchi-config-generator * Remove broken script * Fix a typo in estimated sizing per metric under archive policies @@ -99,24 +155,24 @@ * ceph: Workaround for oslo.config interpolation bug * indexer: fix resource type update * Tests to confirm resources are searchable -* rest: introduce auth_helper to filter resources -* rest: add auth_mode to pick authentication mode +* rest: introduce auth\_helper to filter resources +* rest: add auth\_mode to pick authentication mode * Move default policy.json away from etc/ * Ship api-paste.ini out of etc/ * rest: make sure 409 is returned when double creating resource with non-UUID * tools: import a small tools to compute size of archive policies -* archive_policy: lighten the default archive policies +* archive\_policy: lighten the default archive policies * run-tests: use case rather than if/elif/else * metricd: move metricd options in metricd group * storage: remove temporary incoming setup * Introduce new storage groups for storage * devstack: prepare ceph keyring before using it -* Adjust testr 
group_regex to not group on 'prefix' +* Adjust testr group\_regex to not group on 'prefix' * storage: split s3 driver * Fix expected content-type and move CORS tests to gabbi * [doc] Note lack of constraints is a choice * gabbi: remove unused variable -* rest: catch create_metric duplicate +* rest: catch create\_metric duplicate * Revert "add mysql minimum version check" * add mysql minimum version check * All granularity input should be parsed as timespan @@ -125,13 +181,13 @@ * Enable H904 hacking check * storage: split swift driver * doc: add a page talking about collectd support -* Enable oslo_middleware healthcheck middleware by default +* Enable oslo\_middleware healthcheck middleware by default * use datetime when defining series range * storage: split ceph driver * storage: split file driver * fix oslo.db 4.15.0 breakage -* rest: remove user_id and project_id from metric schema -* api: use egg entry_point rather than code path +* rest: remove user\_id and project\_id from metric schema +* api: use egg entry\_point rather than code path * doc: Add reference to gnocchi-nagios tool * fix logging.. 
* rest: don't ignore measures of created metrics @@ -141,11 +197,11 @@ * config: only include oslo.middleware options that are shipped * doc: remove unused links * file: remove tmp configuration -* storage: remove _pending_measures_to_process_count() -* storage: split process_new_measures() +* storage: remove \_pending\_measures\_to\_process\_count() +* storage: split process\_new\_measures() * Remove 95pct and median from default archive policies * rest: wait for the thread pool executor result -* indexer: list_metric(), skip sql if names is empty +* indexer: list\_metric(), skip sql if names is empty * rest: Don't use private webob API * rest: fix batching error handling * rest: don't fail if the batch measure is not a dict @@ -154,9 +210,9 @@ * utils: do not retry on any exception * Remove usage of deprecated operatorPrecedence and remove duplicate operators * drop pytimeparse requirement -* support pandas.to_timedelta +* support pandas.to\_timedelta * Bump hacking to 0.12 -* rest: use flatten_dict_to_keypairs instead of recursive_keypairs +* rest: use flatten\_dict\_to\_keypairs instead of recursive\_keypairs * carbonara: add support for Gnocchi v2 measures format * support resampling on aggregation endpoint * support resampling @@ -164,7 +220,7 @@ * rest: allow to create missing metrics when sending measures in batch * Interpolate strings using logging own methods * metricd: retry slowly coordination connection failure -* rest: don't use is_body_seekable +* rest: don't use is\_body\_seekable * Allow timespan to be floating values * rest: empty search query in resource search * ceph: move out of xattr completely @@ -177,20 +233,20 @@ * Update doc because default services are all being added to settings * accommodate new oslo.config * devstack: stop all gnocchi services, not just api -* Fix incorrect EXTRA_FLAVOR in plugin.sh +* Fix incorrect EXTRA\_FLAVOR in plugin.sh * rest: using ujson to deserialize * doc: add s3 to the list of Carbonara based drivers * Fix 
typo in release note file * Unify timestamp parsing * rest: fix Epoch timestamp parsing -* test: rewrite test_post_unix_timestamp in Gabbi -* Remove pecan_debug option -* test: allow to pass OS_DEBUG +* test: rewrite test\_post\_unix\_timestamp in Gabbi +* Remove pecan\_debug option +* test: allow to pass OS\_DEBUG * Remove unused requests dependency * carbonara: fix SplitKey with datetime greater than 32bits value * Revert "Remove the file named MANIFEST.in" * Add helper for utcnow to epoch nano -* Add http_proxy_to_wsgi to api-paste +* Add http\_proxy\_to\_wsgi to api-paste * Use Cotyledon oslo config glue * Add a S3 based storage driver * tox: only install all storage drivers in py-$index or py-#index-all @@ -207,10 +263,10 @@ * tests: Cover resource-type modification * Use xx=None instead of xx=[] to initialize the default value * Fix some gabbi tests -* Add STRICT_RESPONSE_HEADERS check to gabbi tests +* Add STRICT\_RESPONSE\_HEADERS check to gabbi tests * Add upgrade targets for Gnocchi 3.0 * tox: shorter envdir name for upgrade target -* remove the pandas module in test test_carbonara.py +* remove the pandas module in test test\_carbonara.py * doc: include stable/3.0 release notes * Fix typos in tests/gabbi/gabbits/resource.yaml * Add granularity in searching for values in metrics @@ -218,18 +274,18 @@ * Replace retrying with tenacity * ceph: fix python3 issue * Add simple upgrade tests -* compute new first_block_timestamp once +* compute new first\_block\_timestamp once 3.0.0 ----- * track the metric locked time * Fix gnocchi-metricd shutdown -* resource_type: check that min is not None before comparing with max +* resource\_type: check that min is not None before comparing with max * cli: do not run tooz watchers in parallel * metricd: fix a data type inconsistent bug * ceph: rename optional extra names -* Fix PostgreSQL migration script with resource_type_state_enum +* Fix PostgreSQL migration script with resource\_type\_state\_enum * carbonara-drivers: 
elapsed can be zero * Fix a typo in sqlalchemy.py * devstack: rename werkzeug to simple @@ -258,7 +314,7 @@ * share groupings across aggregates * Put the regex first * swift: switch default auth version to 3 -* tests/carbonara: use _serialize_v2 without mocking +* tests/carbonara: use \_serialize\_v2 without mocking * carbonara: Timeserie.aggregate * carbonara: optimize uncompressed serialization * carbonara: compress non padded timeseries @@ -277,7 +333,7 @@ * test: fix a random failure with metric listing * storage: return list of split as a set * swift: remove retrying code -* carbonara: factorize out _get_unaggregated_timeserie_and_unserialize +* carbonara: factorize out \_get\_unaggregated\_timeserie\_and\_unserialize * correct the debug log info, add a blank in log info * drop non-I/O threading in upgrade * paginate on upgrade @@ -286,7 +342,7 @@ * storage: make sure the deletion tests are synchronous * storage: do not list metrics on each measure processing * storage: add an intermediate verification -* carbonara: expose first_block_timestamp as public +* carbonara: expose first\_block\_timestamp as public * Fix Gnocchi tempest.conf generation * ceph: fix write emulation * Remove null drivers @@ -317,26 +373,26 @@ ----- * drop v1.3 to v2.x migration, drop TimeSerieArchive -* sqlalchemy: increase the number of max_retries +* sqlalchemy: increase the number of max\_retries * test: fix race condition in update testing * add support for coordination * tests: extend the test timeout to 120s for migration sync testing * Add home-page in setup.cfg * carbonara: embed a benchmark tool -* carbonara: do not use oslo_log -* indexer: put extend_existing in __tables_args__ +* carbonara: do not use oslo\_log +* indexer: put extend\_existing in \_\_tables\_args\_\_ * improve task distribution * sqlalchemy: simplify kwarg of retry * Add iso8601 to requirements * metricd: cleanup logging message for progress -* sqlalchemy: remove deprecated kwargs retry_on_request -* 
sqlalchemy: fix PostgreSQL transaction aborted in unmap_and_delete_tables +* sqlalchemy: remove deprecated kwargs retry\_on\_request +* sqlalchemy: fix PostgreSQL transaction aborted in unmap\_and\_delete\_tables * Fix list resource race * rest: set useful default values for CORS middleware * rest: enable CORS middleware without Paste * truncate AggregatedTimeSerie on init * return explicitly InvalidPagination sort key -* fix object_exists reference +* fix object\_exists reference * Indicate we added a bunch of new features * doc: Update grafana plugin documentation * metricd: use Cotyledon lib @@ -353,19 +409,19 @@ * Use pbr WSGI script to build gnocchi-api * ceph: change make method names for new measures * Expose resource type state to the API -* track resource_type creation/deletion state +* track resource\_type creation/deletion state * carbonara: compress all TimeSerie classes using LZ4 * separate cleanup into own worker * Tuneup gabbi metric.yaml file to modern standards -* Tuneup gabbi resource_type.yaml file to modern standards -* Tuneup gabbi search_metric.yaml file to modern standards -* Tuneup gabbi resource_aggregation.yaml file to modern standards +* Tuneup gabbi resource\_type.yaml file to modern standards +* Tuneup gabbi search\_metric.yaml file to modern standards +* Tuneup gabbi resource\_aggregation.yaml file to modern standards * Tuneup gabbi resource.yaml file to modern standards -* sqlalchemy: fix MySQL error handling in list_resources -* _carbonara: use tooz heartbeat management -* _carbonara: set default aggregation_workers_number to 1 +* sqlalchemy: fix MySQL error handling in list\_resources +* \_carbonara: use tooz heartbeat management +* \_carbonara: set default aggregation\_workers\_number to 1 * Enable CORS by default -* Rename gabbits with _ to have - instead +* Rename gabbits with \_ to have - instead * Correct concurrency of gabbi tests for gabbi 1.22.0 * tests: fix Gabbi live test to not rely on legacy resource types * swift: force 
retry to 1 @@ -373,46 +429,46 @@ * Added endpoint type on swift configuration * use async delete when remove measures * Fix tempest tests that use SSL -* _carbonara: fix race condition in heartbeat stop condition +* \_carbonara: fix race condition in heartbeat stop condition * enable pagination when querying metrics -* doc: include an example with the `like' operator +* doc: include an example with the \`like' operator * metricd: only retry on attended errors and print error when coordinator fails * metricd: no max wait, fix comment * test: move root tests to their own class * rest: report dynamic aggregation methods in capabilities in a different field -* _carbonara: stop heartbeat thread on stop() +* \_carbonara: stop heartbeat thread on stop() * tests: create common resources at class init time -* tests: remove skip_archive_policies_creation +* tests: remove skip\_archive\_policies\_creation * tests: do not create legacy resources -* sqlalchemy: add missing constraint delete_resource_type() +* sqlalchemy: add missing constraint delete\_resource\_type() * sqlalchemy: no fail if resources and type are deleted under our feet * sqlalchemy: retry on PostgreSQL catalog errors too -* sqlalchemy: retry on deadlock in delete_resource_type() +* sqlalchemy: retry on deadlock in delete\_resource\_type() * Enable releasenotes documentation * Tuneup gabbi transformedids.yaml file to modern standards * Tuneup gabbi search.yaml file to modern standards * Tuneup gabbi pagination.yaml file to modern standards -* Tuneup gabbi metric_granularity.yaml file to modern standards +* Tuneup gabbi metric\_granularity.yaml file to modern standards * raise NoSuchMetric when deleting metric already marked deleted * Tuneup gabbi history.yaml file to modern standards -* Tuneup gabbi batch_measures.yaml file to modern standards +* Tuneup gabbi batch\_measures.yaml file to modern standards * Tuneup gabbi base.yaml file to modern standards * Tuneup gabbi async.yaml file to modern standards * 
Tuneup gabbi archive.yaml file to modern standards -* Tuneup gabbi archive_rule.yaml file to modern standards +* Tuneup gabbi archive\_rule.yaml file to modern standards * Tuneup gabbi aggregation.yaml file to modern standards * fix some typos in doc, comment & code * add unit column for metric * devstack: ensure grafana plugin for 2.6 is installed * Revert "tests: protect database upgrade for gabbi tests" * Make tempest tests compatible with keystone v3 -* sqlalchemy: retry on deadlock for create_resource_type() -* sqlalchemy: retry on deadlocks in get_resource() -* sqlalchemy: avoid deadlock on list_metrics() -* sqlalchemy: retry on deadlock for create_metric() -* sqlalchemy: retry on deadlock for create_resource() -* sqlalchemy: add retry on deadlock for delete_resource() -* sqlalchemy: set max_retries & all when retrying +* sqlalchemy: retry on deadlock for create\_resource\_type() +* sqlalchemy: retry on deadlocks in get\_resource() +* sqlalchemy: avoid deadlock on list\_metrics() +* sqlalchemy: retry on deadlock for create\_metric() +* sqlalchemy: retry on deadlock for create\_resource() +* sqlalchemy: add retry on deadlock for delete\_resource() +* sqlalchemy: set max\_retries & all when retrying * tests: move custom agg setup code in the tests using it * tests: protect database upgrade for gabbi tests * fix details filter for measures report @@ -433,14 +489,14 @@ * [alembic] delete a blank line from script.py.mako * Fix uuidgen not installed in some ubuntu installs * Remove annoying debug log -* Replace logging with oslo_log +* Replace logging with oslo\_log * use thread safe fnmatch -* fix resource_type tablename for instance_net_int +* fix resource\_type tablename for instance\_net\_int * tests: Add more integration tests coverage * Drop useless enum * Don't delete archive policy used by ap rule * Reduce length of some foreign keys -* Fix foreignkey names of host/host_history table +* Fix foreignkey names of host/host\_history table * doc: add resource 
history in features * doc: remove legacy resource types listing * Fix broken ceilometer resources migration script @@ -466,14 +522,14 @@ * dict.iteritems() method is not available in py3 * carbonara: catch LockAcquireFailed exception * devstack: rename UWSGI file -* fix resource_type table migration +* fix resource\_type table migration * ceph: make requirements clearer * devstack: remove useless ceph permission * devstack: Allow to use devstack-plugin-ceph * Fix --version string on all command line tools * resample only data affected by new measures * cleanup split function -* remove timeserie_filter param +* remove timeserie\_filter param * InfluxDB: drop support * devstack: allow gnocchi-api to run on different host from keystone * Add some resource types tests @@ -492,7 +548,7 @@ * Fix an IN-predicate SAWarning * Added docs about new snmp related resource types * Revert "Log retrieve/store data speed in Carbonara based drivers" -* workaround to _strptime import issue on py2 +* workaround to \_strptime import issue on py2 * Add note to the docs regarding archive-policy deletion * Replace deprecated LOG.warn with LOG.warning * doc: fix link to config generator conf file @@ -527,16 +583,16 @@ * Use '#flake8: noqa' to skip file check * storage: make sure we delete old measures respecting archive policy * rest: implement groupby in resource/metric aggregation -* indexer: replace get_metrics() by list_metrics() +* indexer: replace get\_metrics() by list\_metrics() * partition unprocessed measures across workers * Allows to use cradox with ceph storage * Log retrieve/store data speed in Carbonara based drivers * shrink test length -* storage: autoconfigure coordination_url +* storage: autoconfigure coordination\_url * carbonara: compress AggregatedTimeSerie using LZ4 * minimise calls when generating report * Update configuration document -* storage: fix typo in Metric.__eq__ +* storage: fix typo in Metric.\_\_eq\_\_ * Rework the handling of the resource ID * 
carbonara: make sampling mandatory in AggregatedTimeSerie * ceph: remove useless code @@ -545,7 +601,7 @@ * Don't use time.timezone * ceph: delete measures asynchronously * grab less data when adding measures -* KEYSTONE_CATALOG_BACKEND is deprecated +* KEYSTONE\_CATALOG\_BACKEND is deprecated * Extend measures batching to named metrics * minimise swift processing metric size * carbonara: compute sampling at init time @@ -557,10 +613,10 @@ * Remove useless indexes * devstack: fix type when keystone is absent * Renable lint for WebTest -* Added original_resource_id field into resource +* Added original\_resource\_id field into resource * Bypass the auth when listing Gnocchi versions * setup: build config file at build time -* storage: run expunge_metric in sync mode +* storage: run expunge\_metric in sync mode * swift, ceph: take smaller batch of new measures to process * tests: remove measures reporting test * devstack: support publicURL retrieval in both keystone v2/v3 format @@ -584,15 +640,15 @@ * Use overtest to setup InfluxDB * Handle all resources type with one controller * Generate configuration file in sdist -* Resource list filtered by project_id or created_by user_id and project_id +* Resource list filtered by project\_id or created\_by user\_id and project\_id * influxdb: do not try to create a database * devstack: remove Ceilometer support * devstack: remove Aodh support -* indexer: always order the result returned by list_metrics() -* Fix the wrong datetime format in _store_measures method +* indexer: always order the result returned by list\_metrics() +* Fix the wrong datetime format in \_store\_measures method * ceph: fix the metric list to process with new measures * Do not enable Keystone by default -* Fix the typos in the __init__.py +* Fix the typos in the \_\_init\_\_.py * Use keystone middleware fixture * Replace the redundant code with the utility function * Fix gnocchi resource update without change @@ -603,7 +659,7 @@ * doc: add granularity 
argument for measure retrieval * carbonara: allow to split AggregatedTimeSerie * carbonara: allow to create TimeSerie from existing ts -* storage: round back from_timestamp in get_measures() +* storage: round back from\_timestamp in get\_measures() * carbonara: serialize time period in seconds * carbonara: deprecate TimeSerieArchive * doc: add Grafana support @@ -614,7 +670,7 @@ * doc: split out running * Upgrade Grafana and its Gnocchi plugins * indexer: read from environment variable -* tests: read config file if GABBI_LIVE is set +* tests: read config file if GABBI\_LIVE is set * sqlalchemy: fix metric expunge * Miscellaneous minor docco corrections * Fix the typos in the Gnocchi @@ -639,58 +695,58 @@ * swift: make sure we retrieve full listing in containers * Switch to RTD theme for gnocchi.xyz * tox: create a target for each indexer -* Allow the volume display_name field to be null +* Allow the volume display\_name field to be null * storage/carbonara: simplify tooz locking * tests: block when acquiring processing lock -* storage: fix expunge_metric +* storage: fix expunge\_metric * Resolving use of deprecated dispatcher configuration * devstack: add gnocchi-statsd * statsd have some required configuration options * tox: exclude .eggs in flake8 * influxdb: avoid running first query if unnecessary -* Correlate correctly the needed_overlap +* Correlate correctly the needed\_overlap * Adds some docs about aggregation across metrics -* Checks percent_of_overlap when one boundary is set +* Checks percent\_of\_overlap when one boundary is set * Adds aggregation across metrics tests * carbonara: move aggregated() to AggregatedTimeSerie * storage: make exception inherits from StorageError * storage: retry to delete metric on failure * ceph: delete unaggregated timeserie when deleting metric -* tests: improve fake Swift client delete_container +* tests: improve fake Swift client delete\_container * Remove keystonemiddleware workaround * Make the wheel universal -* fix 
expunge_metrics method +* fix expunge\_metrics method * remove duplicate test * statsd: fix flush() scheduling * retrieve resource with metric only when needed * only get details when required -* fix pecan _lookup usage -* allow image_ref equals none when creating resource +* fix pecan \_lookup usage +* allow image\_ref equals none when creating resource * MySQL: fix testing with MySQL >= 5.7.9 -* change DB, allow image_ref to be null +* change DB, allow image\_ref to be null * fix error in alembic when upgrade postgresql * Split requirements in smaller part * typos in rest.j2 * increase number of wsgi threads * clean up integration test urls -* carbonara: add a __repr__ for AggregatedTimeSerie +* carbonara: add a \_\_repr\_\_ for AggregatedTimeSerie * carbonara: implement an integer sampling attribute * carbonara: make offset conversion consistent -* archive_policy: enforce types -* _carbonara: dedicated methods to store raw timeserie +* archive\_policy: enforce types +* \_carbonara: dedicated methods to store raw timeserie * cli: allow to upgrade in 2 passes * storage: support storage upgrade * Rename dbsync to upgrade * Add missing PrettyTable dependency * ceph: fix computation of read offset * Ensure file basepath exists -* Use oslo_config new type PortOpt for port options -* carbonara: optimize _first_block_timestamp +* Use oslo\_config new type PortOpt for port options +* carbonara: optimize \_first\_block\_timestamp * Add host and port config opts to statsd * copyright Openstack should be OpenStack * set indexer url as required * Remove unused logging -* tests: move storage/test_carbonara to storage +* tests: move storage/test\_carbonara to storage * influxdb: skip Carbonara specific test * influxdb: update to 0.9.4.2 * Make swift timeout configurable @@ -707,9 +763,9 @@ * rest: remove outdated comments * storage,rest: allow to retrieve one granularity only -* archive_policy: make sure points/granularity > 0 -* archive_policy: disallow to have identical 
granularities -* rest: add_measures never raises MetricDoesNotExist +* archive\_policy: make sure points/granularity > 0 +* archive\_policy: disallow to have identical granularities +* rest: add\_measures never raises MetricDoesNotExist * rest: do not store empty measure set * Fix minimum Pandas required version * carbonara: handle empty update @@ -719,16 +775,16 @@ * docs: add some ceph driver notes * rest: deserialize directly with file descriptor * devstack: remove unused utf8 argument -* Use mod_wsgi for SWIFT +* Use mod\_wsgi for SWIFT * Add README file for Devstack plugin * Use config file for oslo-config-generator instead of generate-config-file.sh * Make sure that swift doesn't block gnocchi startup -* Use ListOpt for default_aggregation_methods option -* carbonara: replace sort() by sort_values() +* Use ListOpt for default\_aggregation\_methods option +* carbonara: replace sort() by sort\_values() * Use lighter validation to post measurements * docs: add some notes about tooz * ceph: Fix tooz coordinator connection leaks -* sqlalchemy: use retry_on_deadlock in update_resource() +* sqlalchemy: use retry\_on\_deadlock in update\_resource() * doc: improve the layout and gnocchi.xyz * indexer: fix exception text * Use a capital letter in the project name @@ -741,8 +797,8 @@ * carbonara: add more debug info on measures processing * match archive policy rule based on longest match * rest: use a fork friendly app with werkzeug -* add api controller for instance_disk and network_interface resources -* devstack: stop using USE_CONSTRAINTS and setup_package +* add api controller for instance\_disk and network\_interface resources +* devstack: stop using USE\_CONSTRAINTS and setup\_package * devstack: use gnocchiclient to create default archive policies * devstack: install gnocchiclient from pip * carbonara: optimize resampling @@ -755,8 +811,8 @@ * cli: fix reported values * Use the testr from os-testr env * rest: export overall status -* storage: remove index from 
measures_report() -* Create signature of measures_report in base class +* storage: remove index from measures\_report() +* Create signature of measures\_report in base class * Make metric deletion async * Mark InfluxDB driver as experimental * tox: fix too much test running in specific envs @@ -769,8 +825,8 @@ * devstack: install gnocchiclient * gate: use port 8041 * Remove useless code -* Ensure needed_overlap is a number -* tox: Allow to pass some OS_* variables +* Ensure needed\_overlap is a number +* tox: Allow to pass some OS\_\* variables 1.2.0 ----- @@ -791,8 +847,8 @@ * devstack: fix Keystone CORS configuration * devstack: umount grafana datasource plugins on unstack * Add support for Grafana installation -* Remove $GNOCCHI_USE_KEYSTONE and check that Keystone is enabled -* indexer: remove details on create_metric() +* Remove $GNOCCHI\_USE\_KEYSTONE and check that Keystone is enabled +* indexer: remove details on create\_metric() * carbonara: fix % of overlap when no point match * devstack: enable debug if asked * carbonara: add some logging @@ -807,14 +863,14 @@ * devstack: remove ceilometer from service list * Use new location of subunit2html * metricd: fix method name typo -* devstack: explicit path in run_process calls -* Turn on influxdb in gate_hook -* In test_rest run process_measures after a request +* devstack: explicit path in run\_process calls +* Turn on influxdb in gate\_hook +* In test\_rest run process\_measures after a request * Only exclude Alembic 0.8.1 * Create conf directory during install phase * archive policy rule: make them available on all metric creations -* rest: remove convert_metric() -* file: use _get_tempfile() in metric storage +* rest: remove convert\_metric() +* file: use \_get\_tempfile() in metric storage * file: store measures atomically * rest: implements /resource/generic/UUID/history * Exclude Alembic>=0.8.1 @@ -822,7 +878,7 @@ * rest: Pass the project name to middleware config * Rudimentary support for influxdb in 
devstack plugin
 * file: fix potential race condition in storing measure
-* storage: remove create_metric()
+* storage: remove create\_metric()
 * devstack: fix gnocchi url with aodh
 * indexer: raise an error if deleting a non-existent metric
 * Make pagination gabbi test no longer xfail
@@ -837,14 +893,14 @@
 * indexer: remove wrong FK catch
 * sqlalchemy: use DBReferenceError to generate the correct exception
 * storage/carbonara: add timestamp as measure suffix
-* archive_policy: validate agg methods values
+* archive\_policy: validate agg methods values
 * rest: fix archive policy controller
-* rest: return metrics for ..//metric
+* rest: return metrics for ..//metric
 * rest: Add links to /v1 endpoint
-* Implements list_resources limit/ordering
-* devstack: use $API_WORKERS to set the number of Apache WSGI workers
-* indexer: always eagerly load archive_policy
-* rest: directly pass metric to search_value()
+* Implements list\_resources limit/ordering
+* devstack: use $API\_WORKERS to set the number of Apache WSGI workers
+* indexer: always eagerly load archive\_policy
+* rest: directly pass metric to search\_value()
 * rest: validate timestamp in metric value search query
 * Drop downgrade field in alembic script.py.mako
 * Declare some options as secret
@@ -857,7 +913,7 @@
 * file: do not raise if dir is created in the meantime
 * Increase gabbi poll length in two tests
 * sqlalchemy: expunge objects before returning them
-* statsd: stop mocking indexer.get_metrics in tests
+* statsd: stop mocking indexer.get\_metrics in tests
 * Don't allow duplicate timestamps in carbonara series
 * Remove special configuration of heat plugin
 * metricd: allow to be killed by SIGTERM
@@ -874,16 +930,16 @@
 * Fix gnocchi-metricd start
 * Make devstack use Keystone API v3 for user/project creation
 * Update pytimeparse to 1.1.5
-* Fix abstract `process_measures` method signature
+* Fix abstract \`process\_measures\` method signature
 * Allow gnocchi API at /metric
-* Ensure location header urls account for script_name
+* Ensure location header urls account for script\_name
 * Don't reclone the repo we already did
 * Ensure live gabbi tests run in gate
 * carbonara: add a serializable mixin
-* archive_policy: add max_block_size property
+* archive\_policy: add max\_block\_size property
 * Use a more unique postgresql port
 * metricd: start several processes to process more metric
-* Make test_rest keystone cache work with keystonemiddleware 2.0
+* Make test\_rest keystone cache work with keystonemiddleware 2.0
 * utils: replace utcnow() by new oslo.utils version
 * Introduce gnocchi-metricd
 * Use indexer RDBMS as tooz coordinators
@@ -899,31 +955,31 @@
 * Stop using TZ unaware datetime and isotime()
 * Require pecan 0.9, gaining 405 and unicode fixes
 * Handle indexer connections more cleanly in tests
-* Use pg_ctl for initdb
+* Use pg\_ctl for initdb
 * Raise the PostgreSQL connections to 200 + Remove AvoidDictInterface
-* dispatcher:fix func of _match_metric
+* dispatcher:fix func of \_match\_metric
 * Switch to PyMySQL and enable MySQL on py34
 * indexer: enable MySQL schema migration test
 * Move SQL backends to requirements
 * Add missing License file
 * Add more tests for archive policy rule
-* Remove redundant metric tests from test_rest
+* Remove redundant metric tests from test\_rest
 * pass LANG into testing environment
 * Run functional tests using virtualenvs in devstack
 * Remove redundancy in ArchivePoliciesController tests
-* Change the type of flavor_id from int to string
+* Change the type of flavor\_id from int to string
 * Remove unreachable return statement
 * Add more gabbi test coverage for metrics
 * Ensure the indexer is disconnected in gabbi fixtures
 * tests: start/stop coord before using it
 * Sync requirements between Python and Python 3
-* Cover and correct unicode handling in mod_wsgi setup
+* Cover and correct unicode handling in mod\_wsgi setup
 * Install redis-server if needed
 * indexer: use binary UUIDType
 * sqlalchemy: use RDBMS check constraint where available
 * Add Alembic support
 * sqlalchemy: name FK constraints
-* sqlalchemy: resource_history.id is not nullable
+* sqlalchemy: resource\_history.id is not nullable
 * sqlalchemy: remove 'metric' from enum type
 * Create a project long description in README file
 * Add gabbi tests for the NamedMetricController
@@ -932,7 +988,7 @@
 * Remove usage of dict for gnocchi object
 * Generate configuration file in default tox target
 * Filter more swift metrics
-* Load archive_policy named resource metric
+* Load archive\_policy named resource metric
 * Add html rendering for measurements of aggregation
 * ceilometer: Use http session in dispatcher
 * devstack: Change default backend to file
@@ -948,13 +1004,13 @@
 * rest: add a test for measures with no mean agg method in AP
 * Fix resources rendering in web browsers
 * rest: fix access to metric for owned resources
-* Fix gnocchi-statsd can't start by flush_delay
+* Fix gnocchi-statsd can't start by flush\_delay
 * doc: fix typo about duration/lifespan
 * doc: create metrics when creating a resource
 * Update doc about storage/indexer drivers
-* Fix value for default_archive_policy
+* Fix value for default\_archive\_policy
 * Include Ceilometer dispatcher YAML file in tarball
-* Adds missed history param into IndexerDriver list_resource method
+* Adds missed history param into IndexerDriver list\_resource method
 * Return detailed metric info on create
 * Use Archive Policy Rule in create metric api
 * Fix typo in policy name
@@ -979,7 +1035,7 @@
 * Don't touch the sqla orm object
 * gabbi: merge metric and metrics test files
 * rest: pass the whole metric to aggregation
-* rest: do not use expect_error in tests
+* rest: do not use expect\_error in tests
 * rest: enhance metric retrieval in controllers
 * Adding Gabbi Tests to Metric(s) API
 * indexer: return resource objects rather than dict
@@ -987,7 +1043,7 @@
 * rest: allow to search for metric value
 * Move common resource attributes into a mixin
 * Add gabbi tests to cover the ArchivePoliciesController
-* Adjust ResourcesController to provide get_all()
+* Adjust ResourcesController to provide get\_all()
 * Update resource.yaml to reflect recent fixes
 * rest: Add list of resources types URL on /v1/resource
 * Only query metrics by uuid if we have uuids
@@ -997,7 +1053,7 @@
 * Don't use static host string in dispatcher test
 * Fix gnocchi repository url
 * Update .gitreview for project rename
-* storage: pass query rather than predicate in value_search
+* storage: pass query rather than predicate in value\_search
 * rest: add more operators in complex queries
 * swift: retry if content length is 0
 * tests: drop testscenarios usage for storage
@@ -1025,10 +1081,10 @@
 * Add ipmi resource
 * Add identity resource
 * Add network resource
-* Add ceph_account resource
+* Add ceph\_account resource
 * refactor indexer.sqlalchemy
 * Remove useless cachetools
-* Use six.text_type() to convert exceptions
+* Use six.text\_type() to convert exceptions
 * test: Use gabbi tests on live gnocchi-api
 * Add AggregationDoesNotExist exception
 * Don't use MultiStrOpt
@@ -1040,7 +1096,7 @@
 * rest: uses search query to filter on metric aggregation
 * rest: change metric aggregation URL
 * test: remove useless variable
-* test: move a test_capabilities out of scenarios
+* test: move a test\_capabilities out of scenarios
 * rest: remove PATCH schemas
 * storage: add support for value research
 * devstack: fix creation of the swift reseller admin
@@ -1057,8 +1113,8 @@
 * rest: run the tests with no auth
 * rest: replace list+filter by search mechanism
 * Add a chart for each metric when viewing a resource in a browser
-* Add HTML rendering for "/v1/resource/generic/" URL
-* indexer: allow created_by_{user,project}_id to be null
+* Add HTML rendering for "/v1/resource/generic/" URL
+* indexer: allow created\_by\_{user,project}\_id to be null
 * gabbi: add "" around some strings
 * Fix uuid format of post resource
 * carbonara: percentile must be a float
@@ -1069,7 +1125,7 @@
 * Add a py27-cover tox target
 * Move oslotest from requirements to test-requirements
 * service: validate default values
-* archive_policy: have a sane default list of agg methods
+* archive\_policy: have a sane default list of agg methods
 * Base infrastructure to support gabbi tests
 * rest: implement complex query for resource listing
 * Put dbname in the path section of postgresql URL
@@ -1077,7 +1133,7 @@
 * Move and fix statsd option
 * Fix statsd documentation file extension
 * Add volume resource
-* devstack: fix plugin with rename of get_or_add_user_role
+* devstack: fix plugin with rename of get\_or\_add\_user\_role
 * storage: pass the archive policy in various methods
 * Initial statsd daemon implementation
 * tests: add a missing random metric name
@@ -1106,7 +1162,7 @@
 * Use --no-defaults with mysql database creation
 * carbonara: use dropna() rather than ~isnull()
 * carbonara: clean up useless code
-* carbonara: add serialize_to_file()
+* carbonara: add serialize\_to\_file()
 * carbonara: do not iterate over values
 * carbonara: set all values at once
 * Add Carbonara CLI tools
@@ -1121,7 +1177,7 @@
 * rest: allow to use relative timestamps
 * Remove useless requirements
 * Devstack plugin for Gnocchi
-* Fix ResourceSchema to allow None user_id and project_id
+* Fix ResourceSchema to allow None user\_id and project\_id
 * Improve database handling in tests
 * Adds missing argument in function call
 * Switch to file:// coordination by default
@@ -1132,18 +1188,18 @@
 * duplicate UUID method
 * Switch to oslo.log
 * rest: allow to render measures in HTML
-* indexer: use ArchivePolicy in create_archive_policy()
+* indexer: use ArchivePolicy in create\_archive\_policy()
 * Allows cross-metric-aggreg. with missing points
 * Use oslo.serialization.jsonutils to serialize JSON
 * Upgrade to hacking 0.10
-* Store aggregation_methods in ArchivePolicy
+* Store aggregation\_methods in ArchivePolicy
 * Fix ceilometer dispacher to conform to the new name
 * tests: fix AP retrieval
-* storage: change create_metric() to accept ArchivePolicy as argument
-* Create an archive_policy module
+* storage: change create\_metric() to accept ArchivePolicy as argument
+* Create an archive\_policy module
 * indexer: only load metrics on explicit demand
 * Check RBAC policy on aggregated metric access
-* rest: do not retrieve the metric twice in get_all()
+* rest: do not retrieve the metric twice in get\_all()
 * Allow to retrieve several metrics at once from the indexer
 * rest: fix typo in metric policy
 * Disallow linking resources and metrics from different users
@@ -1156,17 +1212,17 @@
 * policy: fix creator rule
 * rest: convert UUID in token to correct format
 * rest: enhance enforce context
-* Fix executor using in _carbonara
+* Fix executor using in \_carbonara
 * rest: allow timespans to be integers
 * rest: return 404 on not found instead of 400
-* _carbonara: fix futures import
+* \_carbonara: fix futures import
 * Update futures from global requirements
 * storage: switch default driver to file
 * opts: add missing oslo-incubator options
 * rest: check that the Content-Type is JSON
 * indexer: simplify resource ↔ metric relationship
 * rest: allow to delete archive policies
-* indexer: add delete_archive_policy
+* indexer: add delete\_archive\_policy
 * indexer, rest: simplify metric model
 * rest: allow to list metrics via GET /v1/metric
 * Remove sphinx from Python 3 requirements
@@ -1189,7 +1245,7 @@
 * Fix doc8 errors
 * extension for moving aggregates
 * indexer: fix typo in docstring
-* rest, indexer: allow the {user,project}_id to be empty
+* rest, indexer: allow the {user,project}\_id to be empty
 * rest: implement policy check for create entity
 * rest: add policy support for get entity
 * rest: add policy for delete entity
@@ -1197,7 +1253,7 @@
 * Rename Entity to Metric
 * ceilometer: remove useless variables from dispatcher
 * Add keystone support to ceilometer dispatcher
-* rest: add and expose back_window attribute of archive policies
+* rest: add and expose back\_window attribute of archive policies
 * Allows to filter out the gnocchi generated samples
 * Add a gnocchi dispatcher for ceilometer
 * Allow to run any version of Pecan, except 0.8
@@ -1205,31 +1261,31 @@
 * rest: remove Carbonara exception leak
 * rest: add policy for get measures
 * rest: implement policy checks for post measures
-* Switch posix_ipc dependency to sysv_ipc
+* Switch posix\_ipc dependency to sysv\_ipc
 * Import policy from oslo-incubator
 * Update oslo-incubator
 * Fix and test the NullStorage driver
 * storage: merge coordination and carbonara
 * doc: fix typo in resource type type
-* Add get_entity method in indexer
+* Add get\_entity method in indexer
 * rest: allow to have infinite retention in policies
 * Minor readability improvements to carbonara
-* storage: do not include to_timestamp in the range
+* storage: do not include to\_timestamp in the range
 * Remove custom 204 response code setting
 * rest: validate archive policies definitions
 * carbonara: fix the default fetch() behavior
 * carbonara: fix archive back window
 * Remove assertEqual when request method has params for it
 * Move oslosphinx to requirements
-* storage: multi-thread add_measure in Carbonara based drivers
+* storage: multi-thread add\_measure in Carbonara based drivers
 * storage: factorize carbonara based drivers
-* Add '?details=true' to GET '/v1/entity/' route
+* Add '?details=true' to GET '/v1/entity/' route
 * postgresql: redirect logs to /dev/null
 * Add 'GET' routes to retrieve Entity informations
 * Import documentation
-* Use 'pg_ctl' instead of 'postgres' utility to start and stop database
+* Use 'pg\_ctl' instead of 'postgres' utility to start and stop database
 * Allow to filter resources on NULL values
-* Empty gnocchi/tests/__init__.py
+* Empty gnocchi/tests/\_\_init\_\_.py
 * Ensure Location header are string
 * Limit Pecan version, fix Pandas usage
 * rest: granularity is also a Timespan
@@ -1239,30 +1295,30 @@
 * Set a minimal version for oslo.utils
 * Use basepython instead of baseversion
 * Fix python3 gate issue
-* Keep gnocchi/__init__.py empty
+* Keep gnocchi/\_\_init\_\_.py empty
 * Fix typo in gnocchi wsgi script
-* Add server_group to instance resource
+* Add server\_group to instance resource
 * Don't try to load empty middleware
-* rest: enable Keystone auth_token middleware by default
+* rest: enable Keystone auth\_token middleware by default
 * Add config generator support
 * Update oslo-incubator
 * rest: add support for create/get archive policies
 * indexer: remove entities from Entity
 * tests: fix race condition on archive policies
-* sqlalchemy: remove with_for_update()
+* sqlalchemy: remove with\_for\_update()
 * indexer: store archive policies
 * Fix pep8 errors
-* Switch user_id and project_id to be UUID
+* Switch user\_id and project\_id to be UUID
 * Update to latest oslo-incubator
 * Allows to append entities to a resource
 * Ensures ipc semaphore are released in our tests
-* Add swift_account resource
+* Add swift\_account resource
 * Fix and re-renable Python 3 testing
 * Stop using oslo-incubator lockutils
 * storage: switch to ipc:// coordinator by default
 * Remove outdated option
 * Provide the gnocchi wsgi script
-* rest: store archive_policy and export entities as resources
+* rest: store archive\_policy and export entities as resources
 * Connect to database before upgrading it
 * Return 404 if patched resource doesn't exists
 * Stop using standard NotImplementedError to skip tests
@@ -1300,7 +1356,7 @@
 * Fix up examples in README.rst
 * Fix typo
 * Remove debug code
-* rest: factorize resource_class
+* rest: factorize resource\_class
 * rest: allow to PATCH instance attributes
 * tests: factorize patchable attribute testing
 * Remove architecture attribute from instance
@@ -1309,30 +1365,30 @@
 * tests: run on MySQL too
 * indexer: allow to update any attribute
 * indexer, rest: allow to filter on any resource attribute
-* rest: allow to use project_id to filter projects
+* rest: allow to use project\_id to filter projects
 * Fixed ReST syntax errors
 * Adds install instructions to Gnocchi README
 * Imposed string length limit in sqlalchemy.py
-* indexer: allow to filter by project_id
+* indexer: allow to filter by project\_id
 * sqlalchemy, tests: fix table creation race condition
-* sqlalchemy: fix ended_before
-* rest: allow to filter via user_id
-* indexer: allow to filter by user_id
+* sqlalchemy: fix ended\_before
+* rest: allow to filter via user\_id
+* indexer: allow to filter by user\_id
 * Limit pandas version
 * Ignore pbr generated files
-* indexer, rest: add support for ended_before
+* indexer, rest: add support for ended\_before
 * Port to Python 3
 * Add gnocchi-dbsync command
 * Rename integ to PostrgeSQL
 * Add .gitreview file, do not run integ tests by default
-* rest: allow to pass started_after in listing requests
-* indexer: allow to retrieve resources with started_after
+* rest: allow to pass started\_after in listing requests
+* indexer: allow to retrieve resources with started\_after
 * rest: add resource listing
-* indexer: implements list_resources()
+* indexer: implements list\_resources()
 * rest: properly handle already existing resource
 * rest: support instance CRUD
 * Store resource type in SQL
-* Factorize resource_type
+* Factorize resource\_type
 * Allow to retrieve instances
 * Factorize more code
 * Rename 'resource' to 'generic'
@@ -1341,9 +1397,9 @@
 * Allow to create instances in the indexer
 * Add support for Unix timestamps in API
 * Fix timestamp issues + use transactions
-* Handle patching of ended_at
+* Handle patching of ended\_at
 * Fix voluptuous validation/conversion
-* indexer: allow to update resource ended_at
+* indexer: allow to update resource ended\_at
 * Handle patch with non existent entity
 * Correctly handle PATCH on non-existent resource
 * Use PATCH to modify entities
@@ -1352,7 +1408,7 @@
 * Return user/project in resources
 * Store start/end timestamps in resources
 * Actually allow to update entities only
-* Add user_id and project_id to resources
+* Add user\_id and project\_id to resources
 * Allow to GET resources
 * Allow to update resources
 * Fix indent
diff -Nru gnocchi-3.1.2/debian/changelog gnocchi-3.1.9/debian/changelog
--- gnocchi-3.1.2/debian/changelog 2017-06-16 12:44:38.000000000 +0000
+++ gnocchi-3.1.9/debian/changelog 2017-07-31 15:01:33.000000000 +0000
@@ -1,8 +1,14 @@
-gnocchi (3.1.2-0ubuntu7~cloud0) xenial-pike; urgency=medium
+gnocchi (3.1.9-0ubuntu1~cloud0) xenial-pike; urgency=medium
 
   * New update for the Ubuntu Cloud Archive.
 
- -- Openstack Ubuntu Testing Bot  Fri, 16 Jun 2017 12:44:38 +0000
+ -- Openstack Ubuntu Testing Bot  Mon, 31 Jul 2017 15:01:33 +0000
+
+gnocchi (3.1.9-0ubuntu1) artful; urgency=medium
+
+  * New upstream release.
+
+ -- James Page  Mon, 31 Jul 2017 14:48:49 +0100
 
 gnocchi (3.1.2-0ubuntu7) artful; urgency=medium
diff -Nru gnocchi-3.1.2/devstack/plugin.sh gnocchi-3.1.9/devstack/plugin.sh
--- gnocchi-3.1.2/devstack/plugin.sh 2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/devstack/plugin.sh 2017-07-24 16:25:03.000000000 +0000
@@ -227,6 +227,7 @@
 
     # Configure logging
     iniset $GNOCCHI_CONF DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL"
+    iniset $GNOCCHI_CONF metricd metric_processing_delay "$GNOCCHI_METRICD_PROCESSING_DELAY"
 
     # Set up logging
     if [ "$SYSLOG" != "False" ]; then
diff -Nru gnocchi-3.1.2/devstack/settings gnocchi-3.1.9/devstack/settings
--- gnocchi-3.1.2/devstack/settings 2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/devstack/settings 2017-07-24 16:25:03.000000000 +0000
@@ -11,6 +11,7 @@
 GNOCCHI_WSGI_DIR=${GNOCCHI_WSGI_DIR:-/var/www/gnocchi}
 GNOCCHI_DATA_DIR=${GNOCCHI_DATA_DIR:-${DATA_DIR}/gnocchi}
 GNOCCHI_COORDINATOR_URL=${GNOCCHI_COORDINATOR_URL:-redis://localhost:6379}
+GNOCCHI_METRICD_PROCESSING_DELAY=${GNOCCHI_METRICD_PROCESSING_DELAY:-5}
 
 # GNOCCHI_DEPLOY defines how Gnocchi is deployed, allowed values:
 # - mod_wsgi : Run Gnocchi under Apache HTTPd mod_wsgi
diff -Nru gnocchi-3.1.2/etc/gnocchi/gnocchi.conf gnocchi-3.1.9/etc/gnocchi/gnocchi.conf
--- gnocchi-3.1.2/etc/gnocchi/gnocchi.conf 1970-01-01 00:00:00.000000000 +0000
+++ gnocchi-3.1.9/etc/gnocchi/gnocchi.conf 2017-01-11 17:22:04.000000000 +0000
@@ -0,0 +1,763 @@
+[DEFAULT]
+
+#
+# From cotyledon
+#
+
+# Enables or disables logging values of all registered options when starting a
+# service (at DEBUG level). (boolean value)
+# Note: This option can be changed without restarting.
+#log_options = true
+
+# Specify a timeout after which a gracefully shutdown server will exit. Zero
+# value means endless wait. (integer value)
+# Note: This option can be changed without restarting.
+#graceful_shutdown_timeout = 60
+
+#
+# From oslo.log
+#
+
+# If set to true, the logging level will be set to DEBUG instead of the default
+# INFO level. (boolean value)
+# Note: This option can be changed without restarting.
+#debug = false
+
+# DEPRECATED: If set to false, the logging level will be set to WARNING instead
+# of the default INFO level. (boolean value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#verbose = true
+
+# The name of a logging configuration file. This file is appended to any
+# existing logging configuration files. For details about logging configuration
+# files, see the Python logging module documentation. Note that when logging
+# configuration files are used then all logging configuration is set in the
+# configuration file and other logging configuration options are ignored (for
+# example, logging_context_format_string). (string value)
+# Note: This option can be changed without restarting.
+# Deprecated group/name - [DEFAULT]/log_config
+#log_config_append =
+
+# Defines the format string for %%(asctime)s in log records. Default:
+# %(default)s . This option is ignored if log_config_append is set. (string
+# value)
+#log_date_format = %Y-%m-%d %H:%M:%S
+
+# (Optional) Name of log file to send logging output to. If no default is set,
+# logging will go to stderr as defined by use_stderr. This option is ignored if
+# log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logfile
+#log_file =
+
+# (Optional) The base directory used for relative log_file paths. This option
+# is ignored if log_config_append is set. (string value)
+# Deprecated group/name - [DEFAULT]/logdir
+#log_dir =
+
+# Uses logging handler designed to watch file system. When log file is moved or
+# removed this handler will open a new log file with specified path
+# instantaneously. It makes sense only if log_file option is specified and
+# Linux platform is used. This option is ignored if log_config_append is set.
+# (boolean value)
+#watch_log_file = false
+
+# Use syslog for logging. Existing syslog format is DEPRECATED and will be
+# changed later to honor RFC5424. This option is ignored if log_config_append
+# is set. (boolean value)
+#use_syslog = false
+
+# Syslog facility to receive log lines. This option is ignored if
+# log_config_append is set. (string value)
+#syslog_log_facility = LOG_USER
+
+# Log output to standard error. This option is ignored if log_config_append is
+# set. (boolean value)
+#use_stderr = false
+
+# Format string to use for log messages with context. (string value)
+#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+# Format string to use for log messages when context is undefined. (string
+# value)
+#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
+
+# Additional data to append to log message when logging level for the message
+# is DEBUG. (string value)
+#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
+
+# Prefix each line of exception output with this format. (string value)
+#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
+
+# Defines the format string for %(user_identity)s that is used in
+# logging_context_format_string. (string value)
+#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
+
+# List of package logging levels in logger=LEVEL pairs. This option is ignored
+# if log_config_append is set. (list value)
+#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
+
+# Enables or disables publication of error events. (boolean value)
+#publish_errors = false
+
+# The format for an instance that is passed with the log message. (string
+# value)
+#instance_format = "[instance: %(uuid)s] "
+
+# The format for an instance UUID that is passed with the log message. (string
+# value)
+#instance_uuid_format = "[instance: %(uuid)s] "
+
+# Interval, number of seconds, of log rate limiting. (integer value)
+#rate_limit_interval = 0
+
+# Maximum number of logged messages per rate_limit_interval. (integer value)
+#rate_limit_burst = 0
+
+# Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG
+# or empty string. Logs with level greater or equal to rate_limit_except_level
+# are not filtered. An empty string means that all levels are filtered. (string
+# value)
+#rate_limit_except_level = CRITICAL
+
+# Enables or disables fatal status of deprecations. (boolean value)
+#fatal_deprecations = false
+
+
+[api]
+
+#
+# From gnocchi
+#
+
+# Path to API Paste configuration. (string value)
+#paste_config = /Users/jd/Source/gnocchi/gnocchi/rest/api-paste.ini
+
+# Authentication mode to use. (string value)
+# Allowed values: keystone, noauth
+#auth_mode = noauth
+
+# The maximum number of items returned in a single response from a collection
+# resource (integer value)
+#max_limit = 1000
+
+
+[archive_policy]
+
+#
+# From gnocchi
+#
+
+# Default aggregation methods to use in created archive policies (list value)
+#default_aggregation_methods = mean,min,max,sum,std,count
+
+
+[cors]
+
+#
+# From oslo.middleware.cors
+#
+
+# Indicate whether this resource may be shared with the domain received in the
+# requests "origin" header. Format: "://[:]", no trailing
+# slash. Example: https://horizon.example.com (list value)
+#allowed_origin =
+
+# Indicate that the actual request can include user credentials (boolean value)
+#allow_credentials = true
+
+# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
+# Headers. (list value)
+#expose_headers =
+
+# Maximum cache age of CORS preflight requests. (integer value)
+#max_age = 3600
+
+# Indicate which methods can be used during the actual request. (list value)
+#allow_methods = OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PATCH
+
+# Indicate which header field names may be used during the actual request.
+# (list value)
+#allow_headers = X-Auth-Token,X-Subject-Token,X-User-Id,X-Domain-Id,X-Project-Id,X-Roles
+
+
+[cors.subdomain]
+
+#
+# From oslo.middleware.cors
+#
+
+# Indicate whether this resource may be shared with the domain received in the
+# requests "origin" header. Format: "://[:]", no trailing
+# slash. Example: https://horizon.example.com (list value)
+#allowed_origin =
+
+# Indicate that the actual request can include user credentials (boolean value)
+#allow_credentials = true
+
+# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
+# Headers. (list value)
+#expose_headers =
+
+# Maximum cache age of CORS preflight requests. (integer value)
+#max_age = 3600
+
+# Indicate which methods can be used during the actual request. (list value)
+#allow_methods = OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PATCH
+
+# Indicate which header field names may be used during the actual request.
+# (list value)
+#allow_headers = X-Auth-Token,X-Subject-Token,X-User-Id,X-Domain-Id,X-Project-Id,X-Roles
+
+
+[database]
+
+#
+# From oslo.db
+#
+
+# DEPRECATED: The file name to use with SQLite. (string value)
+# Deprecated group/name - [DEFAULT]/sqlite_db
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+# Reason: Should use config option connection or slave_connection to connect
+# the database.
+#sqlite_db = oslo.sqlite
+
+# If True, SQLite uses synchronous mode. (boolean value)
+# Deprecated group/name - [DEFAULT]/sqlite_synchronous
+#sqlite_synchronous = true
+
+# The back end to use for the database. (string value)
+# Deprecated group/name - [DEFAULT]/db_backend
+#backend = sqlalchemy
+
+# The SQLAlchemy connection string to use to connect to the database. (string
+# value)
+# Deprecated group/name - [DEFAULT]/sql_connection
+# Deprecated group/name - [DATABASE]/sql_connection
+# Deprecated group/name - [sql]/connection
+#connection =
+
+# The SQLAlchemy connection string to use to connect to the slave database.
+# (string value)
+#slave_connection =
+
+# The SQL mode to be used for MySQL sessions. This option, including the
+# default, overrides any server-set SQL mode. To use whatever SQL mode is set
+# by the server configuration, set this to no value. Example: mysql_sql_mode=
+# (string value)
+#mysql_sql_mode = TRADITIONAL
+
+# Timeout before idle SQL connections are reaped. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_idle_timeout
+# Deprecated group/name - [DATABASE]/sql_idle_timeout
+# Deprecated group/name - [sql]/idle_timeout
+#idle_timeout = 3600
+
+# Minimum number of SQL connections to keep open in a pool. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_min_pool_size
+# Deprecated group/name - [DATABASE]/sql_min_pool_size
+#min_pool_size = 1
+
+# Maximum number of SQL connections to keep open in a pool. Setting a value of
+# 0 indicates no limit. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_pool_size
+# Deprecated group/name - [DATABASE]/sql_max_pool_size
+#max_pool_size = 5
+
+# Maximum number of database connection retries during startup. Set to -1 to
+# specify an infinite retry count. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_retries
+# Deprecated group/name - [DATABASE]/sql_max_retries
+#max_retries = 10
+
+# Interval between retries of opening a SQL connection. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_retry_interval
+# Deprecated group/name - [DATABASE]/reconnect_interval
+#retry_interval = 10
+
+# If set, use this value for max_overflow with SQLAlchemy. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_overflow
+# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
+#max_overflow = 50
+
+# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
+# value)
+# Minimum value: 0
+# Maximum value: 100
+# Deprecated group/name - [DEFAULT]/sql_connection_debug
+#connection_debug = 0
+
+# Add Python stack traces to SQL as comment strings. (boolean value)
+# Deprecated group/name - [DEFAULT]/sql_connection_trace
+#connection_trace = false
+
+# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
+# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
+#pool_timeout =
+
+# Enable the experimental use of database reconnect on connection lost.
+# (boolean value)
+#use_db_reconnect = false
+
+# Seconds between retries of a database transaction. (integer value)
+#db_retry_interval = 1
+
+# If True, increases the interval between retries of a database operation up to
+# db_max_retry_interval. (boolean value)
+#db_inc_retry_interval = true
+
+# If db_inc_retry_interval is set, the maximum seconds between retries of a
+# database operation. (integer value)
+#db_max_retry_interval = 10
+
+# Maximum retries in case of connection error or deadlock error before error is
+# raised. Set to -1 to specify an infinite retry count. (integer value)
+#db_max_retries = 20
+
+
+[healthcheck]
+
+#
+# From oslo.middleware.healthcheck
+#
+
+# DEPRECATED: The path to respond to healtcheck requests on. (string value)
+# This option is deprecated for removal.
+# Its value may be silently ignored in the future.
+#path = /healthcheck
+
+# Show more detailed information as part of the response (boolean value)
+#detailed = false
+
+# Additional backends that can perform health checks and report that
+# information back as part of a request. (list value)
+#backends =
+
+# Check the presence of a file to determine if an application is running on a
+# port. Used by DisableByFileHealthcheck plugin. (string value)
+#disable_by_file_path =
+
+# Check the presence of a file based on a port to determine if an application
+# is running on a port. Expects a "port:path" list of strings. Used by
+# DisableByFilesPortsHealthcheck plugin. (list value)
+#disable_by_file_paths =
+
+
+[incoming]
+
+#
+# From gnocchi
+#
+
+# Ceph pool name to use. (string value)
+#ceph_pool = ${storage.ceph_pool}
+
+# Ceph username (ie: admin without "client." prefix). (string value)
+#ceph_username = ${storage.ceph_username}
+
+# Ceph key (string value)
+#ceph_secret = ${storage.ceph_secret}
+
+# Ceph keyring path. (string value)
+#ceph_keyring = ${storage.ceph_keyring}
+
+# Ceph configuration file. (string value)
+#ceph_conffile = ${storage.ceph_conffile}
+
+# Path used to store gnocchi data files. (string value)
+#file_basepath = ${storage.file_basepath}
+
+# Swift authentication version to user. (string value)
+#swift_auth_version = ${storage.swift_auth_version}
+
+# Swift pre-auth URL. (string value)
+#swift_preauthurl = ${storage.swift_preauthurl}
+
+# Swift auth URL. (string value)
+#swift_authurl = ${storage.swift_authurl}
+
+# Swift token to user to authenticate. (string value)
+#swift_preauthtoken = ${storage.swift_preauthtoken}
+
+# Swift user. (string value)
+#swift_user = ${storage.swift_user}
+
+# Swift user domain name. (string value)
+#swift_user_domain_name = ${storage.swift_user_domain_name}
+
+# Swift key/password. (string value)
+#swift_key = ${storage.swift_key}
+
+# Swift tenant name, only used in v2/v3 auth. (string value)
+# Deprecated group/name - [incoming]/swift_tenant_name
+#swift_project_name = ${storage.swift_project_name}
+
+# Swift project domain name. (string value)
+#swift_project_domain_name = ${storage.swift_project_domain_name}
+
+# Prefix to namespace metric containers. (string value)
+#swift_container_prefix = ${storage.swift_container_prefix}
+
+# Endpoint type to connect to Swift (string value)
+#swift_endpoint_type = ${storage.swift_endpoint_type}
+
+# Connection timeout in seconds. (integer value)
+# Minimum value: 0
+#swift_timeout = ${storage.swift_timeout}
+
+# S3 endpoint URL (string value)
+#s3_endpoint_url = ${storage.s3_endpoint_url}
+
+# S3 region name (string value)
+#s3_region_name = ${storage.s3_region_name}
+
+# S3 access key id (string value)
+#s3_access_key_id = ${storage.s3_access_key_id}
+
+# S3 secret access key (string value)
+#s3_secret_access_key = ${storage.s3_secret_access_key}
+
+# Prefix to namespace metric bucket. (string value)
+#s3_bucket_prefix = ${storage.s3_bucket_prefix}
+
+
+[indexer]
+
+#
+# From gnocchi
+#
+
+# Indexer driver to use (string value)
+#url =
+
+
+[keystone_authtoken]
+
+#
+# From keystonemiddleware.auth_token
+#
+
+# Complete "public" Identity API endpoint. This endpoint should not be an
+# "admin" endpoint, as it should be accessible by all end users.
+# Unauthenticated clients are redirected to this endpoint to authenticate.
+# Although this endpoint should ideally be unversioned, client support in the
+# wild varies. If you're using a versioned v2 endpoint here, then this should
+# *not* be the same endpoint the service user utilizes for validating tokens,
+# because normal end users may not be able to reach that endpoint. (string
+# value)
+#auth_uri =
+
+# API version of the admin Identity API endpoint. (string value)
+#auth_version =
+
+# Do not handle authorization requests within the middleware, but delegate the
+# authorization decision to downstream WSGI components. (boolean value)
+#delay_auth_decision = false
+
+# Request timeout value for communicating with Identity API server. (integer
+# value)
+#http_connect_timeout =
+
+# How many times are we trying to reconnect when communicating with Identity
+# API Server. (integer value)
+#http_request_max_retries = 3
+
+# Request environment key where the Swift cache object is stored. When
+# auth_token middleware is deployed with a Swift cache, use this option to have
+# the middleware share a caching backend with swift. Otherwise, use the
+# ``memcached_servers`` option instead. (string value)
+#cache =
+
+# Required if identity server requires client certificate (string value)
+#certfile =
+
+# Required if identity server requires client certificate (string value)
+#keyfile =
+
+# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
+# Defaults to system CAs. (string value)
+#cafile =
+
+# Verify HTTPS connections. (boolean value)
+#insecure = false
+
+# The region in which the identity server can be found. (string value)
+#region_name =
+
+# Directory used to cache files related to PKI tokens. (string value)
+#signing_dir =
+
+# Optionally specify a list of memcached server(s) to use for caching. If left
+# undefined, tokens will instead be cached in-process. (list value)
+# Deprecated group/name - [keystone_authtoken]/memcache_servers
+#memcached_servers =
+
+# In order to prevent excessive effort spent validating tokens, the middleware
+# caches previously-seen tokens for a configurable duration (in seconds). Set
+# to -1 to disable caching completely. (integer value)
+#token_cache_time = 300
+
+# Determines the frequency at which the list of revoked tokens is retrieved
+# from the Identity service (in seconds). A high number of revocation events
+# combined with a low cache duration may significantly reduce performance. Only
+# valid for PKI tokens. (integer value)
+#revocation_cache_time = 10
+
+# (Optional) If defined, indicate whether token data should be authenticated or
+# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
+# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
+# cache. If the value is not one of these options or empty, auth_token will
+# raise an exception on initialization. (string value)
+# Allowed values: None, MAC, ENCRYPT
+#memcache_security_strategy = None
+
+# (Optional, mandatory if memcache_security_strategy is defined) This string is
+# used for key derivation.
(string value) +#memcache_secret_key = + +# (Optional) Number of seconds memcached server is considered dead before it is +# tried again. (integer value) +#memcache_pool_dead_retry = 300 + +# (Optional) Maximum total number of open connections to every memcached +# server. (integer value) +#memcache_pool_maxsize = 10 + +# (Optional) Socket timeout in seconds for communicating with a memcached +# server. (integer value) +#memcache_pool_socket_timeout = 3 + +# (Optional) Number of seconds a connection to memcached is held unused in the +# pool before it is closed. (integer value) +#memcache_pool_unused_timeout = 60 + +# (Optional) Number of seconds that an operation will wait to get a memcached +# client connection from the pool. (integer value) +#memcache_pool_conn_get_timeout = 10 + +# (Optional) Use the advanced (eventlet safe) memcached client pool. The +# advanced pool will only work under python 2.x. (boolean value) +#memcache_use_advanced_pool = false + +# (Optional) Indicate whether to set the X-Service-Catalog header. If False, +# middleware will not ask for service catalog on token validation and will not +# set the X-Service-Catalog header. (boolean value) +#include_service_catalog = true + +# Used to control the use and type of token binding. Can be set to: "disabled" +# to not check token binding. "permissive" (default) to validate binding +# information if the bind type is of a form known to the server and ignore it +# if not. "strict" like "permissive" but if the bind type is unknown the token +# will be rejected. "required" any form of token binding is needed to be +# allowed. Finally the name of a binding method that must be present in tokens. +# (string value) +#enforce_token_bind = permissive + +# If true, the revocation list will be checked for cached tokens. This requires +# that PKI tokens are configured on the identity server. (boolean value) +#check_revocations_for_cached = false + +# Hash algorithms to use for hashing PKI tokens. 
This may be a single algorithm +# or multiple. The algorithms are those supported by Python standard +# hashlib.new(). The hashes will be tried in the order given, so put the +# preferred one first for performance. The result of the first hash will be +# stored in the cache. This will typically be set to multiple values only while +# migrating from a less secure algorithm to a more secure one. Once all the old +# tokens are expired this option should be set to a single value for better +# performance. (list value) +#hash_algorithms = md5 + +# Authentication type to load (string value) +# Deprecated group/name - [keystone_authtoken]/auth_plugin +#auth_type = + +# Config Section from which to load plugin specific options (string value) +#auth_section = + + +[metricd] + +# +# From gnocchi +# + +# Number of workers for Gnocchi metric daemons. By default the available number +# of CPU is used. (integer value) +# Minimum value: 1 +#workers = + +# How many seconds to wait between scheduling new metrics to process (integer +# value) +# Deprecated group/name - [storage]/metric_processing_delay +#metric_processing_delay = 60 + +# How many seconds to wait between metric ingestion reporting (integer value) +# Deprecated group/name - [storage]/metric_reporting_delay +#metric_reporting_delay = 120 + +# How many seconds to wait between cleaning of expired data (integer value) +# Deprecated group/name - [storage]/metric_cleanup_delay +#metric_cleanup_delay = 300 + + +[oslo_middleware] + +# +# From oslo.middleware.http_proxy_to_wsgi +# + +# Whether the application is behind a proxy or not. This determines if the +# middleware should parse the headers or not. (boolean value) +#enable_proxy_headers_parsing = false + + +[oslo_policy] + +# +# From oslo.policy +# + +# The file that defines policies. (string value) +# Deprecated group/name - [DEFAULT]/policy_file +#policy_file = policy.json + +# Default rule. Enforced when a requested rule is not found. 
(string value) +# Deprecated group/name - [DEFAULT]/policy_default_rule +#policy_default_rule = default + +# Directories where policy configuration files are stored. They can be relative +# to any directory in the search path defined by the config_dir option, or +# absolute paths. The file defined by policy_file must exist for these +# directories to be searched. Missing or empty directories are ignored. (multi +# valued) +# Deprecated group/name - [DEFAULT]/policy_dirs +#policy_dirs = policy.d + + +[statsd] + +# +# From gnocchi +# + +# The listen IP for statsd (string value) +#host = 0.0.0.0 + +# The port for statsd (port value) +# Minimum value: 0 +# Maximum value: 65535 +#port = 8125 + +# Resource UUID to use to identify statsd in Gnocchi (unknown value) +#resource_id = + +# DEPRECATED: User ID to use to identify statsd in Gnocchi (string value) +# This option is deprecated for removal. +# Its value may be silently ignored in the future. +#user_id = + +# DEPRECATED: Project ID to use to identify statsd in Gnocchi (string value) +# This option is deprecated for removal. +# Its value may be silently ignored in the future. +#project_id = + +# Creator value to use to identify statsd in Gnocchi (string value) +#creator = ${statsd.user_id}:${statsd.project_id} + +# Archive policy name to use when creating metrics (string value) +#archive_policy_name = + +# Delay between flushes (floating point value) +#flush_delay = 10 + + +[storage] + +# +# From gnocchi +# + +# Ceph pool name to use. (string value) +#ceph_pool = gnocchi + +# Ceph username (ie: admin without "client." prefix). (string value) +#ceph_username = + +# Ceph key (string value) +#ceph_secret = + +# Ceph keyring path. (string value) +#ceph_keyring = + +# Ceph configuration file. (string value) +#ceph_conffile = /etc/ceph/ceph.conf + +# Path used to store gnocchi data files. (string value) +#file_basepath = /var/lib/gnocchi + +# Swift authentication version to user. 
(string value) +#swift_auth_version = 1 + +# Swift pre-auth URL. (string value) +#swift_preauthurl = + +# Swift auth URL. (string value) +#swift_authurl = http://localhost:8080/auth/v1.0 + +# Swift token to user to authenticate. (string value) +#swift_preauthtoken = + +# Swift user. (string value) +#swift_user = admin:admin + +# Swift user domain name. (string value) +#swift_user_domain_name = Default + +# Swift key/password. (string value) +#swift_key = admin + +# Swift tenant name, only used in v2/v3 auth. (string value) +# Deprecated group/name - [storage]/swift_tenant_name +#swift_project_name = + +# Swift project domain name. (string value) +#swift_project_domain_name = Default + +# Prefix to namespace metric containers. (string value) +#swift_container_prefix = gnocchi + +# Endpoint type to connect to Swift (string value) +#swift_endpoint_type = publicURL + +# Connection timeout in seconds. (integer value) +# Minimum value: 0 +#swift_timeout = 300 + +# S3 endpoint URL (string value) +#s3_endpoint_url = + +# S3 region name (string value) +#s3_region_name = + +# S3 access key id (string value) +#s3_access_key_id = + +# S3 secret access key (string value) +#s3_secret_access_key = + +# Prefix to namespace metric bucket. (string value) +#s3_bucket_prefix = gnocchi + +# Number of workers to run during adding new measures for pre-aggregation +# needs. 
Due to the Python GIL, 1 is usually faster, unless you have high
+# latency I/O (integer value)
+# Minimum value: 1
+#aggregation_workers_number = 1
+
+# Coordination driver URL (string value)
+#coordination_url =
+
+# Storage driver to use (string value)
+#driver = file
diff -Nru gnocchi-3.1.2/gnocchi/archive_policy.py gnocchi-3.1.9/gnocchi/archive_policy.py
--- gnocchi-3.1.2/gnocchi/archive_policy.py	2017-03-20 12:29:03.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/archive_policy.py	2017-07-24 16:25:03.000000000 +0000
@@ -175,6 +175,8 @@
                 self['timespan'] = None
             else:
                 points = int(timespan / granularity)
+                if points <= 0:
+                    raise ValueError("Calculated number of points is < 0")
                 self['timespan'] = granularity * points
         else:
             points = int(points)
diff -Nru gnocchi-3.1.2/gnocchi/carbonara.py gnocchi-3.1.9/gnocchi/carbonara.py
--- gnocchi-3.1.2/gnocchi/carbonara.py	2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/carbonara.py	2017-07-24 16:25:03.000000000 +0000
@@ -245,7 +245,7 @@
         # NOTE(jd) Our whole serialization system is based on Epoch, and we
         # store unsigned integer, so we can't store anything before Epoch.
         # Sorry!
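The `archive_policy.py` change above rejects archive-policy definitions whose timespan is smaller than the granularity (see also the new `create policy when granularity is larger than timespan` gabbi test further down). A minimal standalone sketch of that calculation — `timespan_points` is a hypothetical helper, simplified from the original, taking both durations as plain seconds:

```python
def timespan_points(timespan, granularity):
    """Number of whole aggregation points that fit in a timespan.

    Sketch of the archive-policy guard: both arguments are
    durations in seconds.
    """
    points = int(timespan / granularity)
    if points <= 0:
        # e.g. granularity "2 hour" with timespan "1 hour"
        raise ValueError("Calculated number of points is < 0")
    return points
```

For instance, a 1-hour timespan at 5-minute granularity yields 12 points, while a 2-hour granularity inside a 1-hour timespan now raises instead of silently producing a zero-point definition.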
-        if self.ts.index[0].value < 0:
+        if not self.ts.empty and self.ts.index[0].value < 0:
             raise BeforeEpochError(self.ts.index[0])
 
         return GroupedTimeSeries(self.ts[start:], granularity)
diff -Nru gnocchi-3.1.2/gnocchi/cli.py gnocchi-3.1.9/gnocchi/cli.py
--- gnocchi-3.1.2/gnocchi/cli.py	2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/cli.py	2017-07-24 16:25:03.000000000 +0000
@@ -83,6 +83,13 @@
     statsd_service.start()
 
 
+# Retry with exponential backoff for up to 1 minute
+_wait_exponential = tenacity.wait_exponential(multiplier=0.5, max=60)
+
+
+retry_on_exception = tenacity.Retrying(wait=_wait_exponential).call
+
+
 class MetricProcessBase(cotyledon.Service):
     def __init__(self, worker_id, conf, interval_delay=0):
         super(MetricProcessBase, self).__init__(worker_id)
@@ -93,8 +100,8 @@
         self._shutdown_done = threading.Event()
 
     def _configure(self):
-        self.store = storage.get_driver(self.conf)
-        self.index = indexer.get_driver(self.conf)
+        self.store = retry_on_exception(storage.get_driver, self.conf)
+        self.index = retry_on_exception(indexer.get_driver, self.conf)
         self.index.connect()
 
     def run(self):
@@ -157,17 +164,17 @@
     def __init__(self, worker_id, conf, queue):
         super(MetricScheduler, self).__init__(
             worker_id, conf, conf.metricd.metric_processing_delay)
-        self._coord, self._my_id = utils.get_coordinator_and_start(
-            conf.storage.coordination_url)
         self.queue = queue
         self.previously_scheduled_metrics = set()
         self.workers = conf.metricd.workers
         self.block_index = 0
         self.block_size_default = self.workers * self.TASKS_PER_WORKER
         self.block_size = self.block_size_default
+        self.block_synced = False
         self.periodic = None
 
     def set_block(self, event):
+        self.block_synced = False
         get_members_req = self._coord.get_members(self.GROUP_ID)
         try:
             members = sorted(get_members_req.get())
@@ -178,17 +185,25 @@
             cap = msgpack.loads(req.get(), encoding='utf-8')
             max_workers = max(cap['workers'], self.workers)
             self.block_size = max_workers * self.TASKS_PER_WORKER
+            self.block_synced = True
             LOG.info('New set of agents detected. Now working on block: %s, '
                      'with up to %s metrics', self.block_index, self.block_size)
         except Exception:
-            LOG.warning('Error getting block to work on, defaulting to first')
+            LOG.error('Error getting block to work on (%s), '
+                      'defaulting to first', exc_info=True)
             self.block_index = 0
             self.block_size = self.block_size_default
 
-    @utils.retry
+    @tenacity.retry(
+        wait=_wait_exponential,
+        # Never retry except when explicitly asked by raising TryAgain
+        retry=tenacity.retry_never)
     def _configure(self):
         super(MetricScheduler, self)._configure()
+        self._coord, self._my_id = retry_on_exception(
+            utils.get_coordinator_and_start,
+            self.conf.storage.coordination_url)
         try:
             cap = msgpack.dumps({'workers': self.workers})
             join_req = self._coord.join_group(self.GROUP_ID, cap)
@@ -198,7 +213,9 @@
 
         @periodics.periodic(spacing=self.SYNC_RATE, run_immediately=True)
         def run_watchers():
-            self._coord.run_watchers()
+            done = self._coord.run_watchers()
+            if not done and not self.block_synced:
+                self.set_block(None)
 
         self.periodic = periodics.PeriodicWorker.create([])
         self.periodic.add(run_watchers)
diff -Nru gnocchi-3.1.2/gnocchi/rest/__init__.py gnocchi-3.1.9/gnocchi/rest/__init__.py
--- gnocchi-3.1.2/gnocchi/rest/__init__.py	2017-03-20 12:29:03.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/rest/__init__.py	2017-07-24 16:25:03.000000000 +0000
@@ -18,7 +18,6 @@
 import itertools
 import uuid
 
-from concurrent import futures
 import jsonpatch
 from oslo_utils import dictutils
 from oslo_utils import strutils
@@ -163,8 +162,6 @@
 METRIC_DEFAULT_PAGINATION = ['id:asc']
 
-THREADS = utils.get_default_workers()
-
 
 def get_pagination_options(params, default):
     max_limit = pecan.request.conf.api.max_limit
@@ -1422,12 +1419,10 @@
         for metric in known_metrics:
             enforce("post measures", metric)
 
-        storage = pecan.request.storage.incoming
-        with futures.ThreadPoolExecutor(max_workers=THREADS) as executor:
-            list(executor.map(lambda x: storage.add_measures(*x),
-                              ((metric,
-                                body_by_rid[metric.resource_id][metric.name])
-                               for metric in known_metrics)))
+        pecan.request.storage.incoming.add_measures_batch(
+            dict((metric,
+                  body_by_rid[metric.resource_id][metric.name])
+                 for metric in known_metrics))
 
         pecan.response.status = 202
@@ -1456,11 +1451,9 @@
         for metric in metrics:
             enforce("post measures", metric)
 
-        storage = pecan.request.storage.incoming
-        with futures.ThreadPoolExecutor(max_workers=THREADS) as executor:
-            list(executor.map(lambda x: storage.add_measures(*x),
-                              ((metric, body[metric.id]) for metric in
-                               metrics)))
+        pecan.request.storage.incoming.add_measures_batch(
+            dict((metric, body[metric.id]) for metric in
+                 metrics))
 
         pecan.response.status = 202
@@ -1623,9 +1616,9 @@
         except storage.MetricUnaggregatable as e:
             abort(400, ("One of the metrics being aggregated doesn't have "
                         "matching granularity: %s") % str(e))
-        except storage.MetricDoesNotExist as e:
-            abort(404, e)
-        except storage.AggregationDoesNotExist as e:
+        except (storage.MetricDoesNotExist,
+                storage.GranularityDoesNotExist,
+                storage.AggregationDoesNotExist) as e:
             abort(404, e)
 
     @pecan.expose('json')
diff -Nru gnocchi-3.1.2/gnocchi/service.py gnocchi-3.1.9/gnocchi/service.py
--- gnocchi-3.1.2/gnocchi/service.py	2017-03-20 12:29:03.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/service.py	2017-07-24 16:25:03.000000000 +0000
@@ -48,17 +48,17 @@
         conf.register_opts(list(options),
                            group=None if group == "DEFAULT" else group)
 
-    # HACK(jd) I'm not happy about that, fix AP class to handle a conf object?
-    archive_policy.ArchivePolicy.DEFAULT_AGGREGATION_METHODS = (
-        conf.archive_policy.default_aggregation_methods
-    )
-
     conf.set_default("workers", utils.get_default_workers(), group="metricd")
 
    conf(args, project='gnocchi', validate_default_values=True,
         default_config_files=default_config_files,
         version=pbr.version.VersionInfo('gnocchi').version_string())
 
+    # HACK(jd) I'm not happy about that, fix AP class to handle a conf object?
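The `cli.py` changes above ("Only retry connection to external components in metricd") wrap driver initialization in tenacity's exponential backoff. The idea can be sketched without the tenacity dependency — this is a rough stand-in, not tenacity itself; the 0.5 multiplier and 60-second cap mirror the values in the diff, and `max_attempts`/`sleep` are parameters added here only to make the sketch testable (metricd retries indefinitely):

```python
import time


def retry_on_exception(func, *args, multiplier=0.5, max_wait=60,
                       max_attempts=None, sleep=time.sleep):
    """Keep calling func(*args) until it succeeds.

    Waits multiplier * 2**attempt seconds between attempts, capped at
    max_wait -- roughly what tenacity.wait_exponential computes.
    """
    attempt = 1
    while True:
        try:
            return func(*args)
        except Exception:
            if max_attempts is not None and attempt >= max_attempts:
                raise
            sleep(min(multiplier * (2 ** attempt), max_wait))
            attempt += 1
```

A call that fails twice before succeeding would thus sleep roughly 1s, then 2s, with later waits doubling until they hit the cap.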
+ archive_policy.ArchivePolicy.DEFAULT_AGGREGATION_METHODS = ( + conf.archive_policy.default_aggregation_methods + ) + # If no coordination URL is provided, default to using the indexer as # coordinator if conf.storage.coordination_url is None: diff -Nru gnocchi-3.1.2/gnocchi/storage/ceph.py gnocchi-3.1.9/gnocchi/storage/ceph.py --- gnocchi-3.1.2/gnocchi/storage/ceph.py 2017-03-20 12:29:04.000000000 +0000 +++ gnocchi-3.1.9/gnocchi/storage/ceph.py 2017-07-24 16:25:03.000000000 +0000 @@ -33,6 +33,9 @@ help='Ceph username (ie: admin without "client." prefix).'), cfg.StrOpt('ceph_secret', help='Ceph key', secret=True), cfg.StrOpt('ceph_keyring', help='Ceph keyring path.'), + cfg.StrOpt('ceph_timeout', + default="30", + help='Ceph connection timeout in seconds'), cfg.StrOpt('ceph_conffile', default='/etc/ceph/ceph.conf', help='Ceph configuration file.'), diff -Nru gnocchi-3.1.2/gnocchi/storage/common/ceph.py gnocchi-3.1.9/gnocchi/storage/common/ceph.py --- gnocchi-3.1.2/gnocchi/storage/common/ceph.py 2017-03-20 12:29:04.000000000 +0000 +++ gnocchi-3.1.9/gnocchi/storage/common/ceph.py 2017-07-24 16:25:03.000000000 +0000 @@ -36,6 +36,10 @@ options['keyring'] = conf.ceph_keyring if conf.ceph_secret: options['key'] = conf.ceph_secret + if conf.ceph_timeout: + options['rados_osd_op_timeout'] = conf.ceph_timeout + options['rados_mon_op_timeout'] = conf.ceph_timeout + options['client_mount_timeout'] = conf.ceph_timeout if not rados: raise ImportError("No module named 'rados' nor 'cradox'") diff -Nru gnocchi-3.1.2/gnocchi/storage/common/swift.py gnocchi-3.1.9/gnocchi/storage/common/swift.py --- gnocchi-3.1.2/gnocchi/storage/common/swift.py 2017-03-20 12:29:03.000000000 +0000 +++ gnocchi-3.1.9/gnocchi/storage/common/swift.py 2017-07-24 16:25:03.000000000 +0000 @@ -31,6 +31,7 @@ def get_connection(conf): if swclient is None: raise RuntimeError("python-swiftclient unavailable") + return swclient.Connection( auth_version=conf.swift_auth_version, authurl=conf.swift_authurl, diff 
-Nru gnocchi-3.1.2/gnocchi/storage/incoming/_carbonara.py gnocchi-3.1.9/gnocchi/storage/incoming/_carbonara.py --- gnocchi-3.1.2/gnocchi/storage/incoming/_carbonara.py 2017-03-20 12:29:04.000000000 +0000 +++ gnocchi-3.1.9/gnocchi/storage/incoming/_carbonara.py 2017-07-24 16:25:03.000000000 +0000 @@ -14,18 +14,22 @@ # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. +from concurrent import futures import itertools import struct from oslo_log import log from oslo_serialization import msgpackutils import pandas -import six.moves +import six from gnocchi.storage import incoming +from gnocchi import utils LOG = log.getLogger(__name__) +_NUM_WORKERS = utils.get_default_workers() + class CarbonaraBasedStorage(incoming.StorageDriver): MEASURE_PREFIX = "measure" @@ -50,12 +54,19 @@ pandas.to_datetime(measures[::2], unit='ns'), itertools.islice(measures, 1, len(measures), 2)) - def add_measures(self, metric, measures): + def _encode_measures(self, measures): measures = list(measures) - data = struct.pack( + return struct.pack( "<" + self._MEASURE_SERIAL_FORMAT * len(measures), *list(itertools.chain.from_iterable(measures))) - self._store_new_measures(metric, data) + + def add_measures_batch(self, metrics_and_measures): + with futures.ThreadPoolExecutor(max_workers=_NUM_WORKERS) as executor: + list(executor.map( + lambda args: self._store_new_measures(*args), + ((metric, self._encode_measures(measures)) + for metric, measures + in six.iteritems(metrics_and_measures)))) @staticmethod def _store_new_measures(metric, data): diff -Nru gnocchi-3.1.2/gnocchi/storage/incoming/ceph.py gnocchi-3.1.9/gnocchi/storage/incoming/ceph.py --- gnocchi-3.1.2/gnocchi/storage/incoming/ceph.py 2017-03-20 12:29:04.000000000 +0000 +++ gnocchi-3.1.9/gnocchi/storage/incoming/ceph.py 2017-07-24 16:25:03.000000000 +0000 @@ -15,10 +15,10 @@ import contextlib import datetime import errno 
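The `_carbonara.py` hunk above moves batching into the incoming driver: serialization is factored out into `_encode_measures`, and `add_measures_batch` fans the per-metric stores out over a thread pool. A self-contained sketch of that pattern — `store` and `encode` here are stand-ins for `self._store_new_measures` and `self._encode_measures` (which really uses `struct.pack`; `repr` is used below only to keep the sketch simple):

```python
from concurrent import futures


def add_measures_batch(store, metrics_and_measures, workers=4):
    """Encode and store measures for several metrics in parallel.

    store(metric, data) persists one encoded blob; the dict maps each
    metric to its iterable of measures, as in the diff above.
    """
    def encode(measures):
        # stand-in for the struct.pack-based _encode_measures
        return repr(list(measures)).encode()

    with futures.ThreadPoolExecutor(max_workers=workers) as executor:
        # list() drains the map so any worker exception surfaces here
        list(executor.map(lambda args: store(*args),
                          ((metric, encode(measures))
                           for metric, measures
                           in metrics_and_measures.items())))
```

This mirrors the design choice in the diff: the REST layer no longer runs its own thread pool per request; the driver owns the parallelism (and the Ceph driver overrides the method entirely to batch via omap instead).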
-import functools import itertools import uuid +import six from gnocchi.storage.common import ceph from gnocchi.storage.incoming import _carbonara @@ -72,23 +72,26 @@ for xattr in xattrs: self.ioctx.rm_xattr(self.MEASURE_PREFIX, xattr) - def _store_new_measures(self, metric, data): - # NOTE(sileht): list all objects in a pool is too slow with - # many objects (2min for 20000 objects in 50osds cluster), - # and enforce us to iterrate over all objects - # So we create an object MEASURE_PREFIX, that have as - # omap the list of objects to process (not xattr because - # it doesn't allow to configure the locking behavior) - name = "_".join(( - self.MEASURE_PREFIX, - str(metric.id), - str(uuid.uuid4()), - datetime.datetime.utcnow().strftime("%Y%m%d_%H:%M:%S"))) - - self.ioctx.write_full(name, data) + def add_measures_batch(self, metrics_and_measures): + names = [] + for metric, measures in six.iteritems(metrics_and_measures): + name = "_".join(( + self.MEASURE_PREFIX, + str(metric.id), + str(uuid.uuid4()), + datetime.datetime.utcnow().strftime("%Y%m%d_%H:%M:%S"))) + names.append(name) + data = self._encode_measures(measures) + self.ioctx.write_full(name, data) with rados.WriteOpCtx() as op: - self.ioctx.set_omap(op, (name,), (b"",)) + # NOTE(sileht): list all objects in a pool is too slow with + # many objects (2min for 20000 objects in 50osds cluster), + # and enforce us to iterrate over all objects + # So we create an object MEASURE_PREFIX, that have as + # omap the list of objects to process (not xattr because + # it doesn't # allow to configure the locking behavior) + self.ioctx.set_omap(op, tuple(names), (b"",) * len(names)) self.ioctx.operate_write_op(op, self.MEASURE_PREFIX, flags=self.OMAP_WRITE_FLAGS) @@ -165,35 +168,9 @@ object_names = list(self._list_object_names_to_process(object_prefix)) measures = [] - ops = [] - bufsize = 8192 # Same sa rados_read one - - tmp_measures = {} - - def add_to_measures(name, comp, data): - if name in tmp_measures: - 
tmp_measures[name] += data - else: - tmp_measures[name] = data - if len(data) < bufsize: - measures.extend(self._unserialize_measures(name, - tmp_measures[name])) - del tmp_measures[name] - else: - ops.append(self.ioctx.aio_read( - name, bufsize, len(tmp_measures[name]), - functools.partial(add_to_measures, name) - )) - - for name in object_names: - ops.append(self.ioctx.aio_read( - name, bufsize, 0, - functools.partial(add_to_measures, name) - )) - - while ops: - op = ops.pop() - op.wait_for_complete_and_cb() + for n in object_names: + data = self._get_object_content(n) + measures.extend(self._unserialize_measures(n, data)) yield measures @@ -207,3 +184,14 @@ for n in object_names: self.ioctx.aio_remove(n) + + def _get_object_content(self, name): + offset = 0 + content = b'' + while True: + data = self.ioctx.read(name, offset=offset) + if not data: + break + content += data + offset += len(data) + return content diff -Nru gnocchi-3.1.2/gnocchi/storage/incoming/__init__.py gnocchi-3.1.9/gnocchi/storage/incoming/__init__.py --- gnocchi-3.1.2/gnocchi/storage/incoming/__init__.py 2017-03-20 12:29:04.000000000 +0000 +++ gnocchi-3.1.9/gnocchi/storage/incoming/__init__.py 2017-07-24 16:25:03.000000000 +0000 @@ -1,5 +1,6 @@ # -*- encoding: utf-8 -*- # +# Copyright © 2017 Red Hat, Inc. # Copyright © 2014-2015 eNovance # # Licensed under the Apache License, Version 2.0 (the "License"); you may @@ -33,16 +34,23 @@ def upgrade(indexer): pass - @staticmethod - def add_measures(metric, measures): + def add_measures(self, metric, measures): """Add a measure to a metric. :param metric: The metric measured. :param measures: The actual measures. """ - raise exceptions.NotImplementedError + self.add_measures_batch({metric: measures}) @staticmethod + def add_measures_batch(metrics_and_measures): + """Add a batch of measures for some metrics. + + :param metrics_and_measures: A dict where keys + are metrics and value are measure. 
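The new `_get_object_content` above replaces the removed async-callback reads with a plain synchronous loop: read at an increasing offset until a read returns no data. The same loop works against any chunked reader — in this sketch `read(offset, length)` is a stand-in for `self.ioctx.read`, demonstrated against an in-memory buffer:

```python
import io


def read_all(read, chunk_size=8192):
    """Accumulate an object's content chunk by chunk.

    Mirrors _get_object_content: an empty read marks end of object.
    """
    offset = 0
    content = b''
    while True:
        data = read(offset, chunk_size)
        if not data:
            break
        content += data
        offset += len(data)
    return content


# usage against an in-memory buffer standing in for a RADOS object
buf = io.BytesIO(b'x' * 20000)

def read(offset, length):
    buf.seek(offset)
    return buf.read(length)
```

With a 20000-byte object and the default 8192-byte chunk, the loop performs three data reads plus one final empty read, then returns the full content.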
+ """ + raise exceptions.NotImplementedError + def measures_report(details=True): """Return a report of pending to process measures. diff -Nru gnocchi-3.1.2/gnocchi/tests/gabbi/gabbits/aggregation.yaml gnocchi-3.1.9/gnocchi/tests/gabbi/gabbits/aggregation.yaml --- gnocchi-3.1.2/gnocchi/tests/gabbi/gabbits/aggregation.yaml 2017-03-20 12:29:04.000000000 +0000 +++ gnocchi-3.1.9/gnocchi/tests/gabbi/gabbits/aggregation.yaml 2017-07-24 16:25:03.000000000 +0000 @@ -38,7 +38,7 @@ archive_policy_name: low status: 201 - - name: get metric list to push metric 1 + - name: get metric list GET: /v1/metric - name: push measurements to metric 1 @@ -171,6 +171,12 @@ GET: /v1/aggregation/metric?metric=$RESPONSE['$[0].id']&metric=$RESPONSE['$[1].id']&granularity=1&fill=asdf status: 400 + - name: get measure aggregates non existing granularity + desc: https://github.com/gnocchixyz/gnocchi/issues/148 + GET: /v1/aggregation/metric?metric=$HISTORY['get metric list'].$RESPONSE['$[0].id']&granularity=42 + status: 404 + response_strings: + - Granularity '42.0' for metric # Aggregation by resource and metric_name diff -Nru gnocchi-3.1.2/gnocchi/tests/gabbi/gabbits/archive.yaml gnocchi-3.1.9/gnocchi/tests/gabbi/gabbits/archive.yaml --- gnocchi-3.1.2/gnocchi/tests/gabbi/gabbits/archive.yaml 2017-03-20 12:29:04.000000000 +0000 +++ gnocchi-3.1.9/gnocchi/tests/gabbi/gabbits/archive.yaml 2017-07-24 16:23:52.000000000 +0000 @@ -480,6 +480,18 @@ timespan: "1 shenanigan" status: 400 + - name: create policy when granularity is larger than timespan + POST: /v1/archive_policy + request_headers: + content-type: application/json + x-roles: admin + data: + name: should-have-failed + definition: + - granularity: 2 hour + timespan: 1 hour + status: 400 + # Non admin user attempt - name: fail to create policy non-admin diff -Nru gnocchi-3.1.2/gnocchi/tests/gabbi/gabbits/resource-aggregation.yaml gnocchi-3.1.9/gnocchi/tests/gabbi/gabbits/resource-aggregation.yaml --- 
--- gnocchi-3.1.2/gnocchi/tests/gabbi/gabbits/resource-aggregation.yaml	2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/tests/gabbi/gabbits/resource-aggregation.yaml	2017-07-24 16:25:03.000000000 +0000
@@ -43,6 +43,22 @@
               value: 12
       status: 202
 
+    - name: get aggregation with no data
+      desc: https://github.com/gnocchixyz/gnocchi/issues/69
+      POST: /v1/aggregation/resource/generic/metric/cpu.util?stop=2012-03-06T00:00:00&fill=0&granularity=300&resample=3600
+      request_headers:
+          x-user-id: 6c865dd0-7945-4e08-8b27-d0d7f1c2b667
+          x-project-id: c7f32f1f-c5ef-427a-8ecd-915b219c66e8
+          content-type: application/json
+      data:
+        =:
+          id: 4ed9c196-4c9f-4ba8-a5be-c9a71a82aac4
+      poll:
+          count: 10
+          delay: 1
+      response_json_paths:
+          $: []
+
     - name: create resource 2
       POST: /v1/resource/generic
       request_headers:
diff -Nru gnocchi-3.1.2/gnocchi/tests/gabbi/gabbits-live/live.yaml gnocchi-3.1.9/gnocchi/tests/gabbi/gabbits-live/live.yaml
--- gnocchi-3.1.2/gnocchi/tests/gabbi/gabbits-live/live.yaml	2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/tests/gabbi/gabbits-live/live.yaml	2017-07-24 16:25:03.000000000 +0000
@@ -146,7 +146,7 @@
           $.definition[2].points: 5
           $.definition[2].timespan: "0:05:00"
       response_strings:
-          '"aggregation_methods": ["max", "min", "mean"]'
+          - '"aggregation_methods": ["max", "min", "mean"]'
 
     - name: get wrong accept
       desc: invalid 'accept' header
@@ -330,7 +330,7 @@
       GET: /v1/archive_policy_rule
       status: 200
       response_strings:
-          '"metric_pattern": "live.*", "archive_policy_name": "gabbilive", "name": "gabbilive_rule"'
+          - '"metric_pattern": "live.*", "archive_policy_name": "gabbilive", "name": "gabbilive_rule"'
 
     - name: get unknown archive policy rule
       GET: /v1/archive_policy_rule/foo
@@ -556,7 +556,7 @@
             =:
               id: "2ae35573-7f9f-4bb1-aae8-dad8dff5706e"
       response_strings:
-          '"user_id": "126204ef-989a-46fd-999b-ee45c8108f31"'
+          - '"user_id": "126204ef-989a-46fd-999b-ee45c8108f31"'
 
     - name: search for myresource resource via user_id and project_id
       POST: /v1/search/resource/generic
@@ -569,7 +569,7 @@
           - =:
               project_id: "98e785d7-9487-4159-8ab8-8230ec37537a"
       response_strings:
-          '"id": "2ae35573-7f9f-4bb1-aae8-dad8dff5706e"'
+          - '"id": "2ae35573-7f9f-4bb1-aae8-dad8dff5706e"'
 
     - name: patch myresource resource
       PATCH: /v1/resource/myresource/2ae35573-7f9f-4bb1-aae8-dad8dff5706e
diff -Nru gnocchi-3.1.2/gnocchi/tests/gabbi/gabbits-live/search-resource.yaml gnocchi-3.1.9/gnocchi/tests/gabbi/gabbits-live/search-resource.yaml
--- gnocchi-3.1.2/gnocchi/tests/gabbi/gabbits-live/search-resource.yaml	2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/tests/gabbi/gabbits-live/search-resource.yaml	2017-07-24 16:25:03.000000000 +0000
@@ -25,13 +25,13 @@
     #
     # Setup resource types if don't exist
     #
-    - name: create new resource type 'instance'
+    - name: create new resource type 'instance-like'
       POST: /v1/resource_type
       status: 201
       request_headers:
         content-type: application/json
       data:
-        name: instance
+        name: instance-like
         attributes:
           display_name:
             type: string
@@ -49,13 +49,13 @@
             type: string
             required: False
 
-    - name: create new resource type 'image'
+    - name: create new resource type 'image-like'
       POST: /v1/resource_type
       status: 201
       request_headers:
         content-type: application/json
       data:
-        name: image
+        name: image-like
         attributes:
           name:
             type: string
@@ -70,8 +70,8 @@
     #
     # Setup test resources
     #
-    - name: helper. create instance resource-1
-      POST: /v1/resource/instance
+    - name: helper. create instance-like resource-1
+      POST: /v1/resource/instance-like
       request_headers:
         content-type: application/json
       data:
@@ -84,8 +84,8 @@
         project_id: c9a5f184-c0d0-4daa-83c3-af6fdc0879e6
       status: 201
 
-    - name: helper. create instance resource-2
-      POST: /v1/resource/instance
+    - name: helper. create instance-like resource-2
+      POST: /v1/resource/instance-like
       request_headers:
         content-type: application/json
       data:
@@ -98,8 +98,8 @@
         project_id: c9a5f184-c0d0-4daa-83c3-af6fdc0879e6
       status: 201
 
-    - name: helper. create instance resource-3
-      POST: /v1/resource/instance
+    - name: helper. create instance-like resource-3
+      POST: /v1/resource/instance-like
       request_headers:
         content-type: application/json
       data:
@@ -112,8 +112,8 @@
         project_id: 40eba01c-b348-49b8-803f-67123251a00a
       status: 201
 
-    - name: helper. create image resource-1
-      POST: /v1/resource/image
+    - name: helper. create image-like resource-1
+      POST: /v1/resource/image-like
       request_headers:
         content-type: application/json
       data:
@@ -141,12 +141,12 @@
       response_json_paths:
         $.`len`: 2
       response_json_paths:
-        $.[0].type: instance
-        $.[1].type: image
+        $.[0].type: instance-like
+        $.[1].type: image-like
         $.[0].id: c442a47c-eb33-46ce-9665-f3aa0bef54e7
         $.[1].id: 7ab2f7ae-7af5-4469-bdc8-3c0f6dfab75d
 
-    - name: search for all resources of instance type create by specific user_id
+    - name: search for all resources of instance-like type create by specific user_id
       desc: all instances created by a specified user
       POST: /v1/search/resource/generic
       request_headers:
@@ -154,7 +154,7 @@
       data:
         and:
           - =:
-              type: instance
+              type: instance-like
           - =:
               user_id: 33ba83ca-2f12-4ad6-8fa2-bc8b55d36e07
       status: 200
@@ -166,8 +166,8 @@
       response_json_paths:
         $.[0].id: a64ca14f-bc7c-45b0-aa85-42cd2179e1e2
         $.[1].id: 7ccccfa0-92ce-4225-80ca-3ac9cb122d6a
-        $.[0].type: instance
-        $.[1].type: instance
+        $.[0].type: instance-like
+        $.[1].type: instance-like
         $.[0].metrics.`len`: 0
         $.[1].metrics.`len`: 0
@@ -185,7 +185,7 @@
 
     - name: search for intances on a specific compute using "like" keyword
       desc: search for vms hosted on a specific compute node
-      POST: /v1/search/resource/instance
+      POST: /v1/search/resource/instance-like
       request_headers:
         content-type: application/json
       data:
@@ -203,7 +203,7 @@
 
     - name: search for instances using complex search with "like" keyword and user_id
       desc: search for vms of specified user hosted on a specific compute node
-      POST: /v1/search/resource/instance
+      POST: /v1/search/resource/instance-like
       request_headers:
         content-type: application/json
       data:
@@ -219,8 +219,8 @@
         - '"display_name": "vm-gabbi-2"'
         - '"project_id": "c9a5f184-c0d0-4daa-83c3-af6fdc0879e6"'
 
-    - name: search for resources of instance or image type with specific user_id
-      desc: search for all image or instance resources created by a specific user
+    - name: search for resources of instance-like or image-like type with specific user_id
+      desc: search for all image-like or instance-like resources created by a specific user
       POST: /v1/search/resource/generic
       request_headers:
         content-type: application/json
@@ -231,16 +231,16 @@
           - or:
             - =:
-                type: instance
+                type: instance-like
             - =:
-                type: image
+                type: image-like
       status: 200
       response_json_paths:
         $.`len`: 2
       response_strings:
-        - '"type": "image"'
-        - '"type": "instance"'
+        - '"type": "image-like"'
+        - '"type": "instance-like"'
         - '"id": "7ab2f7ae-7af5-4469-bdc8-3c0f6dfab75d"'
         - '"id": "c442a47c-eb33-46ce-9665-f3aa0bef54e7"'
@@ -248,27 +248,27 @@
     #
     # Tear down resources
     #
-    - name: helper. delete instance resource-1
-      DELETE: /v1/resource/instance/a64ca14f-bc7c-45b0-aa85-42cd2179e1e2
+    - name: helper. delete instance-like resource-1
+      DELETE: /v1/resource/instance-like/a64ca14f-bc7c-45b0-aa85-42cd2179e1e2
       status: 204
 
-    - name: helper. delete instance resource-2
-      DELETE: /v1/resource/instance/7ccccfa0-92ce-4225-80ca-3ac9cb122d6a
+    - name: helper. delete instance-like resource-2
+      DELETE: /v1/resource/instance-like/7ccccfa0-92ce-4225-80ca-3ac9cb122d6a
       status: 204
 
-    - name: helper. delete instance resource-3
-      DELETE: /v1/resource/instance/c442a47c-eb33-46ce-9665-f3aa0bef54e7
+    - name: helper. delete instance-like resource-3
+      DELETE: /v1/resource/instance-like/c442a47c-eb33-46ce-9665-f3aa0bef54e7
       status: 204
 
-    - name: helper. delete image resource
-      DELETE: /v1/resource/image/7ab2f7ae-7af5-4469-bdc8-3c0f6dfab75d
+    - name: helper. delete image-like resource
+      DELETE: /v1/resource/image-like/7ab2f7ae-7af5-4469-bdc8-3c0f6dfab75d
       status: 204
 
-    - name: helper. delete resource-type instance
-      DELETE: /v1/resource_type/instance
+    - name: helper. delete resource-type instance-like
+      DELETE: /v1/resource_type/instance-like
       status: 204
 
-    - name: helper. delete resource-type image
-      DELETE: /v1/resource_type/image
+    - name: helper. delete resource-type image-like
+      DELETE: /v1/resource_type/image-like
       status: 204
diff -Nru gnocchi-3.1.2/gnocchi/tests/test_archive_policy.py gnocchi-3.1.9/gnocchi/tests/test_archive_policy.py
--- gnocchi-3.1.2/gnocchi/tests/test_archive_policy.py	2017-03-20 12:29:03.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/tests/test_archive_policy.py	2017-07-24 16:23:52.000000000 +0000
@@ -96,3 +96,6 @@
         self.assertRaises(ValueError,
                           archive_policy.ArchivePolicyItem,
                           1, -1)
+        self.assertRaises(ValueError,
+                          archive_policy.ArchivePolicyItem,
+                          2, None, 1)
diff -Nru gnocchi-3.1.2/gnocchi/tests/test_storage.py gnocchi-3.1.9/gnocchi/tests/test_storage.py
--- gnocchi-3.1.2/gnocchi/tests/test_storage.py	2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/tests/test_storage.py	2017-07-24 16:25:03.000000000 +0000
@@ -913,6 +913,16 @@
             (utils.datetime_utc(2014, 1, 1, 12, 0, 15), 5.0, 1.0),
         ], self.storage.get_measures(m))
 
+    def test_resample_no_metric(self):
+        """https://github.com/gnocchixyz/gnocchi/issues/69"""
+        self.assertEqual([],
+                         self.storage.get_measures(
+                             self.metric,
+                             utils.datetime_utc(2014, 1, 1),
+                             utils.datetime_utc(2015, 1, 1),
+                             granularity=300,
+                             resample=3600))
+
 
 class TestMeasureQuery(base.BaseTestCase):
     def test_equal(self):
diff -Nru gnocchi-3.1.2/gnocchi/tests/test_utils.py gnocchi-3.1.9/gnocchi/tests/test_utils.py
--- gnocchi-3.1.2/gnocchi/tests/test_utils.py	2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/tests/test_utils.py	2017-07-24 16:25:03.000000000 +0000
@@ -13,6 +13,7 @@
 # under the License.
 import datetime
 import os
+import uuid
 
 import iso8601
 import mock
@@ -57,3 +58,21 @@
             utils.to_datetime(utils.to_timestamp(1425652440.4)),
             datetime.datetime(2015, 3, 6, 14, 34, 0, 400000,
                               tzinfo=iso8601.iso8601.UTC))
+
+
+class TestResourceUUID(tests_base.TestCase):
+    def test_conversion(self):
+        self.assertEqual(
+            uuid.UUID('ba571521-1de6-5aff-b183-1535fd6eb5d0'),
+            utils.ResourceUUID(
+                uuid.UUID('ba571521-1de6-5aff-b183-1535fd6eb5d0'),
+                "bar"))
+        self.assertEqual(
+            uuid.UUID('ba571521-1de6-5aff-b183-1535fd6eb5d0'),
+            utils.ResourceUUID("foo", "bar"))
+        self.assertEqual(
+            uuid.UUID('4efb21f6-3d19-5fe3-910b-be8f0f727846'),
+            utils.ResourceUUID("foo", None))
+        self.assertEqual(
+            uuid.UUID('853e5c64-f45e-58b2-999c-96df856fbe3d'),
+            utils.ResourceUUID("foo", ""))
diff -Nru gnocchi-3.1.2/gnocchi/utils.py gnocchi-3.1.9/gnocchi/utils.py
--- gnocchi-3.1.2/gnocchi/utils.py	2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/gnocchi/utils.py	2017-07-24 16:25:03.000000000 +0000
@@ -27,7 +27,6 @@
 from oslo_utils import timeutils
 import pandas as pd
 import six
-import tenacity
 from tooz import coordination
 
 
@@ -48,6 +47,8 @@
             return uuid.UUID(value)
         except ValueError:
             if len(value) <= 255:
+                if creator is None:
+                    creator = "\x00"
                 # value/creator must be str (unicode) in Python 3 and str
                 # (bytes) in Python 2. It's not logical, I know.
                 if six.PY2:
@@ -66,28 +67,10 @@
             raise ValueError(e)
 
 
-# Retry with exponential backoff for up to 1 minute
-retry = tenacity.retry(
-    wait=tenacity.wait_exponential(multiplier=0.5, max=60),
-    # Never retry except when explicitly asked by raising TryAgain
-    retry=tenacity.retry_never,
-    reraise=True)
-
-
-# TODO(jd) Move this to tooz?
-@retry
-def _enable_coordination(coord):
-    try:
-        coord.start(start_heart=True)
-    except Exception as e:
-        LOG.error("Unable to start coordinator: %s", e)
-        raise tenacity.TryAgain(e)
-
-
 def get_coordinator_and_start(url):
     my_id = str(uuid.uuid4())
     coord = coordination.get_coordinator(url, my_id)
-    _enable_coordination(coord)
+    coord.start(start_heart=True)
     return coord, my_id
diff -Nru gnocchi-3.1.2/gnocchi.egg-info/pbr.json gnocchi-3.1.9/gnocchi.egg-info/pbr.json
--- gnocchi-3.1.2/gnocchi.egg-info/pbr.json	2017-03-20 12:31:21.000000000 +0000
+++ gnocchi-3.1.9/gnocchi.egg-info/pbr.json	2017-07-24 16:25:26.000000000 +0000
@@ -1 +1 @@
-{"git_version": "0f73c75", "is_release": true}
\ No newline at end of file
+{"git_version": "0e45d87f", "is_release": true}
\ No newline at end of file
diff -Nru gnocchi-3.1.2/gnocchi.egg-info/PKG-INFO gnocchi-3.1.9/gnocchi.egg-info/PKG-INFO
--- gnocchi-3.1.2/gnocchi.egg-info/PKG-INFO	2017-03-20 12:31:21.000000000 +0000
+++ gnocchi-3.1.9/gnocchi.egg-info/PKG-INFO	2017-07-24 16:25:26.000000000 +0000
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: gnocchi
-Version: 3.1.2
+Version: 3.1.9
 Summary: Metric as a Service
 Home-page: http://gnocchi.xyz
 Author: OpenStack
diff -Nru gnocchi-3.1.2/gnocchi.egg-info/requires.txt gnocchi-3.1.9/gnocchi.egg-info/requires.txt
--- gnocchi-3.1.2/gnocchi.egg-info/requires.txt	2017-03-20 12:31:21.000000000 +0000
+++ gnocchi-3.1.9/gnocchi.egg-info/requires.txt	2017-07-24 16:25:26.000000000 +0000
@@ -18,12 +18,14 @@
 ujson
 voluptuous
 werkzeug
-trollius
 tenacity>=3.1.0
 WebOb>=1.4.1
 Paste
 PasteDeploy
 
+[:(python_version < '3.4')]
+trollius
+
 [ceph]
 msgpack-python
 lz4<0.9.0
@@ -37,7 +39,7 @@
 
 [doc]
 oslosphinx>=2.2.0
-sphinx
+sphinx<1.6.0
 sphinxcontrib-httpdomain
 PyYAML
 Jinja2
@@ -79,7 +81,7 @@
 tooz>=1.38
 
 [test]
-pifpaf>=0.12.0
+pifpaf[ceph,gnocchi]>=0.12.0
 gabbi>=1.21.0
 coverage>=3.6
 fixtures
diff -Nru gnocchi-3.1.2/gnocchi.egg-info/SOURCES.txt gnocchi-3.1.9/gnocchi.egg-info/SOURCES.txt
--- gnocchi-3.1.2/gnocchi.egg-info/SOURCES.txt	2017-03-20 12:31:23.000000000 +0000
+++ gnocchi-3.1.9/gnocchi.egg-info/SOURCES.txt	2017-07-24 16:26:10.000000000 +0000
@@ -1,4 +1,5 @@
 .testr.conf
+.travis.yml
 AUTHORS
 ChangeLog
 LICENSE
@@ -39,6 +40,7 @@
 doc/source/releasenotes/3.0.rst
 doc/source/releasenotes/index.rst
 doc/source/releasenotes/unreleased.rst
+etc/gnocchi/gnocchi.conf
 gnocchi/__init__.py
 gnocchi/archive_policy.py
 gnocchi/carbonara.py
diff -Nru gnocchi-3.1.2/PKG-INFO gnocchi-3.1.9/PKG-INFO
--- gnocchi-3.1.2/PKG-INFO	2017-03-20 12:31:23.000000000 +0000
+++ gnocchi-3.1.9/PKG-INFO	2017-07-24 16:26:11.000000000 +0000
@@ -1,6 +1,6 @@
 Metadata-Version: 1.1
 Name: gnocchi
-Version: 3.1.2
+Version: 3.1.9
 Summary: Metric as a Service
 Home-page: http://gnocchi.xyz
 Author: OpenStack
diff -Nru gnocchi-3.1.2/run-tests.sh gnocchi-3.1.9/run-tests.sh
--- gnocchi-3.1.2/run-tests.sh	2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/run-tests.sh	2017-07-24 16:25:03.000000000 +0000
@@ -1,5 +1,6 @@
 #!/bin/bash -x
 set -e
+PIDS=""
 GNOCCHI_TEST_STORAGE_DRIVERS=${GNOCCHI_TEST_STORAGE_DRIVERS:-file}
 GNOCCHI_TEST_INDEXER_DRIVERS=${GNOCCHI_TEST_INDEXER_DRIVERS:-postgresql}
 for storage in ${GNOCCHI_TEST_STORAGE_DRIVERS}
@@ -7,6 +8,7 @@
     export GNOCCHI_TEST_STORAGE_DRIVER=$storage
     for indexer in ${GNOCCHI_TEST_INDEXER_DRIVERS}
     do
+        {
         case $GNOCCHI_TEST_STORAGE_DRIVER in
             ceph)
                 pifpaf run ceph -- pifpaf -g GNOCCHI_INDEXER_URL run $indexer -- ./tools/pretty_tox.sh $*
@@ -27,5 +29,17 @@
                 pifpaf -g GNOCCHI_INDEXER_URL run $indexer -- ./tools/pretty_tox.sh $*
                 ;;
         esac
+        # NOTE(sileht): Start all storage tests at once
+        } &
+        PIDS="$PIDS $!"
     done
+    # NOTE(sileht): Wait all storage tests, we tracks pid
+    # because wait without pid always return 0
+    for pid in $PIDS; do
+        wait $pid
+    done
+    PIDS=""
+    # TODO(sileht): the output can be a mess with this
+    # Create a less verbose testrun output (with dot like nose ?)
+    # merge all subunit output and print it in after_script in travis
 done
diff -Nru gnocchi-3.1.2/setup.cfg gnocchi-3.1.9/setup.cfg
--- gnocchi-3.1.2/setup.cfg	2017-03-20 12:31:23.000000000 +0000
+++ gnocchi-3.1.9/setup.cfg	2017-07-24 16:26:11.000000000 +0000
@@ -59,13 +59,13 @@
 	tooz>=1.38
 doc = 
 	oslosphinx>=2.2.0
-	sphinx
+	sphinx<1.6.0
 	sphinxcontrib-httpdomain
 	PyYAML
 	Jinja2
 	reno>=1.6.2
 test = 
-	pifpaf>=0.12.0
+	pifpaf[ceph,gnocchi]>=0.12.0
 	gabbi>=1.21.0
 	coverage>=3.6
 	fixtures
diff -Nru gnocchi-3.1.2/tox.ini gnocchi-3.1.9/tox.ini
--- gnocchi-3.1.2/tox.ini	2017-03-20 12:29:04.000000000 +0000
+++ gnocchi-3.1.9/tox.ini	2017-07-24 16:25:03.000000000 +0000
@@ -41,7 +41,7 @@
 usedevelop = False
 setenv = GNOCCHI_VARIANT=test,postgresql,file
 deps = gnocchi[{env:GNOCCHI_VARIANT}]>=3.0,<3.1
-       pifpaf>=0.13
+       pifpaf[gnocchi]>=0.13
        gnocchiclient>=2.8.0
 commands = pifpaf --env-prefix INDEXER run postgresql {toxinidir}/run-upgrade-tests.sh {posargs}
 
@@ -55,7 +55,7 @@
 setenv = GNOCCHI_VARIANT=test,mysql,ceph,ceph_recommended_lib
 deps = gnocchi[{env:GNOCCHI_VARIANT}]>=3.0,<3.1
        gnocchiclient>=2.8.0
-       pifpaf>=0.13
+       pifpaf[ceph,gnocchi]>=0.13
 commands = pifpaf --env-prefix INDEXER run mysql -- pifpaf --env-prefix STORAGE run ceph {toxinidir}/run-upgrade-tests.sh {posargs}
 
 [testenv:py35-postgresql-file-upgrade-from-2.2]
@@ -67,7 +67,7 @@
 usedevelop = False
 setenv = GNOCCHI_VARIANT=test,postgresql,file
 deps = gnocchi[{env:GNOCCHI_VARIANT}]>=2.2,<2.3
-       pifpaf>=0.13
+       pifpaf[gnocchi]>=0.13
        gnocchiclient>=2.8.0
 commands = pifpaf --env-prefix INDEXER run postgresql {toxinidir}/run-upgrade-tests.sh {posargs}
 
@@ -81,7 +81,7 @@
 setenv = GNOCCHI_VARIANT=test,mysql,ceph,ceph_recommended_lib
 deps = gnocchi[{env:GNOCCHI_VARIANT}]>=2.2,<2.3
        gnocchiclient>=2.8.0
-       pifpaf>=0.13
+       pifpaf[ceph,gnocchi]>=0.13
        cradox # cradox is required because 2.2 extra names are incorrect
 commands = pifpaf --env-prefix INDEXER run mysql -- pifpaf --env-prefix STORAGE run ceph {toxinidir}/run-upgrade-tests.sh {posargs}
diff -Nru gnocchi-3.1.2/.travis.yml gnocchi-3.1.9/.travis.yml
--- gnocchi-3.1.2/.travis.yml	1970-01-01 00:00:00.000000000 +0000
+++ gnocchi-3.1.9/.travis.yml	2017-07-24 16:25:03.000000000 +0000
@@ -0,0 +1,47 @@
+language: python
+sudo: required
+
+services:
+  - docker
+
+cache:
+  directories:
+    - ~/.cache/pip
+
+env:
+  - TARGET: bashate
+  - TARGET: pep8
+  - TARGET: docs
+
+  - TARGET: py27-mysql-ceph-upgrade-from-2.2
+  - TARGET: py35-postgresql-file-upgrade-from-2.2
+  - TARGET: py27-mysql-ceph-upgrade-from-3.0
+  - TARGET: py35-postgresql-file-upgrade-from-3.0
+
+  - TARGET: py27-mysql
+  - TARGET: py35-mysql
+  - TARGET: py27-postgresql
+  - TARGET: py35-postgresql
+
+before_script:
+  # Travis We need to fetch all tags/branches for documentation target
+  - case $TARGET in
+    docs*)
+      git config --get-all remote.origin.fetch;
+      git config --unset-all remote.origin.fetch;
+      git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/*;
+      git config --get-all remote.origin.fetch;
+      git fetch --unshallow --tags;
+      ;;
+    esac
+
+install:
+  - docker pull gnocchixyz/ci-tools:latest
+
+script:
+  - docker run -v ~/.cache/pip:/home/tester/.cache/pip -v $(pwd):/home/tester/src gnocchixyz/ci-tools:latest tox -e ${TARGET}
+
+notifications:
+  email: false
+  irc:
+    on_success: change
+    on_failure: always
+    channels:
+      - "irc.freenode.org#gnocchi"
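
Editor's note on the ResourceUUID changes above ("Use NULL as creator in ResourceUUID conversion"): non-UUID resource ids are deterministically hashed to a version-5 UUID from the id string and the creator, and the 3.1.3 fix substitutes a NUL byte when the creator is unknown (None) rather than hashing the string "None". The following is a minimal sketch of that scheme, not gnocchi's exact code; the namespace constant and the NUL-joined concatenation are assumptions made for illustration.

```python
import uuid

# Illustrative namespace; gnocchi defines its own constant in gnocchi/utils.py.
RESOURCE_ID_NAMESPACE = uuid.UUID('0a7a15ff-aa13-4ac2-897c-9bdf30ce175b')


def resource_uuid(value, creator):
    """Convert a resource id to a UUID (sketch of the fixed behavior).

    A value that already is a UUID passes through unchanged; any other
    string is hashed together with the creator via uuid5, with NUL
    standing in for a missing (None) creator.
    """
    if isinstance(value, uuid.UUID):
        return value
    try:
        return uuid.UUID(value)
    except ValueError:
        if len(value) > 255:
            raise ValueError("resource id too long: %r" % value)
        if creator is None:
            creator = "\x00"  # NULL creator, not the string "None"
        return uuid.uuid5(RESOURCE_ID_NAMESPACE, value + "\x00" + creator)
```

The conversion is deterministic, and None and "" creators map to different UUIDs, which is what the new assertions added to test_utils.py exercise.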