diff -Nru heat-11.0.0~b1/AUTHORS heat-11.0.0~b2/AUTHORS --- heat-11.0.0~b1/AUTHORS 2018-04-19 19:39:43.000000000 +0000 +++ heat-11.0.0~b2/AUTHORS 2018-06-07 22:15:39.000000000 +0000 @@ -196,6 +196,7 @@ Joshua Harlow JuPing Juan Antonio Osorio Robles +Julia Kreger Julia Varlamova Julian Sy Julien Danjou @@ -440,6 +441,7 @@ guohliu hgangwx hmonika +huangshan huangtianhua igor ishant diff -Nru heat-11.0.0~b1/ChangeLog heat-11.0.0~b2/ChangeLog --- heat-11.0.0~b1/ChangeLog 2018-04-19 19:39:42.000000000 +0000 +++ heat-11.0.0~b2/ChangeLog 2018-06-07 22:15:38.000000000 +0000 @@ -1,12 +1,59 @@ CHANGES ======= +11.0.0.0b2 +---------- + +* Switch to neutron-\* service names +* Stop testing neutron-lbaas in gate jobs +* Remove mox from openstack\_v1/test\_stacks +* Handle new oslo.service release +* Delete internal ports for ERROR-ed nodes +* Reset resource replaced\_by field for rollback +* Download octavia image in tests +* Keep old files in file map for rolling update +* Don't allow nested or stacks in FAILED state to be migrated +* Update http links for doc migration +* Fix gerrit tool query statement +* Add retry for resource\_purge\_deleted call +* Change non-apache job to non-voting +* Sync support network type for provider network +* Fix debug logs in \_stale\_resource\_needs\_retry() +* Retry on deadlock in purge\_deleted +* Remove mox usage from \`aws/test\_security\_group.py\` +* Remove mox usage from test\_stack +* Remove mox usage from \`aws/test\_network\_interface.py\` +* Remove mox usage from \`aws/test\_instance\_network.py\` +* Remove mox usage from \`aws/test\_waitcondition.py\` +* Remove obsolete identity-v3-only job +* Add Rocky versions +* Make resource requires a set instead of a list +* Create replacement resources with correct requires +* Calculate the requires list in check\_resource +* Don't pass input\_data to Resource.delete\_convergence() +* Avoid double-write when updating FAILED rsrc with no change +* Retry resource check if atomic key 
incremented +* Do deepcopy when copying templates +* Remove mox usage from test\_stack\_resources and tools +* Remove mox usage from test\_software\_config +* Remove mox usage from test\_engine\_service +* Fix nova fakes for server listing + 11.0.0.0b1 ---------- +* Log traversal ID when beginning +* Remove install-requirements script +* Remove mox usage from test\_resource +* Docs: modernise links +* Docs: Fix broken external links +* Docs: use local references for internal links * Docs: Reorganise landing page +* Fix lower-constraints * Remove mox usage from test\_api\_ec2token +* Increment resource atomic key when storing attributes * Fixing Senlin incompatibility with openstacksdk 0.11.x +* Fix indentation in hot\_spec.rst * Fixing unicode issue when to\_dict is called on py2.7 env * Remove mox from test\_sqlalchemy\_api * Remove mox from test\_provider\_template @@ -17,16 +64,20 @@ * Remove mox from test\_event * Remove mox from test\_stack\_user * Remove mox from test\_urlfetch +* Remove mox from test\_neutron\_loadbalancer 4/4 - all rest tests * Remove mox from test\_scheduler.WrapperTaskTest * Remove mox from test\_scheduler.TaskTest * Remove mox from test\_neutron\_loadbalancer 3/4 - PoolMemberTest * Remove mox from test\_neutron\_loadbalancer 2/4 - PoolTest +* Remove mox usage from test\_vpc (part 1) * remove mox usage from test\_subscription * remove mox usage from test\_queue * Remove mox usage from test\_server\_tags +* Remove mox usage from test\_docker\_container * Fix broken test in DockerContainerTest * Imported Translations from Zanata * Remove mox from openstack\_v1/test\_events +* Remove mox from test\_instance * Updated from global requirements * Add MicroversionMixin for microversion support * Remove mox from openstack\_v1/test\_resources @@ -42,10 +93,14 @@ * Remove mox from test\_server * Generate user passwords with special characters * Fix entropy problems with OS::Random::String +* Configure hidden tag for tempest test * Remove mox usage 
from test\_nokey * Remove mox from test\_neutron\_metering * Updated from global requirements +* Remove mox from test\_neutron\_security\_group * Persist external resources on update +* Docs: Make stack domain users docs visible +* Remove mox from test-neutron-firewall * Create doc/requirements.txt * Updated from global requirements * Remove mox from test\_neutron\_network\_gateway @@ -61,6 +116,7 @@ * Remove mox from test\_neutron\_vpnservice * Remove mox usage from test\_scaling\_group * Remove mox usage from test\_heat\_autoscaling\_policy +* Correct behaviour of update\_replace property in test resource * Remove mox from test\_extraroute * Remove mox from test\_neutron\_floating\_ip * Remove mox from test\_waitcondition @@ -88,6 +144,7 @@ * Unit tests: Fix broken Monasca client test * Pass mistral execution argument by name * Imported Translations from Zanata +* Ignore dns domain NotFound when deleting record * Imported Translations from Zanata * Update reno for stable/queens * Always use string ID for WaitConditionHandle signals @@ -119,6 +176,8 @@ * Updated from global requirements * Cleanup remaning doc for CloudWatch API * zun: add property mount to container +* Log useful information in the API log +* Move context middleware earlier in pipeline * Replace random with SystemRandom for RandomString * Fix for None base\_url for Monasca client * Imported Translations from Zanata @@ -298,6 +357,7 @@ ---------- * Check for existing interfaces before adding implicit one +* Use stack\_id of None for service timer * Replace pycrypto with cryptography * Do not validate property network of sahara cluster * Imported Translations from Zanata @@ -346,6 +406,8 @@ * Updated from global requirements * Fix unit tests with oslo\_messaging 5.32.0 * Add default configuration files to data\_files +* Add catch-all for property errors in implicit dependencies +* Ignore property errors in implicit dependencies * Refactor FloatingIP add\_dependencies() method * Update incorrect 
timezone description * Updated from global requirements @@ -442,6 +504,7 @@ * Move install guides to doc/source/install * Support tenacity exponential backoff retry on resource sync * Don't get resource twice in resource\_signal() +* Trivial:remove unused import and add reasonable path in import * Neutron resources observe reality implementation * Updated from global requirements * Fix no-change updates of failed resources with restricted actions diff -Nru heat-11.0.0~b1/contrib/heat_docker/heat_docker/tests/test_docker_container.py heat-11.0.0~b2/contrib/heat_docker/heat_docker/tests/test_docker_container.py --- heat-11.0.0~b1/contrib/heat_docker/heat_docker/tests/test_docker_container.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/contrib/heat_docker/heat_docker/tests/test_docker_container.py 2018-06-07 22:12:28.000000000 +0000 @@ -59,7 +59,6 @@ super(DockerContainerTest, self).setUp() for res_name, res_class in docker_container.resource_mapping().items(): resource._register_class(res_name, res_class) - self.addCleanup(self.m.VerifyAll) def create_container(self, resource_name): t = template_format.parse(template) @@ -68,11 +67,9 @@ resource_name, self.stack.t.resource_definitions(self.stack)[resource_name], self.stack) - self.m.StubOutWithMock(resource, 'get_client') - resource.get_client().MultipleTimes().AndReturn( - docker.Client()) + self.patchobject(resource, 'get_client', + return_value=docker.Client()) self.assertIsNone(resource.validate()) - self.m.ReplayAll() scheduler.TaskRunner(resource.create)() self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) @@ -99,11 +96,9 @@ props['name'] = 'super-blog' resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) - self.m.StubOutWithMock(resource, 'get_client') - resource.get_client().MultipleTimes().AndReturn( - docker.Client()) + self.patchobject(resource, 'get_client', + return_value=docker.Client()) 
self.assertIsNone(resource.validate()) - self.m.ReplayAll() scheduler.TaskRunner(resource.create)() self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) @@ -142,11 +137,9 @@ props['links'] = {'db': 'mysql'} resource = docker_container.DockerContainer( 'Blog', definition.freeze(properties=props), self.stack) - self.m.StubOutWithMock(resource, 'get_client') - resource.get_client().MultipleTimes().AndReturn( - docker.Client()) + self.patchobject(resource, 'get_client', + return_value=docker.Client()) self.assertIsNone(resource.validate()) - self.m.ReplayAll() scheduler.TaskRunner(resource.create)() self.assertEqual((resource.CREATE, resource.COMPLETE), resource.state) @@ -188,7 +181,6 @@ raise self.assertIs(False, exists) - self.m.VerifyAll() @testtools.skipIf(docker is None, 'docker-py not available') def test_resource_delete_exception(self): @@ -197,18 +189,18 @@ response.content = 'some content' container = self.create_container('Blog') - self.m.StubOutWithMock(container.get_client(), 'kill') - container.get_client().kill(container.resource_id).AndRaise( - docker.errors.APIError('Not found', response)) - - self.m.StubOutWithMock(container, '_get_container_status') - container._get_container_status(container.resource_id).AndRaise( - docker.errors.APIError('Not found', response)) - - self.m.ReplayAll() - + self.patchobject(container.get_client(), 'kill', + side_effect=[docker.errors.APIError( + 'Not found', response)]) + + self.patchobject(container, '_get_container_status', + side_effect=[docker.errors.APIError( + 'Not found', response)]) scheduler.TaskRunner(container.delete)() - self.m.VerifyAll() + container.get_client().kill.assert_called_once_with( + container.resource_id) + container._get_container_status.assert_called_once_with( + container.resource_id) def test_resource_suspend_resume(self): container = self.create_container('Blog') diff -Nru heat-11.0.0~b1/contrib/heat_docker/setup.cfg heat-11.0.0~b2/contrib/heat_docker/setup.cfg --- 
heat-11.0.0~b1/contrib/heat_docker/setup.cfg 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/contrib/heat_docker/setup.cfg 2018-06-07 22:12:28.000000000 +0000 @@ -5,7 +5,7 @@ README.md author = OpenStack author-email = openstack-dev@lists.openstack.org -home-page = http://docs.openstack.org/developer/heat/ +home-page = https://docs.openstack.org/heat/latest/ classifier = Environment :: OpenStack Intended Audience :: Information Technology diff -Nru heat-11.0.0~b1/CONTRIBUTING.rst heat-11.0.0~b2/CONTRIBUTING.rst --- heat-11.0.0~b1/CONTRIBUTING.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/CONTRIBUTING.rst 2018-06-07 22:12:28.000000000 +0000 @@ -1,16 +1,16 @@ If you would like to contribute to the development of OpenStack, you must follow the steps in this page: - http://docs.openstack.org/infra/manual/developers.html + https://docs.openstack.org/infra/manual/developers.html Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at: - http://docs.openstack.org/infra/manual/developers.html#development-workflow + https://docs.openstack.org/infra/manual/developers.html#development-workflow Pull requests submitted through GitHub will be ignored. -Bugs should be filed on Launchpad, not GitHub: +Bugs should be filed on OpenStack Storyboard, not GitHub: - https://bugs.launchpad.net/heat + https://storyboard.openstack.org/#!/project/989 diff -Nru heat-11.0.0~b1/debian/changelog heat-11.0.0~b2/debian/changelog --- heat-11.0.0~b1/debian/changelog 2018-05-16 18:13:47.000000000 +0000 +++ heat-11.0.0~b2/debian/changelog 2018-06-13 18:07:13.000000000 +0000 @@ -1,3 +1,9 @@ +heat (1:11.0.0~b2-0ubuntu1) cosmic; urgency=medium + + * New upstream milestone for OpenStack Rocky. + + -- Corey Bryant Wed, 13 Jun 2018 14:07:13 -0400 + heat (1:11.0.0~b1-0ubuntu1) cosmic; urgency=medium * d/watch: Scope to 11.x series. 
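The `test_docker_container.py` hunks above replace mox's record/replay/verify cycle (`StubOutWithMock` / `ReplayAll` / `VerifyAll`) with `patchobject`-style stubs that are asserted after the fact. A minimal sketch of the same pattern using only the standard library's `unittest.mock` (the `Resource` and `FakeClient` classes here are invented for illustration, not heat's real classes):

```python
from unittest import mock


class FakeClient:
    """Stand-in for docker.Client in this sketch."""


class Resource:
    """Hypothetical resource class, for illustration only."""

    def get_client(self):
        raise NotImplementedError


resource = Resource()

# Old mox style (record, replay, then verify at cleanup):
#   self.m.StubOutWithMock(resource, 'get_client')
#   resource.get_client().MultipleTimes().AndReturn(docker.Client())
#   self.m.ReplayAll() ... self.m.VerifyAll()
#
# New style: patch the method up front, make the call, assert afterwards.
with mock.patch.object(Resource, 'get_client',
                       return_value=FakeClient()) as stubbed:
    client = resource.get_client()

stubbed.assert_called_once_with()
```

The `side_effect=[...]` form used in `test_resource_delete_exception` follows the same shape, raising the listed exception instead of returning a value.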
diff -Nru heat-11.0.0~b1/doc/source/admin/index.rst heat-11.0.0~b2/doc/source/admin/index.rst --- heat-11.0.0~b1/doc/source/admin/index.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/admin/index.rst 2018-06-07 22:12:28.000000000 +0000 @@ -5,6 +5,6 @@ .. toctree:: :maxdepth: 2 - introduction.rst - auth-model.rst - stack-domain.rst + introduction + auth-model + stack-domain-users diff -Nru heat-11.0.0~b1/doc/source/admin/introduction.rst heat-11.0.0~b2/doc/source/admin/introduction.rst --- heat-11.0.0~b1/doc/source/admin/introduction.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/admin/introduction.rst 2018-06-07 22:12:28.000000000 +0000 @@ -28,5 +28,5 @@ a web interface. For more information about using the Orchestration service through the -command line, see the `OpenStack Command-Line Interface Reference -`_. +command line, see the `Heat Command-Line Interface reference +`_. diff -Nru heat-11.0.0~b1/doc/source/contributing/blueprints.rst heat-11.0.0~b2/doc/source/contributing/blueprints.rst --- heat-11.0.0~b1/doc/source/contributing/blueprints.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/contributing/blueprints.rst 2018-06-07 22:12:28.000000000 +0000 @@ -2,13 +2,13 @@ ==================== The Heat team uses the `heat-specs -`_ repository for its +`_ repository for its specification reviews. Detailed information can be found `here `_. Please note that we use a template for spec submissions. Please use the `template for the latest release -`_. +`_. It is not required to fill out all sections in the template. 
Spec Notes diff -Nru heat-11.0.0~b1/doc/source/developing_guides/architecture.rst heat-11.0.0~b2/doc/source/developing_guides/architecture.rst --- heat-11.0.0~b1/doc/source/developing_guides/architecture.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/developing_guides/architecture.rst 2018-06-07 22:12:28.000000000 +0000 @@ -92,5 +92,5 @@ The templates integrate well with Puppet_ and Chef_. .. _Puppet: https://s3.amazonaws.com/cloudformation-examples/IntegratingAWSCloudFormationWithPuppet.pdf -.. _Chef: http://www.full360.com/2011/02/27/integrating-aws-cloudformation-and-chef.html -.. _`AWS CloudFormation`: http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/Welcome.html?r=7078 +.. _Chef: https://www.full360.com/2011/02/27/integrating-aws-cloudformation-and-chef.html +.. _`AWS CloudFormation`: https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/Welcome.html?r=7078 diff -Nru heat-11.0.0~b1/doc/source/developing_guides/gmr.rst heat-11.0.0~b2/doc/source/developing_guides/gmr.rst --- heat-11.0.0~b1/doc/source/developing_guides/gmr.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/developing_guides/gmr.rst 2018-06-07 22:12:28.000000000 +0000 @@ -90,5 +90,4 @@ As mentioned above, additional sections can be added to the GMR for a particular executable. For more information, see the documentation about -``oslo.reports``: -`oslo.reports `_ +`oslo.reports `_ diff -Nru heat-11.0.0~b1/doc/source/developing_guides/pluginguide.rst heat-11.0.0~b2/doc/source/developing_guides/pluginguide.rst --- heat-11.0.0~b1/doc/source/developing_guides/pluginguide.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/developing_guides/pluginguide.rst 2018-06-07 22:12:28.000000000 +0000 @@ -669,8 +669,8 @@ Simply place the file containing your resource in one of these directories and the engine will make them available next time the service starts. 
-See one of the Installation Guides at http://docs.OpenStack.org/ for -more information on configuring the orchestration service. +See :doc:`<../configuration/index>` for more information on configuring the +orchestration service. Testing ------- diff -Nru heat-11.0.0~b1/doc/source/ext/resources.py heat-11.0.0~b2/doc/source/ext/resources.py --- heat-11.0.0~b1/doc/source/ext/resources.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/ext/resources.py 2018-06-07 22:12:28.000000000 +0000 @@ -36,7 +36,8 @@ '7.0.0': 'Newton', '8.0.0': 'Ocata', '9.0.0': 'Pike', - '10.0.0': 'Queens'} + '10.0.0': 'Queens', + '11.0.0': 'Rocky'} all_resources = {} diff -Nru heat-11.0.0~b1/doc/source/_extra/.htaccess heat-11.0.0~b2/doc/source/_extra/.htaccess --- heat-11.0.0~b1/doc/source/_extra/.htaccess 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/_extra/.htaccess 2018-06-07 22:12:28.000000000 +0000 @@ -1,3 +1,6 @@ # The top-level docs project will redirect URLs from the old /developer docs # to their equivalent pages on the new docs.openstack.org only if this file # exists. + +redirectmatch 301 ^/heat/([^/]+)/(architecture|pluginguide|schedulerhints|gmr|supportstatus)\.html$ /heat/$1/developing_guides/$2.html +redirectmatch 301 ^/heat/([^/]+)/(scale_deployment)\.html$ /heat/$1/operating_guides/$2.html diff -Nru heat-11.0.0~b1/doc/source/getting_started/create_a_stack.rst heat-11.0.0~b2/doc/source/getting_started/create_a_stack.rst --- heat-11.0.0~b1/doc/source/getting_started/create_a_stack.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/getting_started/create_a_stack.rst 2018-06-07 22:12:28.000000000 +0000 @@ -105,9 +105,8 @@ $ openstack stack delete teststack $ openstack stack list -You can explore other heat commands by referring to the -`Heat chapter -`_ -of the `OpenStack Command-Line Interface Reference -`_ then read -the :ref:`template-guide` and start authoring your own templates. 
+You can explore other heat commands by referring to the `Heat command reference +`_ for the +`OpenStack Command-Line Interface +`_; then read the +:ref:`template-guide` and start authoring your own templates. diff -Nru heat-11.0.0~b1/doc/source/getting_started/on_devstack.rst heat-11.0.0~b2/doc/source/getting_started/on_devstack.rst --- heat-11.0.0~b1/doc/source/getting_started/on_devstack.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/getting_started/on_devstack.rst 2018-06-07 22:12:28.000000000 +0000 @@ -50,7 +50,7 @@ a VM image that heat can launch. To do that add the following to `[[local|localrc]]` section of `local.conf`:: - IMAGE_URL_SITE="http://download.fedoraproject.org" + IMAGE_URL_SITE="https://download.fedoraproject.org" IMAGE_URL_PATH="/pub/fedora/linux/releases/25/CloudImages/x86_64/images/" IMAGE_URL_FILE="Fedora-Cloud-Base-25-1.3.x86_64.qcow2" IMAGE_URLS+=","$IMAGE_URL_SITE$IMAGE_URL_PATH$IMAGE_URL_FILE diff -Nru heat-11.0.0~b1/doc/source/getting_started/on_fedora.rst heat-11.0.0~b2/doc/source/getting_started/on_fedora.rst --- heat-11.0.0~b1/doc/source/getting_started/on_fedora.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/getting_started/on_fedora.rst 2018-06-07 22:12:28.000000000 +0000 @@ -14,13 +14,13 @@ Installing OpenStack and Heat on RHEL/Fedora/CentOS --------------------------------------------------- -Go to the `OpenStack Documentation `_ for -the latest version of the Installation Guide for Red Hat Enterprise -Linux, CentOS and Fedora which includes a chapter on installing the -Orchestration module (Heat). +Go to the `OpenStack Documentation +`_ for the latest version of the +Installation Guide for Red Hat Enterprise Linux, CentOS and Fedora which +includes a chapter on installing the Orchestration module (Heat). There are instructions for `installing the RDO OpenStack -`_ on Fedora and CentOS. +`_ on Fedora and CentOS. 
If installing with packstack, you can install heat by specifying ``--os-heat-install=y`` in your packstack invocation, or setting diff -Nru heat-11.0.0~b1/doc/source/getting_started/on_other.rst heat-11.0.0~b2/doc/source/getting_started/on_other.rst --- heat-11.0.0~b1/doc/source/getting_started/on_other.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/getting_started/on_other.rst 2018-06-07 22:12:28.000000000 +0000 @@ -15,8 +15,6 @@ =========================================== - There is a `Debian packaging team for OpenStack`_. -- There are instructions for `installing OpenStack on Ubuntu`_. - Various other distributions may have packaging teams or getting started guides available. .. _Debian packaging team for OpenStack: http://wiki.openstack.org/Packaging/Debian -.. _installing OpenStack on Ubuntu: http://docs.openstack.org/bexar/openstack-compute/admin/content/ch03s02.html diff -Nru heat-11.0.0~b1/doc/source/getting_started/on_ubuntu.rst heat-11.0.0~b2/doc/source/getting_started/on_ubuntu.rst --- heat-11.0.0~b1/doc/source/getting_started/on_ubuntu.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/getting_started/on_ubuntu.rst 2018-06-07 22:12:28.000000000 +0000 @@ -16,8 +16,9 @@ Heat is packaged for Debian, and Ubuntu (from 13.10) -Go to the `OpenStack Documentation `_ for -the latest version of the Installation Guide for Ubuntu which includes a -chapter on installing the Orchestration module (Heat). +Go to the `OpenStack Documentation +`_ for the latest version of the +Installation Guide for Ubuntu which includes a chapter on installing the +Orchestration module (Heat). There is a `Juju Charm for Heat ` available. diff -Nru heat-11.0.0~b1/doc/source/glossary.rst heat-11.0.0~b2/doc/source/glossary.rst --- heat-11.0.0~b1/doc/source/glossary.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/glossary.rst 2018-06-07 22:12:28.000000000 +0000 @@ -60,7 +60,7 @@ retrieve instance-specific data. 
See `Metadata service (OpenStack Administrator Guide)`_. - .. _Metadata service (OpenStack Administrator Guide): http://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service + .. _Metadata service (OpenStack Administrator Guide): https://docs.openstack.org/nova/latest/admin/networking-nova.html#metadata-service Multi-region A feature of Heat that supports deployment to multiple regions. @@ -76,9 +76,9 @@ Nova Instance metadata User-provided *key:value* pairs associated with a Compute - Instance. See `Instance specific data (OpenStack Operations Guide)`_. + Instance. See `Instance-specific data (OpenStack Operations Guide)`_. - .. _Instance specific data (OpenStack Operations Guide): http://docs.openstack.org/openstack-ops/content/instances.html#instance_specific_data + .. _Instance-specific data (OpenStack Operations Guide): https://wiki.openstack.org/wiki/OpsGuide/User-Facing_Operations#using-instance-specific-data OpenStack Open source software for building private and public clouds. @@ -99,10 +99,7 @@ Provider resource A :term:`resource` implemented by a :term:`provider template`. The parent resource's properties become the - :term:`nested stack's ` parameters. See `What are - "Providers"? (OpenStack Wiki)`_. - - .. _`What are "Providers"? (OpenStack Wiki)`: https://wiki.openstack.org/wiki/Heat/Providers#What_are_.22Providers.22.3F + :term:`nested stack's ` parameters. Provider template Allows user-definable :term:`resource providers `_ -to create and manage cloud resources. +The Orchestration service (heat) uses a :ref:`Heat Orchestration Template (HOT) +` to create and manage cloud resources. This chapter assumes a working setup of OpenStack following the -`OpenStack Installation Tutorial `_. - +`OpenStack Installation Tutorial `_. 
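The `redirectmatch` rules added to `doc/source/_extra/.htaccess` earlier in this diff map the old flat doc URLs into the new `developing_guides/` and `operating_guides/` subdirectories. Their effect can be sketched with Python's `re` module, using the same regexes translated from Apache's `RedirectMatch` syntax (the helper function is illustrative, not part of heat):

```python
import re

# Apache rule: redirectmatch 301 ^/heat/([^/]+)/(architecture|...)\.html$
#              -> /heat/$1/developing_guides/$2.html
dev_rule = re.compile(
    r'^/heat/([^/]+)/'
    r'(architecture|pluginguide|schedulerhints|gmr|supportstatus)\.html$')
ops_rule = re.compile(r'^/heat/([^/]+)/(scale_deployment)\.html$')


def redirect(path):
    """Return the new URL for an old doc path, or None if no rule matches."""
    m = dev_rule.match(path)
    if m:
        return '/heat/{}/developing_guides/{}.html'.format(*m.groups())
    m = ops_rule.match(path)
    if m:
        return '/heat/{}/operating_guides/{}.html'.format(*m.groups())
    return None


print(redirect('/heat/latest/architecture.html'))
# -> /heat/latest/developing_guides/architecture.html
```

The `([^/]+)` group keeps the redirect version-agnostic, so `/heat/latest/...`, `/heat/queens/...`, and so on all map correctly.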
diff -Nru heat-11.0.0~b1/doc/source/install/launch-instance.rst heat-11.0.0~b2/doc/source/install/launch-instance.rst --- heat-11.0.0~b1/doc/source/install/launch-instance.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/install/launch-instance.rst 2018-06-07 22:12:28.000000000 +0000 @@ -10,10 +10,7 @@ ----------------- The Orchestration service uses templates to describe stacks. -To learn about the template language, see `the Template Guide -`__ -in the `Heat developer documentation -`__. +To learn about the template language, see the :ref:`template-guide`. * Create the ``demo-template.yml`` file with the following content: diff -Nru heat-11.0.0~b1/doc/source/install/next-steps.rst heat-11.0.0~b2/doc/source/install/next-steps.rst --- heat-11.0.0~b1/doc/source/install/next-steps.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/install/next-steps.rst 2018-06-07 22:12:28.000000000 +0000 @@ -6,7 +6,7 @@ Your OpenStack environment now includes the heat service. To add more services, see the -`additional documentation on installing OpenStack `_ . +`additional documentation on installing OpenStack `_. -To learn more about the heat service, read the `Heat developer documentation -`__. +To learn more about the heat service, read the :doc:`Heat documentation +<../index>`. diff -Nru heat-11.0.0~b1/doc/source/operating_guides/httpd.rst heat-11.0.0~b2/doc/source/operating_guides/httpd.rst --- heat-11.0.0~b1/doc/source/operating_guides/httpd.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/operating_guides/httpd.rst 2018-06-07 22:12:28.000000000 +0000 @@ -98,5 +98,5 @@ The dsvm jobs in heat upstream gate uses this deployment method. -For more details on using mod_proxy_uwsgi see the official docs: -http://uwsgi-docs.readthedocs.io/en/latest/Apache.html?highlight=mod_uwsgi_proxy#mod-proxy-uwsgi +For more details on using mod_proxy_uwsgi see the `official docs +`_. 
diff -Nru heat-11.0.0~b1/doc/source/operating_guides/scale_deployment.rst heat-11.0.0~b2/doc/source/operating_guides/scale_deployment.rst --- heat-11.0.0~b1/doc/source/operating_guides/scale_deployment.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/operating_guides/scale_deployment.rst 2018-06-07 22:12:28.000000000 +0000 @@ -32,11 +32,10 @@ This guide, using a devstack installation of OpenStack, assumes that: 1. You have configured devstack from `Single Machine Installation Guide - `_; - 2. You have set up heat on devstack, as defined at `heat and DevStack - `_; - 3. You have installed `HAProxy `_ on the devstack + `_; + 2. You have set up heat on devstack, as defined at :doc:`heat and DevStack + <../getting_started/on_devstack>`; + 3. You have installed HAProxy_ on the devstack server. Architecture @@ -48,11 +47,10 @@ Basic Architecture ------------------ -The heat architecture is as defined at `heat architecture -`_ and shown in the -diagram below, where we have a CLI that sends HTTP requests to the REST and CFN -APIs, which in turn make calls using AMQP to the heat-engine. -:: +The heat architecture is as defined at :doc:`heat architecture +<../developing_guides/architecture>` and shown in the diagram below, +where we have a CLI that sends HTTP requests to the REST and CFN APIs, which in +turn make calls using AMQP to the heat-engine:: |- [REST API] -| [CLI] -- -- -- -- [ENGINE] @@ -65,7 +63,7 @@ and the CLI, a proxy has to be deployed. Because the heat CLI and APIs communicate by exchanging HTTP requests and -responses, a `HAProxy `_ HTTP load balancer server will +responses, a HAProxy_ HTTP load balancer server will be deployed between them. This way, the proxy will take the CLIs requests to the APIs and act on their @@ -354,3 +352,5 @@ option httpchk server cfn-server-1 10.0.0.2:8000 server cfn-server-2 10.0.0.3:8000 + +.. 
_HAProxy: https://www.haproxy.org/ diff -Nru heat-11.0.0~b1/doc/source/template_guide/basic_resources.rst heat-11.0.0~b2/doc/source/template_guide/basic_resources.rst --- heat-11.0.0~b1/doc/source/template_guide/basic_resources.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/template_guide/basic_resources.rst 2018-06-07 22:12:28.000000000 +0000 @@ -250,8 +250,9 @@ key_name: my_key .. note:: - For more information about key pairs, see - `Configure access and security for instances `_. + For more information about key pairs, see `Configure access and security for + instances + `_. Create a key pair ----------------- diff -Nru heat-11.0.0~b1/doc/source/template_guide/existing_templates.rst heat-11.0.0~b2/doc/source/template_guide/existing_templates.rst --- heat-11.0.0~b1/doc/source/template_guide/existing_templates.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/template_guide/existing_templates.rst 2018-06-07 22:12:28.000000000 +0000 @@ -25,4 +25,4 @@ development. Heat templates for deployment of Magento, Hadoop, MongoDB, ELK, Drupal and more can be found here. -.. _RCB Ops repository: http://github.com/rcbops/ +.. _RCB Ops repository: https://github.com/rcbops?q=RPC-Heat diff -Nru heat-11.0.0~b1/doc/source/template_guide/hello_world.rst heat-11.0.0~b2/doc/source/template_guide/hello_world.rst --- heat-11.0.0~b1/doc/source/template_guide/hello_world.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/template_guide/hello_world.rst 2018-06-07 22:12:28.000000000 +0000 @@ -51,10 +51,10 @@ image: ubuntu-trusty-x86_64 flavor: m1.small -Each HOT template must include the ``heat_template_version`` key with -the HOT version value, for example, ``2013-05-23``. A list of HOT template -versions can be found at `Heat Template Version -file `__ +Each HOT template must include the ``heat_template_version`` key with the HOT +version value, for example, ``2013-05-23``. 
Consult the :ref:`Heat template +version list ` for allowed values and their +features. The ``description`` key is optional, however it is good practice to include some useful text that describes what users can do with the template. diff -Nru heat-11.0.0~b1/doc/source/template_guide/hot_spec.rst heat-11.0.0~b2/doc/source/template_guide/hot_spec.rst --- heat-11.0.0~b1/doc/source/template_guide/hot_spec.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/template_guide/hot_spec.rst 2018-06-07 22:12:28.000000000 +0000 @@ -115,228 +115,266 @@ 2013-05-23 ---------- - The key with value ``2013-05-23`` indicates that the YAML document is a HOT - template and it may contain features implemented until the Icehouse - release. This version supports the following functions (some are back - ported to this version):: - - get_attr - get_file - get_param - get_resource - list_join - resource_facade - str_replace - Fn::Base64 - Fn::GetAZs - Fn::Join - Fn::MemberListToMap - Fn::Replace - Fn::ResourceFacade - Fn::Select - Fn::Split - Ref +The key with value ``2013-05-23`` indicates that the YAML document is a HOT +template and it may contain features implemented until the Icehouse +release. This version supports the following functions (some are back +ported to this version):: + + get_attr + get_file + get_param + get_resource + list_join + resource_facade + str_replace + Fn::Base64 + Fn::GetAZs + Fn::Join + Fn::MemberListToMap + Fn::Replace + Fn::ResourceFacade + Fn::Select + Fn::Split + Ref 2014-10-16 ---------- - The key with value ``2014-10-16`` indicates that the YAML document is a HOT - template and it may contain features added and/or removed up until the Juno - release. This version removes most CFN functions that were supported in - the Icehouse release, i.e. the ``2013-05-23`` version. 
So the supported - functions now are:: - - get_attr - get_file - get_param - get_resource - list_join - resource_facade - str_replace - Fn::Select +The key with value ``2014-10-16`` indicates that the YAML document is a HOT +template and it may contain features added and/or removed up until the Juno +release. This version removes most CFN functions that were supported in +the Icehouse release, i.e. the ``2013-05-23`` version. So the supported +functions now are:: + + get_attr + get_file + get_param + get_resource + list_join + resource_facade + str_replace + Fn::Select 2015-04-30 ---------- - The key with value ``2015-04-30`` indicates that the YAML document is a HOT - template and it may contain features added and/or removed up until the Kilo - release. This version adds the ``repeat`` function. So the complete list of - supported functions is:: - - get_attr - get_file - get_param - get_resource - list_join - repeat - digest - resource_facade - str_replace - Fn::Select +The key with value ``2015-04-30`` indicates that the YAML document is a HOT +template and it may contain features added and/or removed up until the Kilo +release. This version adds the ``repeat`` function. So the complete list of +supported functions is:: + + get_attr + get_file + get_param + get_resource + list_join + repeat + digest + resource_facade + str_replace + Fn::Select 2015-10-15 ---------- - The key with value ``2015-10-15`` indicates that the YAML document is a HOT - template and it may contain features added and/or removed up until the - Liberty release. This version removes the *Fn::Select* function, path based - ``get_attr``/``get_param`` references should be used instead. Moreover - ``get_attr`` since this version returns dict of all attributes for the - given resource excluding *show* attribute, if there's no - specified, e.g. :code:`{ get_attr: []}`. This version - also adds the str_split function and support for passing multiple lists to - the existing list_join function. 
The complete list of supported functions - is:: - - get_attr - get_file - get_param - get_resource - list_join - repeat - digest - resource_facade - str_replace - str_split +The key with value ``2015-10-15`` indicates that the YAML document is a HOT +template and it may contain features added and/or removed up until the +Liberty release. This version removes the *Fn::Select* function, path based +``get_attr``/``get_param`` references should be used instead. Moreover +``get_attr`` since this version returns dict of all attributes for the +given resource excluding *show* attribute, if there's no +specified, e.g. :code:`{ get_attr: []}`. This version +also adds the str_split function and support for passing multiple lists to +the existing list_join function. The complete list of supported functions +is:: + + get_attr + get_file + get_param + get_resource + list_join + repeat + digest + resource_facade + str_replace + str_split 2016-04-08 ---------- - The key with value ``2016-04-08`` indicates that the YAML document is a HOT - template and it may contain features added and/or removed up until the - Mitaka release. This version also adds the ``map_merge`` function which - can be used to merge the contents of maps. The complete list of supported - functions is:: - - digest - get_attr - get_file - get_param - get_resource - list_join - map_merge - repeat - resource_facade - str_replace - str_split +The key with value ``2016-04-08`` indicates that the YAML document is a HOT +template and it may contain features added and/or removed up until the +Mitaka release. This version also adds the ``map_merge`` function which +can be used to merge the contents of maps. 
The complete list of supported +functions is:: + + digest + get_attr + get_file + get_param + get_resource + list_join + map_merge + repeat + resource_facade + str_replace + str_split 2016-10-14 | newton ------------------- - The key with value ``2016-10-14`` or ``newton`` indicates that the YAML - document is a HOT template and it may contain features added and/or removed - up until the Newton release. This version adds the ``yaql`` function which - can be used for evaluation of complex expressions, the ``map_replace`` - function that can do key/value replacements on a mapping, and the ``if`` - function which can be used to return corresponding value based on condition - evaluation. The complete list of supported functions is:: - - digest - get_attr - get_file - get_param - get_resource - list_join - map_merge - map_replace - repeat - resource_facade - str_replace - str_split - yaql - if - - This version adds ``equals`` condition function which can be used - to compare whether two values are equal, the ``not`` condition function - which acts as a NOT operator, the ``and`` condition function which acts - as an AND operator to evaluate all the specified conditions, the ``or`` - condition function which acts as an OR operator to evaluate all the - specified conditions. The complete list of supported condition - functions is:: - - equals - get_param - not - and - or +The key with value ``2016-10-14`` or ``newton`` indicates that the YAML +document is a HOT template and it may contain features added and/or removed +up until the Newton release. This version adds the ``yaql`` function which +can be used for evaluation of complex expressions, the ``map_replace`` +function that can do key/value replacements on a mapping, and the ``if`` +function which can be used to return corresponding value based on condition +evaluation. 
The complete list of supported functions is:: + + digest + get_attr + get_file + get_param + get_resource + list_join + map_merge + map_replace + repeat + resource_facade + str_replace + str_split + yaql + if + +This version adds ``equals`` condition function which can be used +to compare whether two values are equal, the ``not`` condition function +which acts as a NOT operator, the ``and`` condition function which acts +as an AND operator to evaluate all the specified conditions, the ``or`` +condition function which acts as an OR operator to evaluate all the +specified conditions. The complete list of supported condition +functions is:: + + equals + get_param + not + and + or 2017-02-24 | ocata ------------------- - The key with value ``2017-02-24`` or ``ocata`` indicates that the YAML - document is a HOT template and it may contain features added and/or removed - up until the Ocata release. This version adds the ``str_replace_strict`` - function which raises errors for missing params and the ``filter`` function - which filters out values from lists. The complete list of supported - functions is:: - - digest - filter - get_attr - get_file - get_param - get_resource - list_join - map_merge - map_replace - repeat - resource_facade - str_replace - str_replace_strict - str_split - yaql - if - - The complete list of supported condition functions is:: - - equals - get_param - not - and - or +The key with value ``2017-02-24`` or ``ocata`` indicates that the YAML +document is a HOT template and it may contain features added and/or removed +up until the Ocata release. This version adds the ``str_replace_strict`` +function which raises errors for missing params and the ``filter`` function +which filters out values from lists. 
The complete list of supported +functions is:: + + digest + filter + get_attr + get_file + get_param + get_resource + list_join + map_merge + map_replace + repeat + resource_facade + str_replace + str_replace_strict + str_split + yaql + if + +The complete list of supported condition functions is:: + + equals + get_param + not + and + or 2017-09-01 | pike ----------------- - The key with value ``2017-09-01`` or ``pike`` indicates that the YAML - document is a HOT template and it may contain features added and/or removed - up until the Pike release. This version adds the ``make_url`` function for - assembling URLs, the ``list_concat`` function for combining multiple - lists, the ``list_concat_unique`` function for combining multiple - lists without repeating items, the ``string_replace_vstrict`` function - which raises errors for missing and empty params, and the ``contains`` - function which checks whether specific value is in a sequence. The - complete list of supported functions is:: - - digest - filter - get_attr - get_file - get_param - get_resource - list_join - make_url - list_concat - list_concat_unique - contains - map_merge - map_replace - repeat - resource_facade - str_replace - str_replace_strict - str_replace_vstrict - str_split - yaql - if - - We support 'yaql' and 'contains' as condition functions in this version. - The complete list of supported condition functions is:: - - equals - get_param - not - and - or - yaql - contains +The key with value ``2017-09-01`` or ``pike`` indicates that the YAML +document is a HOT template and it may contain features added and/or removed +up until the Pike release. 
This version adds the ``make_url`` function for
+assembling URLs, the ``list_concat`` function for combining multiple
+lists, the ``list_concat_unique`` function for combining multiple
+lists without repeating items, the ``str_replace_vstrict`` function
+which raises errors for missing and empty params, and the ``contains``
+function which checks whether a specific value is in a sequence. The
+complete list of supported functions is::
+
+  digest
+  filter
+  get_attr
+  get_file
+  get_param
+  get_resource
+  list_join
+  make_url
+  list_concat
+  list_concat_unique
+  contains
+  map_merge
+  map_replace
+  repeat
+  resource_facade
+  str_replace
+  str_replace_strict
+  str_replace_vstrict
+  str_split
+  yaql
+  if
+
+We support 'yaql' and 'contains' as condition functions in this version.
+The complete list of supported condition functions is::
+
+  equals
+  get_param
+  not
+  and
+  or
+  yaql
+  contains
 
 2018-03-02 | queens
 -------------------
-    The key with value ``2018-03-02`` or ``queens`` indicates that the YAML
+The key with value ``2018-03-02`` or ``queens`` indicates that the YAML
+document is a HOT template and it may contain features added and/or removed
+up until the Queens release. The complete list of supported functions is::
+
+  digest
+  filter
+  get_attr
+  get_file
+  get_param
+  get_resource
+  list_join
+  make_url
+  list_concat
+  list_concat_unique
+  contains
+  map_merge
+  map_replace
+  repeat
+  resource_facade
+  str_replace
+  str_replace_strict
+  str_replace_vstrict
+  str_split
+  yaql
+  if
+
+The complete list of supported condition functions is::
+
+  equals
+  get_param
+  not
+  and
+  or
+  yaql
+  contains
+
+2018-08-31 | rocky
+-------------------
+    The key with value ``2018-08-31`` or ``rocky`` indicates that the YAML
     document is a HOT template and it may contain features added and/or removed
     up until the Queens release. 
The complete list of supported functions is:: diff -Nru heat-11.0.0~b1/doc/source/template_guide/software_deployment.rst heat-11.0.0~b2/doc/source/template_guide/software_deployment.rst --- heat-11.0.0~b1/doc/source/template_guide/software_deployment.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/doc/source/template_guide/software_deployment.rst 2018-06-07 22:12:28.000000000 +0000 @@ -53,9 +53,9 @@ configured config-drive or from the `Metadata service`_. How this user-data is consumed depends on the image being booted, but the most -commonly used tool for default cloud images is Cloud-init_. +commonly used tool for default cloud images is cloud-init_. -Whether the image is using Cloud-init_ or not, it should be possible to +Whether the image is using cloud-init_ or not, it should be possible to specify a shell script in the ``user_data`` property and have it be executed by the server during boot: @@ -141,7 +141,7 @@ for ``user_data_format``, it is considered legacy and ``RAW`` or ``SOFTWARE_CONFIG`` will generally be more appropriate. -For ``RAW`` the user_data is passed to Nova unmodified. For a Cloud-init_ +For ``RAW`` the user_data is passed to Nova unmodified. For a cloud-init_ enabled image, the following are both valid ``RAW`` user-data: .. code-block:: yaml @@ -357,7 +357,7 @@ user_data_format: SOFTWARE_CONFIG user_data: {get_resource: boot_script} -The resource :ref:`OS::Heat::CloudConfig` allows Cloud-init_ cloud-config to +The resource :ref:`OS::Heat::CloudConfig` allows cloud-init_ cloud-config to be represented as template YAML rather than a block string. This allows intrinsic functions to be included when building the cloud-config. 
This also ensures that the cloud-config is valid YAML, although no further checks for @@ -388,7 +388,7 @@ The resource :ref:`OS::Heat::MultipartMime` allows multiple :ref:`OS::Heat::SoftwareConfig` and :ref:`OS::Heat::CloudConfig` -resources to be combined into a single Cloud-init_ multi-part message: +resources to be combined into a single cloud-init_ multi-part message: .. code-block:: yaml @@ -779,14 +779,14 @@ .. _`AWS::CloudFormation::Init`: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html -.. _diskimage-builder: https://git.openstack.org/cgit/openstack/diskimage-builder -.. _imagefactory: http://imgfac.org/ -.. _`Metadata service`: http://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service -.. _Cloud-init: http://cloudinit.readthedocs.org/en/latest/ -.. _curl: http://curl.haxx.se/ -.. _`Orchestration API`: http://developer.openstack.org/api-ref/orchestration/v1/ +.. _diskimage-builder: https://docs.openstack.org/diskimage-builder/latest/ +.. _imagefactory: https://imgfac.org/ +.. _`Metadata service`: https://docs.openstack.org/nova/latest/admin/networking-nova.html#metadata-service +.. _cloud-init: https://cloudinit.readthedocs.io/ +.. _curl: https://curl.haxx.se/ +.. _`Orchestration API`: https://developer.openstack.org/api-ref/orchestration/v1/ .. _os-refresh-config: https://git.openstack.org/cgit/openstack/os-refresh-config .. _os-apply-config: https://git.openstack.org/cgit/openstack/os-apply-config .. _tripleo-heat-templates: https://git.openstack.org/cgit/openstack/tripleo-heat-templates .. _tripleo-image-elements: https://git.openstack.org/cgit/openstack/tripleo-image-elements -.. _puppet: http://puppetlabs.com/ +.. 
_puppet: https://puppet.com/ diff -Nru heat-11.0.0~b1/etc/heat/api-paste.ini heat-11.0.0~b2/etc/heat/api-paste.ini --- heat-11.0.0~b1/etc/heat/api-paste.ini 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/etc/heat/api-paste.ini 2018-06-07 22:12:28.000000000 +0000 @@ -1,7 +1,7 @@ # heat-api pipeline [pipeline:heat-api] -pipeline = cors request_id faultwrap http_proxy_to_wsgi versionnegotiation osprofiler authurl authtoken context apiv1app +pipeline = cors request_id faultwrap authurl authtoken context http_proxy_to_wsgi versionnegotiation osprofiler apiv1app # heat-api pipeline for standalone heat # ie. uses alternative auth backend that authenticates users against keystone @@ -12,7 +12,7 @@ # flavor = standalone # [pipeline:heat-api-standalone] -pipeline = cors request_id faultwrap http_proxy_to_wsgi versionnegotiation authurl authpassword context apiv1app +pipeline = cors request_id faultwrap authurl authpassword context http_proxy_to_wsgi versionnegotiation apiv1app # heat-api pipeline for custom cloud backends # i.e. 
in heat.conf: @@ -20,23 +20,23 @@ # flavor = custombackend # [pipeline:heat-api-custombackend] -pipeline = cors request_id faultwrap versionnegotiation context custombackendauth apiv1app +pipeline = cors request_id context faultwrap versionnegotiation custombackendauth apiv1app # To enable, in heat.conf: # [paste_deploy] # flavor = noauth # [pipeline:heat-api-noauth] -pipeline = cors request_id faultwrap http_proxy_to_wsgi versionnegotiation noauth context apiv1app +pipeline = cors request_id faultwrap noauth context http_proxy_to_wsgi versionnegotiation apiv1app # heat-api-cfn pipeline [pipeline:heat-api-cfn] -pipeline = cors http_proxy_to_wsgi cfnversionnegotiation osprofiler ec2authtoken authtoken context apicfnv1app +pipeline = cors request_id ec2authtoken authtoken context http_proxy_to_wsgi cfnversionnegotiation osprofiler apicfnv1app # heat-api-cfn pipeline for standalone heat # relies exclusively on authenticating with ec2 signed requests [pipeline:heat-api-cfn-standalone] -pipeline = cors http_proxy_to_wsgi cfnversionnegotiation ec2authtoken context apicfnv1app +pipeline = cors request_id ec2authtoken context http_proxy_to_wsgi cfnversionnegotiation apicfnv1app [app:apiv1app] paste.app_factory = heat.common.wsgi:app_factory diff -Nru heat-11.0.0~b1/HACKING.rst heat-11.0.0~b2/HACKING.rst --- heat-11.0.0~b1/HACKING.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/HACKING.rst 2018-06-07 22:12:28.000000000 +0000 @@ -2,7 +2,7 @@ ======================= - Step 1: Read the OpenStack style commandments - http://docs.openstack.org/developer/hacking/ + https://docs.openstack.org/hacking/ - Step 2: Read on Heat specific commandments diff -Nru heat-11.0.0~b1/heat/cmd/manage.py heat-11.0.0~b2/heat/cmd/manage.py --- heat-11.0.0~b1/heat/cmd/manage.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/cmd/manage.py 2018-06-07 22:12:28.000000000 +0000 @@ -122,10 +122,8 @@ except exception.NotFound: raise Exception(_("Stack with id %s can not be found.") 
% CONF.command.stack_id)
-    except exception.ActionInProgress:
-        raise Exception(_("The stack or some of its nested stacks are "
-                          "in progress. Note, that all the stacks should be "
-                          "in COMPLETE state in order to be migrated."))
+    except (exception.NotSupported, exception.ActionNotComplete) as ex:
+        raise Exception(ex.message)
 
 
 def purge_deleted():
diff -Nru heat-11.0.0~b1/heat/common/exception.py heat-11.0.0~b2/heat/common/exception.py
--- heat-11.0.0~b1/heat/common/exception.py	2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/common/exception.py	2018-06-07 22:12:28.000000000 +0000
@@ -487,6 +487,11 @@
                   "in progress.")
 
 
+class ActionNotComplete(HeatException):
+    msg_fmt = _("Stack %(stack_name)s has an action (%(action)s) "
+                "in progress or failed state.")
+
+
 class StopActionFailed(HeatException):
     msg_fmt = _("Failed to stop stack (%(stack_name)s) on other engine "
                 "(%(engine_id)s)")
diff -Nru heat-11.0.0~b1/heat/common/grouputils.py heat-11.0.0~b2/heat/common/grouputils.py
--- heat-11.0.0~b1/heat/common/grouputils.py	2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/common/grouputils.py	2018-06-07 22:12:28.000000000 +0000
@@ -197,3 +197,21 @@
     return [(name, definitions[name])
             for name in inspector.member_names(include_failed=include_failed)
             if name in definitions]
+
+
+def get_child_template_files(context, stack,
+                             is_rolling_update,
+                             old_template_id):
+    """Return a merged map of old and new template files.
+
+    For a rolling update, files for both the old and new definitions are
+    required, as the nested stack is updated in batches of scaled units.
+ """ + if not stack.convergence: + old_template_id = stack.t.id + + if is_rolling_update and old_template_id: + prev_files = template.Template.load(context, old_template_id).files + prev_files.update(dict(stack.t.files)) + return prev_files + return stack.t.files diff -Nru heat-11.0.0~b1/heat/common/i18n.py heat-11.0.0~b2/heat/common/i18n.py --- heat-11.0.0~b1/heat/common/i18n.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/common/i18n.py 2018-06-07 22:12:28.000000000 +0000 @@ -14,7 +14,8 @@ # limitations under the License. # It's based on oslo.i18n usage in OpenStack Keystone project and -# recommendations from http://docs.openstack.org/developer/oslo.i18n/usage.html +# recommendations from +# https://docs.openstack.org/oslo.i18n/latest/user/usage.html import six diff -Nru heat-11.0.0~b1/heat/common/wsgi.py heat-11.0.0~b2/heat/common/wsgi.py --- heat-11.0.0~b1/heat/common/wsgi.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/common/wsgi.py 2018-06-07 22:12:28.000000000 +0000 @@ -38,9 +38,8 @@ from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import importutils -from paste import deploy -import routes -import routes.middleware +from paste.deploy import loadwsgi +from routes import middleware import six import webob.dec import webob.exc @@ -693,8 +692,7 @@ mapper.connect(None, "/v1.0/{path_info:.*}", controller=BlogApp()) """ self.map = mapper - self._router = routes.middleware.RoutesMiddleware(self._dispatch, - self.map) + self._router = middleware.RoutesMiddleware(self._dispatch, self.map) @webob.dec.wsgify def __call__(self, req): @@ -848,13 +846,17 @@ # ContentType=JSON results in a JSON serialized response... 
content_type = request.params.get("ContentType") + LOG.info("Processing request: %(method)s %(path)s", + {'method': request.method, 'path': request.path}) + try: deserialized_request = self.dispatch(self.deserializer, action, request) action_args.update(deserialized_request) - LOG.debug(('Calling %(controller)s : %(action)s'), - {'controller': self.controller, 'action': action}) + LOG.debug(('Calling %(controller)s.%(action)s'), + {'controller': type(self.controller).__name__, + 'action': action}) action_result = self.dispatch(self.controller, action, request, **action_args) @@ -1102,6 +1104,6 @@ """ setup_paste_factories(conf) try: - return deploy.loadapp("config:%s" % paste_config_file, name=app_name) + return loadwsgi.loadapp("config:%s" % paste_config_file, name=app_name) finally: teardown_paste_factories() diff -Nru heat-11.0.0~b1/heat/db/sqlalchemy/api.py heat-11.0.0~b2/heat/db/sqlalchemy/api.py --- heat-11.0.0~b1/heat/db/sqlalchemy/api.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/db/sqlalchemy/api.py 2018-06-07 22:12:28.000000000 +0000 @@ -234,6 +234,8 @@ return results +@oslo_db_api.wrap_db_retry(max_retries=3, retry_on_deadlock=True, + retry_interval=0.5, inc_retry_interval=True) def resource_purge_deleted(context, stack_id): filters = {'stack_id': stack_id, 'action': 'DELETE', 'status': 'COMPLETE'} query = context.session.query(models.Resource) @@ -605,7 +607,8 @@ def stack_get_all_by_owner_id(context, owner_id): results = soft_delete_aware_query( - context, models.Stack).filter_by(owner_id=owner_id).all() + context, models.Stack).filter_by(owner_id=owner_id, + backup=False).all() return results @@ -1318,6 +1321,8 @@ break +@oslo_db_api.wrap_db_retry(max_retries=3, retry_on_deadlock=True, + retry_interval=0.5, inc_retry_interval=True) def _purge_stacks(stack_infos, engine, meta): """Purge some stacks and their releated events, raw_templates, etc. 
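The ``wrap_db_retry`` decorators added to ``resource_purge_deleted`` and ``_purge_stacks`` above follow a common retry-with-backoff pattern. A minimal sketch of that pattern in plain Python (illustrative only, not the oslo.db implementation; ``DBDeadlock`` here is a hypothetical stand-in for oslo.db's deadlock exception):

```python
import functools
import time


class DBDeadlock(Exception):
    """Stand-in for the deadlock exception the decorator retries on."""


def wrap_db_retry(max_retries=3, retry_interval=0.5, inc_retry_interval=True):
    """Retry the wrapped call when a deadlock is detected.

    Sleeps retry_interval seconds between attempts and, when
    inc_retry_interval is set, doubles the delay after each failure,
    re-raising once max_retries retries have been exhausted.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            delay = retry_interval
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    # Out of retries: propagate the deadlock to the caller.
                    if attempt == max_retries:
                        raise
                    time.sleep(delay)
                    if inc_retry_interval:
                        delay *= 2
        return wrapper
    return decorator
```

With the parameters used above (``max_retries=3``, ``retry_interval=0.5``, ``inc_retry_interval=True``), a transiently deadlocked purge is attempted up to four times, waiting 0.5s, 1s, then 2s between attempts.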
diff -Nru heat-11.0.0~b1/heat/engine/check_resource.py heat-11.0.0~b2/heat/engine/check_resource.py --- heat-11.0.0~b1/heat/engine/check_resource.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/engine/check_resource.py 2018-06-07 22:12:28.000000000 +0000 @@ -56,16 +56,40 @@ self.msg_queue = msg_queue self.input_data = input_data - def _try_steal_engine_lock(self, cnxt, resource_id): + def _stale_resource_needs_retry(self, cnxt, rsrc, prev_template_id): + """Determine whether a resource needs retrying after failure to lock. + + Return True if we need to retry the check operation because of a + failure to acquire the lock. This can be either because the engine + holding the lock is no longer working, or because no other engine had + locked the resource and the data was just out of date. + + In the former case, the lock will be stolen and the resource status + changed to FAILED. + """ + fields = {'current_template_id', 'engine_id'} rs_obj = resource_objects.Resource.get_obj(cnxt, - resource_id, - fields=('engine_id', )) + rsrc.id, + refresh=True, + fields=fields) if rs_obj.engine_id not in (None, self.engine_id): if not listener_client.EngineListenerClient( rs_obj.engine_id).is_alive(cnxt): # steal the lock. 
rs_obj.update_and_save({'engine_id': None}) + + # set the resource state as failed + status_reason = ('Worker went down ' + 'during resource %s' % rsrc.action) + rsrc.state_set(rsrc.action, + rsrc.FAILED, + six.text_type(status_reason)) return True + elif (rs_obj.engine_id is None and + rs_obj.current_template_id == prev_template_id): + LOG.debug('Resource id=%d stale; retrying check', rsrc.id) + return True + LOG.debug('Resource id=%d modified by another traversal', rsrc.id) return False def _trigger_rollback(self, stack): @@ -110,11 +134,11 @@ self._handle_failure(cnxt, stack, failure_reason) def _handle_resource_replacement(self, cnxt, - current_traversal, new_tmpl_id, + current_traversal, new_tmpl_id, requires, rsrc, stack, adopt_stack_data): """Create a replacement resource and trigger a check on it.""" try: - new_res_id = rsrc.make_replacement(new_tmpl_id) + new_res_id = rsrc.make_replacement(new_tmpl_id, requires) except exception.UpdateInProgress: LOG.info("No replacement created - " "resource already locked by new traversal") @@ -135,34 +159,30 @@ def _do_check_resource(self, cnxt, current_traversal, tmpl, resource_data, is_update, rsrc, stack, adopt_stack_data): + prev_template_id = rsrc.current_template_id try: if is_update: + requires = set(d.primary_key for d in resource_data.values() + if d is not None) try: - check_resource_update(rsrc, tmpl.id, resource_data, + check_resource_update(rsrc, tmpl.id, requires, self.engine_id, stack, self.msg_queue) except resource.UpdateReplace: self._handle_resource_replacement(cnxt, current_traversal, - tmpl.id, + tmpl.id, requires, rsrc, stack, adopt_stack_data) return False else: - check_resource_cleanup(rsrc, tmpl.id, resource_data, - self.engine_id, + check_resource_cleanup(rsrc, tmpl.id, self.engine_id, stack.time_remaining(), self.msg_queue) return True except exception.UpdateInProgress: - if self._try_steal_engine_lock(cnxt, rsrc.id): + if self._stale_resource_needs_retry(cnxt, rsrc, prev_template_id): 
rpc_data = sync_point.serialize_input_data(self.input_data) - # set the resource state as failed - status_reason = ('Worker went down ' - 'during resource %s' % rsrc.action) - rsrc.state_set(rsrc.action, - rsrc.FAILED, - six.text_type(status_reason)) self._rpc_client.check_resource(cnxt, rsrc.id, current_traversal, @@ -388,22 +408,22 @@ LOG.error('Unknown message "%s" received', message) -def check_resource_update(rsrc, template_id, resource_data, engine_id, +def check_resource_update(rsrc, template_id, requires, engine_id, stack, msg_queue): """Create or update the Resource if appropriate.""" check_message = functools.partial(_check_for_message, msg_queue) if rsrc.action == resource.Resource.INIT: - rsrc.create_convergence(template_id, resource_data, engine_id, + rsrc.create_convergence(template_id, requires, engine_id, stack.time_remaining(), check_message) else: - rsrc.update_convergence(template_id, resource_data, engine_id, + rsrc.update_convergence(template_id, requires, engine_id, stack.time_remaining(), stack, check_message) -def check_resource_cleanup(rsrc, template_id, resource_data, engine_id, +def check_resource_cleanup(rsrc, template_id, engine_id, timeout, msg_queue): """Delete the Resource if appropriate.""" check_message = functools.partial(_check_for_message, msg_queue) - rsrc.delete_convergence(template_id, resource_data, engine_id, timeout, + rsrc.delete_convergence(template_id, engine_id, timeout, check_message) diff -Nru heat-11.0.0~b1/heat/engine/clients/os/designate.py heat-11.0.0~b2/heat/engine/clients/os/designate.py --- heat-11.0.0~b1/heat/engine/clients/os/designate.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/engine/clients/os/designate.py 2018-06-07 22:12:28.000000000 +0000 @@ -97,7 +97,10 @@ return self.client().records.update(record.domain_id, record) def record_delete(self, **kwargs): - domain_id = self.get_domain_id(kwargs.pop('domain')) + try: + domain_id = self.get_domain_id(kwargs.pop('domain')) + except 
heat_exception.EntityNotFound: + return return self.client().records.delete(domain_id, kwargs.pop('id')) diff -Nru heat-11.0.0~b1/heat/engine/hot/template.py heat-11.0.0~b2/heat/engine/hot/template.py --- heat-11.0.0~b1/heat/engine/hot/template.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/engine/hot/template.py 2018-06-07 22:12:28.000000000 +0000 @@ -697,3 +697,66 @@ } param_schema_class = parameters.HOTParamSchema20180302 + + +class HOTemplate20180831(HOTemplate20180302): + functions = { + 'get_attr': hot_funcs.GetAttAllAttributes, + 'get_file': hot_funcs.GetFile, + 'get_param': hot_funcs.GetParam, + 'get_resource': hot_funcs.GetResource, + 'list_join': hot_funcs.JoinMultiple, + 'repeat': hot_funcs.RepeatWithNestedLoop, + 'resource_facade': hot_funcs.ResourceFacade, + 'str_replace': hot_funcs.ReplaceJson, + + # functions added in 2015-04-30 + 'digest': hot_funcs.Digest, + + # functions added in 2015-10-15 + 'str_split': hot_funcs.StrSplit, + + # functions added in 2016-04-08 + 'map_merge': hot_funcs.MapMerge, + + # functions added in 2016-10-14 + 'yaql': hot_funcs.Yaql, + 'map_replace': hot_funcs.MapReplace, + 'if': hot_funcs.If, + + # functions added in 2017-02-24 + 'filter': hot_funcs.Filter, + 'str_replace_strict': hot_funcs.ReplaceJsonStrict, + + # functions added in 2017-09-01 + 'make_url': hot_funcs.MakeURL, + 'list_concat': hot_funcs.ListConcat, + 'str_replace_vstrict': hot_funcs.ReplaceJsonVeryStrict, + 'list_concat_unique': hot_funcs.ListConcatUnique, + 'contains': hot_funcs.Contains, + + # functions removed from 2015-10-15 + 'Fn::Select': hot_funcs.Removed, + + # functions removed from 2014-10-16 + 'Fn::GetAZs': hot_funcs.Removed, + 'Fn::Join': hot_funcs.Removed, + 'Fn::Split': hot_funcs.Removed, + 'Fn::Replace': hot_funcs.Removed, + 'Fn::Base64': hot_funcs.Removed, + 'Fn::MemberListToMap': hot_funcs.Removed, + 'Fn::ResourceFacade': hot_funcs.Removed, + 'Ref': hot_funcs.Removed, + } + + condition_functions = { + 'get_param': 
hot_funcs.GetParam, + 'equals': hot_funcs.Equals, + 'not': hot_funcs.Not, + 'and': hot_funcs.And, + 'or': hot_funcs.Or, + + # functions added in 2017-09-01 + 'yaql': hot_funcs.Yaql, + 'contains': hot_funcs.Contains + } diff -Nru heat-11.0.0~b1/heat/engine/resource.py heat-11.0.0~b2/heat/engine/resource.py --- heat-11.0.0~b1/heat/engine/resource.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/engine/resource.py 2018-06-07 22:12:28.000000000 +0000 @@ -243,10 +243,11 @@ self.updated_time = stack.updated_time self._rpc_client = None self.needed_by = [] - self.requires = [] + self.requires = set() self.replaces = None self.replaced_by = None self.current_template_id = None + self.old_template_id = None self.root_stack_id = None self._calling_engine_id = None self._atomic_key = None @@ -291,7 +292,7 @@ self.created_time = resource.created_at self.updated_time = resource.updated_at self.needed_by = resource.needed_by - self.requires = resource.requires + self.requires = set(resource.requires) self.replaces = resource.replaces self.replaced_by = resource.replaced_by self.current_template_id = resource.current_template_id @@ -378,7 +379,7 @@ curr_stack.defn = latest_stk_defn return resource, initial_stk_defn, curr_stack - def make_replacement(self, new_tmpl_id): + def make_replacement(self, new_tmpl_id, requires): """Create a replacement resource in the database. 
        Returns the DB ID of the new resource, or None if the new resource
@@ -392,7 +393,7 @@
            'name': self.name,
            'rsrc_prop_data_id': None,
            'needed_by': self.needed_by,
-           'requires': self.requires,
+           'requires': sorted(requires, reverse=True),
            'replaces': self.id,
            'action': self.INIT,
            'status': self.COMPLETE,
@@ -1154,14 +1155,11 @@
        """
        return self

-    def create_convergence(self, template_id, resource_data, engine_id,
+    def create_convergence(self, template_id, requires, engine_id,
                            timeout, progress_callback=None):
        """Creates the resource by invoking the scheduler TaskRunner."""
        self._calling_engine_id = engine_id
-        self.requires = list(
-            set(data.primary_key for data in resource_data.values()
-                if data is not None)
-        )
+        self.requires = requires
        self.current_template_id = template_id
        if self.stack.adopt_stack_data is None:
            runner = scheduler.TaskRunner(self.create)
@@ -1412,7 +1410,7 @@
        else:
            raise UpdateReplace(self.name)

-    def update_convergence(self, template_id, resource_data, engine_id,
+    def update_convergence(self, template_id, new_requires, engine_id,
                            timeout, new_stack, progress_callback=None):
        """Update the resource synchronously.

@@ -1421,17 +1419,6 @@
        resource_data and existing resource's requires, then updates the
        resource by invoking the scheduler TaskRunner.
        """
-        def update_templ_id_and_requires(persist=True):
-            self.current_template_id = template_id
-            self.requires = list(
-                set(data.primary_key for data in resource_data.values()
-                    if data is not None)
-            )
-            if not persist:
-                return
-
-            self.store(lock=self.LOCK_RESPECT)
-
        self._calling_engine_id = engine_id

        # Check that the resource type matches. If the type has changed by a
@@ -1455,19 +1442,12 @@
            self.state_set(self.UPDATE, self.FAILED,
                           six.text_type(failure))
            raise failure

+        self.replaced_by = None
-        runner = scheduler.TaskRunner(
-            self.update, new_res_def,
-            update_templ_func=update_templ_id_and_requires)
-        try:
-            runner(timeout=timeout, progress_callback=progress_callback)
-        except UpdateReplace:
-            raise
-        except exception.UpdateInProgress:
-            raise
-        except BaseException:
-            with excutils.save_and_reraise_exception():
-                update_templ_id_and_requires(persist=True)
+        runner = scheduler.TaskRunner(self.update, new_res_def,
+                                      new_template_id=template_id,
+                                      new_requires=new_requires)
+        runner(timeout=timeout, progress_callback=progress_callback)

    def preview_update(self, after, before, after_props, before_props,
                       prev_resource, check_init_complete=False):
@@ -1584,9 +1564,24 @@
                return is_substituted
        return False

+    def _persist_update_no_change(self, new_template_id):
+        """Persist an update where the resource is unchanged."""
+        if new_template_id is not None:
+            self.current_template_id = new_template_id
+        lock = (self.LOCK_RESPECT if self.stack.convergence
+                else self.LOCK_NONE)
+        if self.status == self.FAILED:
+            status_reason = _('Update status to COMPLETE for '
+                              'FAILED resource neither update '
+                              'nor replace.')
+            self.state_set(self.action, self.COMPLETE,
+                           status_reason, lock=lock)
+        elif new_template_id is not None:
+            self.store(lock=lock)
+
    @scheduler.wrappertask
    def update(self, after, before=None, prev_resource=None,
-               update_templ_func=None):
+               new_template_id=None, new_requires=None):
        """Return a task to update the resource.

        Subclasses should provide a handle_update() method to customise update,
@@ -1607,8 +1602,7 @@
                raise exception.ResourceFailure(exc, self, action)
        elif after_external_id is not None:
            LOG.debug("Skip update on external resource.")
-            if update_templ_func is not None:
-                update_templ_func(persist=True)
+            self._persist_update_no_change(new_template_id)
            return

        after_props, before_props = self._prepare_update_props(after, before)
@@ -1640,16 +1634,7 @@
            raise failure

        if not needs_update:
-            if update_templ_func is not None:
-                update_templ_func(persist=True)
-            if self.status == self.FAILED:
-                status_reason = _('Update status to COMPLETE for '
-                                  'FAILED resource neither update '
-                                  'nor replace.')
-                lock = (self.LOCK_RESPECT if self.stack.convergence
-                        else self.LOCK_NONE)
-                self.state_set(self.action, self.COMPLETE,
-                               status_reason, lock=lock)
+            self._persist_update_no_change(new_template_id)
            return

        if not self.stack.convergence:
@@ -1664,10 +1649,14 @@

        self.updated_time = datetime.utcnow()

+        if new_requires is not None:
+            self.requires = self.requires | new_requires
+
        with self._action_recorder(action, UpdateReplace):
            after_props.validate()
            self.properties = before_props
            tmpl_diff = self.update_template_diff(after.freeze(), before)
+            self.old_template_id = self.current_template_id

            try:
                if tmpl_diff and self.needs_replace_with_tmpl_diff(tmpl_diff):
@@ -1675,19 +1664,24 @@
                prop_diff = self.update_template_diff_properties(after_props,
                                                                 before_props)
+
+                if new_template_id is not None:
+                    self.current_template_id = new_template_id
+
                yield self.action_handler_task(action,
                                               args=[after, tmpl_diff,
                                                     prop_diff])
            except UpdateReplace:
                with excutils.save_and_reraise_exception():
+                    self.current_template_id = self.old_template_id
+                    self.old_template_id = None
                    self._prepare_update_replace(action)

            self.t = after
            self.reparse()
            self._update_stored_properties()
-            if update_templ_func is not None:
-                # template/requires will be persisted by _action_recorder()
-                update_templ_func(persist=False)
+            if new_requires is not None:
+                self.requires = new_requires

        yield self._break_if_required(
            self.UPDATE, environment.HOOK_POST_UPDATE)
@@ -1919,7 +1913,7 @@
                         expected_engine_id=None):
        self._incr_atomic_key(self._atomic_key)

-    def delete_convergence(self, template_id, input_data, engine_id, timeout,
+    def delete_convergence(self, template_id, engine_id, timeout,
                           progress_callback=None):
        """Destroys the resource if it doesn't belong to given template.

@@ -1932,8 +1926,7 @@
        the replacement resource's needed_by and replaces fields.
        """
        self._calling_engine_id = engine_id
-        self.needed_by = list(set(v for v in input_data.values()
-                                  if v is not None))
+        self.needed_by = []

        if self.current_template_id != template_id:
            # just delete the resources in INIT state
@@ -2081,7 +2074,7 @@
            'rsrc_prop_data_id': self._create_or_replace_rsrc_prop_data(),
            'needed_by': self.needed_by,
-            'requires': self.requires,
+            'requires': sorted(self.requires, reverse=True),
            'replaces': self.replaces,
            'replaced_by': self.replaced_by,
            'current_template_id': self.current_template_id,
@@ -2345,6 +2338,7 @@
                    self.context, self.id, self._atomic_key,
                    self.attributes.cached_attrs, self._attr_data_id)
                if attr_data_id is not None:
+                    self._incr_atomic_key(self._atomic_key)
                    self._attr_data_id = attr_data_id
        except Exception as ex:
            LOG.error('store_attributes rsrc %(name)s %(id)s DB error %(ex)s',
diff -Nru heat-11.0.0~b1/heat/engine/resources/aws/ec2/internet_gateway.py heat-11.0.0~b2/heat/engine/resources/aws/ec2/internet_gateway.py
--- heat-11.0.0~b1/heat/engine/resources/aws/ec2/internet_gateway.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/aws/ec2/internet_gateway.py 2018-06-07 22:12:28.000000000 +0000
@@ -101,12 +101,20 @@

    default_client_name = 'neutron'

-    def _vpc_route_tables(self):
+    def _vpc_route_tables(self, ignore_errors=False):
        for res in six.itervalues(self.stack):
-            if (res.has_interface('AWS::EC2::RouteTable') and
-                    res.properties.get(route_table.RouteTable.VPC_ID) ==
-                    self.properties.get(self.VPC_ID)):
-                yield res
+            if res.has_interface('AWS::EC2::RouteTable'):
+                try:
+                    vpc_id = self.properties[self.VPC_ID]
+                    rt_vpc_id = res.properties.get(
+                        route_table.RouteTable.VPC_ID)
+                except (ValueError, TypeError):
+                    if ignore_errors:
+                        continue
+                    else:
+                        raise
+                if rt_vpc_id == vpc_id:
+                    yield res

    def add_dependencies(self, deps):
        super(VPCGatewayAttachment, self).add_dependencies(deps)
@@ -114,7 +122,9 @@
        # VpcId as this VpcId.
        # All route tables must exist before gateway attachment
        # as attachment happens to routers (not VPCs)
-        for route_tbl in self._vpc_route_tables():
+        # Properties errors will be caught later in validation,
+        # where we can report them in their proper context.
+        for route_tbl in self._vpc_route_tables(ignore_errors=True):
            deps += (self, route_tbl)

    def handle_create(self):
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/heat/autoscaling_group.py heat-11.0.0~b2/heat/engine/resources/openstack/heat/autoscaling_group.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/heat/autoscaling_group.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/heat/autoscaling_group.py 2018-06-07 22:12:28.000000000 +0000
@@ -186,6 +186,12 @@
                resource_def))
        return rsrc_defn.ResourceDefinition(None, **defn_data)

+    def child_template_files(self, child_env):
+        is_update = self.action == self.UPDATE
+        return grouputils.get_child_template_files(self.context, self.stack,
+                                                   is_update,
+                                                   self.old_template_id)
+
    def _try_rolling_update(self, prop_diff):
        if self.RESOURCE in prop_diff:
            policy = self.properties[self.ROLLING_UPDATES]
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/heat/instance_group.py heat-11.0.0~b2/heat/engine/resources/openstack/heat/instance_group.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/heat/instance_group.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/heat/instance_group.py 2018-06-07 22:12:28.000000000 +0000
@@ -478,6 +478,13 @@
        num_instances = int(self.properties[self.SIZE])
        return self._create_template(num_instances)

+    def child_template_files(self, child_env):
+        is_rolling_update = (self.action == self.UPDATE and
+                             self.update_policy[self.ROLLING_UPDATE])
+        return grouputils.get_child_template_files(self.context, self.stack,
+                                                   is_rolling_update,
+                                                   self.old_template_id)
+
    def child_params(self):
        """Return the environment for the nested stack."""
        return {
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/heat/resource_group.py heat-11.0.0~b2/heat/engine/resources/openstack/heat/resource_group.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/heat/resource_group.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/heat/resource_group.py 2018-06-07 22:12:28.000000000 +0000
@@ -668,6 +668,14 @@
        self._add_output_defns_to_template(tmpl, [k for k, d in definitions])
        return tmpl

+    def child_template_files(self, child_env):
+        is_rolling_update = (self.action == self.UPDATE
+                             and self.update_policy[self.ROLLING_UPDATE])
+        return grouputils.get_child_template_files(self.context,
+                                                   self.stack,
+                                                   is_rolling_update,
+                                                   self.old_template_id)
+
    def _assemble_for_rolling_update(self, total_capacity, max_updates,
                                     include_all=False,
                                     template_version=('heat_template_version',
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/heat/test_resource.py heat-11.0.0~b2/heat/engine/resources/openstack/heat/test_resource.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/heat/test_resource.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/heat/test_resource.py 2018-06-07 22:12:28.000000000 +0000
@@ -178,8 +178,8 @@

    def needs_replace_with_prop_diff(self, changed_properties_set,
                                     after_props, before_props):
-        if self.UPDATE_REPLACE in changed_properties_set:
-            return bool(after_props.get(self.UPDATE_REPLACE))
+        if self.VALUE in changed_properties_set:
+            return after_props[self.UPDATE_REPLACE]

    def handle_update(self, json_snippet, tmpl_diff, prop_diff):
        self.properties = json_snippet.properties(self.properties_schema,
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/neutron/extraroute.py heat-11.0.0~b2/heat/engine/resources/openstack/neutron/extraroute.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/neutron/extraroute.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/neutron/extraroute.py 2018-06-07 22:12:28.000000000 +0000
@@ -19,6 +19,7 @@
 from heat.engine import constraints
 from heat.engine import properties
 from heat.engine.resources.openstack.neutron import neutron
+from heat.engine.resources.openstack.neutron import router
 from heat.engine import support

@@ -65,16 +66,29 @@
            # depend on any RouterInterface in this template with the same
            # router_id as this router_id
            if resource.has_interface('OS::Neutron::RouterInterface'):
-                router_id = self.properties[self.ROUTER_ID]
-                dep_router_id = resource.properties['router']
+                try:
+                    router_id = self.properties[self.ROUTER_ID]
+                    dep_router_id = resource.properties.get(
+                        router.RouterInterface.ROUTER)
+                except (ValueError, TypeError):
+                    # Properties errors will be caught later in validation,
+                    # where we can report them in their proper context.
+                    continue
                if dep_router_id == router_id:
                    deps += (self, resource)
            # depend on any RouterGateway in this template with the same
            # router_id as this router_id
-            elif (resource.has_interface('OS::Neutron::RouterGateway') and
-                  resource.properties['router_id'] ==
-                    self.properties['router_id']):
-                deps += (self, resource)
+            elif resource.has_interface('OS::Neutron::RouterGateway'):
+                try:
+                    router_id = self.properties[self.ROUTER_ID]
+                    dep_router_id = resource.properties.get(
+                        router.RouterGateway.ROUTER_ID)
+                except (ValueError, TypeError):
+                    # Properties errors will be caught later in validation,
+                    # where we can report them in their proper context.
+                    continue
+                if dep_router_id == router_id:
+                    deps += (self, resource)

    def handle_create(self):
        router_id = self.properties.get(self.ROUTER_ID)
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/neutron/floatingip.py heat-11.0.0~b2/heat/engine/resources/openstack/neutron/floatingip.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/neutron/floatingip.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/neutron/floatingip.py 2018-06-07 22:12:28.000000000 +0000
@@ -203,7 +203,12 @@
        if not resource.has_interface('OS::Neutron::Port'):
            return False

-        fixed_ips = resource.properties.get(port.Port.FIXED_IPS)
+        try:
+            fixed_ips = resource.properties.get(port.Port.FIXED_IPS)
+        except (ValueError, TypeError):
+            # Properties errors will be caught later in validation, where
+            # we can report them in their proper context.
+            return False
        if not fixed_ips:
            # During create we have only unresolved value for
            # functions, so can not use None value for building
@@ -214,15 +219,24 @@
        if subnet is None:
            return True

-        p_net = (resource.properties.get(port.Port.NETWORK) or
-                 resource.properties.get(port.Port.NETWORK_ID))
+        try:
+            p_net = (resource.properties.get(port.Port.NETWORK) or
+                     resource.properties.get(port.Port.NETWORK_ID))
+        except (ValueError, TypeError):
+            # Properties errors will be caught later in validation,
+            # where we can report them in their proper context.
+            return False
        if p_net:
            network = self.client().show_network(p_net)['network']
            return subnet in network['subnets']
        else:
-            for fixed_ip in resource.properties.get(
-                    port.Port.FIXED_IPS):
-
+            try:
+                fixed_ips = resource.properties.get(port.Port.FIXED_IPS)
+            except (ValueError, TypeError):
+                # Properties errors will be caught later in validation,
+                # where we can report them in their proper context.
+                return False
+            for fixed_ip in fixed_ips:
                port_subnet = (fixed_ip.get(port.Port.FIXED_IP_SUBNET)
                               or fixed_ip.get(port.Port.FIXED_IP_SUBNET_ID))
                if subnet == port_subnet:
@@ -244,10 +258,16 @@
            # depend on any RouterGateway in this template with the same
            # network_id as this floating_network_id
            if resource.has_interface('OS::Neutron::RouterGateway'):
-                gateway_network = resource.properties.get(
-                    router.RouterGateway.NETWORK) or resource.properties.get(
-                    router.RouterGateway.NETWORK_ID)
-                floating_network = self.properties[self.FLOATING_NETWORK]
+                try:
+                    gateway_network = (
+                        resource.properties.get(router.RouterGateway.NETWORK)
+                        or resource.properties.get(
+                            router.RouterGateway.NETWORK_ID))
+                    floating_network = self.properties[self.FLOATING_NETWORK]
+                except (ValueError, TypeError):
+                    # Properties errors will be caught later in validation,
+                    # where we can report them in their proper context.
+                    continue
                if gateway_network == floating_network:
                    deps += (self, resource)

@@ -260,12 +280,17 @@
            # this template with the same network_id as this
            # floating_network_id
            elif resource.has_interface('OS::Neutron::Router'):
-                gateway = resource.properties.get(
-                    router.Router.EXTERNAL_GATEWAY)
+                try:
+                    gateway = resource.properties.get(
+                        router.Router.EXTERNAL_GATEWAY)
+                    floating_network = self.properties[self.FLOATING_NETWORK]
+                except (ValueError, TypeError):
+                    # Properties errors will be caught later in validation,
+                    # where we can report them in their proper context.
+                    continue
                if gateway:
                    gateway_network = gateway.get(
                        router.Router.EXTERNAL_GATEWAY_NETWORK)
-                    floating_network = self.properties[self.FLOATING_NETWORK]
                    if gateway_network == floating_network:
                        deps += (self, resource)
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/neutron/port.py heat-11.0.0~b2/heat/engine/resources/openstack/neutron/port.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/neutron/port.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/neutron/port.py 2018-06-07 22:12:28.000000000 +0000
@@ -422,8 +422,13 @@
        # the ports in that network.
        for res in six.itervalues(self.stack):
            if res.has_interface('OS::Neutron::Subnet'):
-                dep_network = res.properties.get(subnet.Subnet.NETWORK)
-                network = self.properties[self.NETWORK]
+                try:
+                    dep_network = res.properties.get(subnet.Subnet.NETWORK)
+                    network = self.properties[self.NETWORK]
+                except (ValueError, TypeError):
+                    # Properties errors will be caught later in validation,
+                    # where we can report them in their proper context.
+                    continue
                if dep_network == network:
                    deps += (self, res)
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/neutron/provider_net.py heat-11.0.0~b2/heat/engine/resources/openstack/neutron/provider_net.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/neutron/provider_net.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/neutron/provider_net.py 2018-06-07 22:12:28.000000000 +0000
@@ -51,6 +51,12 @@
        'status', 'subnets',
    )

+    NETWORK_TYPES = (
+        LOCAL, VLAN, VXLAN, GRE, GENEVE, FLAT
+    ) = (
+        'local', 'vlan', 'vxlan', 'gre', 'geneve', 'flat'
+    )
+
    properties_schema = {
        NAME: net.Net.properties_schema[NAME],
        PROVIDER_NETWORK_TYPE: properties.Schema(
@@ -60,7 +66,7 @@
            update_allowed=True,
            required=True,
            constraints=[
-                constraints.AllowedValues(['vlan', 'flat']),
+                constraints.AllowedValues(NETWORK_TYPES),
            ]
        ),
        PROVIDER_PHYSICAL_NETWORK: properties.Schema(
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/neutron/router.py heat-11.0.0~b2/heat/engine/resources/openstack/neutron/router.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/neutron/router.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/neutron/router.py 2018-06-07 22:12:28.000000000 +0000
@@ -265,7 +265,12 @@
        external_gw_net = external_gw.get(self.EXTERNAL_GATEWAY_NETWORK)
        for res in six.itervalues(self.stack):
            if res.has_interface('OS::Neutron::Subnet'):
-                subnet_net = res.properties.get(subnet.Subnet.NETWORK)
+                try:
+                    subnet_net = res.properties.get(subnet.Subnet.NETWORK)
+                except (ValueError, TypeError):
+                    # Properties errors will be caught later in validation,
+                    # where we can report them in their proper context.
+                    continue
                if subnet_net == external_gw_net:
                    deps += (self, res)

@@ -633,16 +638,26 @@
            # depend on any RouterInterface in this template with the same
            # router_id as this router_id
            if resource.has_interface('OS::Neutron::RouterInterface'):
-                dep_router_id = resource.properties[RouterInterface.ROUTER]
-                router_id = self.properties[self.ROUTER_ID]
+                try:
+                    dep_router_id = resource.properties[RouterInterface.ROUTER]
+                    router_id = self.properties[self.ROUTER_ID]
+                except (ValueError, TypeError):
+                    # Properties errors will be caught later in validation,
+                    # where we can report them in their proper context.
+                    continue
                if dep_router_id == router_id:
                    deps += (self, resource)
            # depend on any subnet in this template with the same network_id
            # as this network_id, as the gateway implicitly creates a port
            # on that subnet
            if resource.has_interface('OS::Neutron::Subnet'):
-                dep_network = resource.properties[subnet.Subnet.NETWORK]
-                network = self.properties[self.NETWORK]
+                try:
+                    dep_network = resource.properties[subnet.Subnet.NETWORK]
+                    network = self.properties[self.NETWORK]
+                except (ValueError, TypeError):
+                    # Properties errors will be caught later in validation,
+                    # where we can report them in their proper context.
+                    continue
                if dep_network == network:
                    deps += (self, resource)
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/nova/server_network_mixin.py heat-11.0.0~b2/heat/engine/resources/openstack/nova/server_network_mixin.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/nova/server_network_mixin.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/nova/server_network_mixin.py 2018-06-07 22:12:28.000000000 +0000
@@ -579,7 +579,23 @@
                      port=port['id'], server=prev_server_id)

    def prepare_ports_for_replace(self):
-        self.detach_ports(self)
+        # Check that the interface can be detached
+        server = None
+        # TODO(TheJulia): Once Story #2002001 is underway,
+        # we should be able to replace the query to nova and
+        # the check for the failed status with just a check
+        # to see if the resource has failed.
+        with self.client_plugin().ignore_not_found:
+            server = self.client().servers.get(self.resource_id)
+        if server and server.status != 'ERROR':
+            self.detach_ports(self)
+        else:
+            # If we are replacing an ERROR'ed node, we need to delete
+            # internal ports that we have created, otherwise we can
+            # encounter deployment issues with duplicate internal
+            # port data attempting to be created in instances being
+            # deployed.
+            self._delete_internal_ports()

    def restore_ports_after_rollback(self, convergence):
        # In case of convergence, during rollback, the previous rsrc is
diff -Nru heat-11.0.0~b1/heat/engine/resources/openstack/nova/server.py heat-11.0.0~b2/heat/engine/resources/openstack/nova/server.py
--- heat-11.0.0~b1/heat/engine/resources/openstack/nova/server.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/openstack/nova/server.py 2018-06-07 22:12:28.000000000 +0000
@@ -1160,12 +1160,22 @@
        # It is not known which subnet a server might be assigned
        # to so all subnets in a network should be created before
        # the servers in that network.
-        nets = self.properties[self.NETWORKS]
+        try:
+            nets = self.properties[self.NETWORKS]
+        except (ValueError, TypeError):
+            # Properties errors will be caught later in validation,
+            # where we can report them in their proper context.
+            return
        if not nets:
            return
        for res in six.itervalues(self.stack):
            if res.has_interface('OS::Neutron::Subnet'):
-                subnet_net = res.properties.get(subnet.Subnet.NETWORK)
+                try:
+                    subnet_net = res.properties.get(subnet.Subnet.NETWORK)
+                except (ValueError, TypeError):
+                    # Properties errors will be caught later in validation,
+                    # where we can report them in their proper context.
+                    continue
                # Be wary of the case where we do not know a subnet's
                # network. If that's the case, be safe and add it as a
                # dependency.
diff -Nru heat-11.0.0~b1/heat/engine/resources/stack_resource.py heat-11.0.0~b2/heat/engine/resources/stack_resource.py
--- heat-11.0.0~b1/heat/engine/resources/stack_resource.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/resources/stack_resource.py 2018-06-07 22:12:28.000000000 +0000
@@ -222,7 +222,12 @@
        if isinstance(parsed_child_template, template.Template):
            parsed_child_template = parsed_child_template.t
        return template.Template(parsed_child_template,
-                                 files=self.stack.t.files, env=child_env)
+                                 files=self.child_template_files(child_env),
+                                 env=child_env)
+
+    def child_template_files(self, child_env):
+        """Default implementation to get the files map for child template."""
+        return self.stack.t.files

    def _parse_nested_stack(self, stack_name, child_template,
                            child_params, timeout_mins=None,
diff -Nru heat-11.0.0~b1/heat/engine/service.py heat-11.0.0~b2/heat/engine/service.py
--- heat-11.0.0~b1/heat/engine/service.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/service.py 2018-06-07 22:12:28.000000000 +0000
@@ -88,8 +88,8 @@
        self.msg_queues = collections.defaultdict(list)

        # Create dummy service task, because when there is nothing queued
-        # on self.tg the process exits
-        self.add_timer(cfg.CONF.periodic_interval, self._service_task)
+        # on any of the service's ThreadGroups, the process exits.
+        self.add_timer(None, self._service_task)

    def _service_task(self):
        """Dummy task which gets queued on the service.Service threadgroup.
@@ -2207,14 +2207,30 @@
        parent_stack = parser.Stack.load(ctxt, stack_id=stack_id,
                                         show_deleted=False)
+
+        if parent_stack.owner_id is not None:
+            msg = _("Migration of nested stack %s") % stack_id
+            raise exception.NotSupported(feature=msg)
+
+        if parent_stack.status != parent_stack.COMPLETE:
+            raise exception.ActionNotComplete(stack_name=parent_stack.name,
+                                              action=parent_stack.action)
+
        if parent_stack.convergence:
            LOG.info("Convergence was already enabled for stack %s",
                     stack_id)
            return
+
        db_stacks = stack_object.Stack.get_all_by_root_owner_id(
            ctxt, parent_stack.id)
        stacks = [parser.Stack.load(ctxt, stack_id=st.id, stack=st)
                  for st in db_stacks]
+
+        # check if any of the nested stacks is in IN_PROGRESS/FAILED state
+        for stack in stacks:
+            if stack.status != stack.COMPLETE:
+                raise exception.ActionNotComplete(stack_name=stack.name,
+                                                  action=stack.action)
        stacks.append(parent_stack)
        locks = []
        try:
diff -Nru heat-11.0.0~b1/heat/engine/stack.py heat-11.0.0~b2/heat/engine/stack.py
--- heat-11.0.0~b1/heat/engine/stack.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/stack.py 2018-06-07 22:12:28.000000000 +0000
@@ -514,7 +514,11 @@
            try:
                res.add_dependencies(deps)
            except Exception as exc:
-                if not ignore_errors:
+                # Always ignore ValueError/TypeError, as they're likely to
+                # have come from trying to read invalid property values that
+                # haven't been validated yet.
+                if not (ignore_errors or
+                        isinstance(exc, (ValueError, TypeError))):
                    raise
                else:
                    LOG.warning('Ignoring error adding implicit '
@@ -1364,8 +1368,8 @@
                         'action': self.action})
            return

-        LOG.info('convergence_dependencies: %s',
-                 self.convergence_dependencies)
+        LOG.debug('Starting traversal %s with dependencies: %s',
+                  self.current_traversal, self.convergence_dependencies)

        # create sync_points for resources in DB
        for rsrc_id, is_update in self.convergence_dependencies:
diff -Nru heat-11.0.0~b1/heat/engine/template.py heat-11.0.0~b2/heat/engine/template.py
--- heat-11.0.0~b1/heat/engine/template.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/engine/template.py 2018-06-07 22:12:28.000000000 +0000
@@ -301,7 +301,7 @@
        # TODO(kanagaraj-manickam) currently t_digest is stored in self. which
        # is used to check whether already template is validated or not.
        # But it needs to be loaded from dogpile cache backend once its
-        # available in heat (http://specs.openstack.org/openstack/heat-specs/
+        # available in heat (https://specs.openstack.org/openstack/heat-specs/
        # specs/liberty/constraint-validation-cache.html). This is required
        # as multiple heat-engines may process the same template at least
        # in case of instance_group. And it fixes partially bug 1444316
diff -Nru heat-11.0.0~b1/heat/tests/api/openstack_v1/test_stacks.py heat-11.0.0~b2/heat/tests/api/openstack_v1/test_stacks.py
--- heat-11.0.0~b1/heat/tests/api/openstack_v1/test_stacks.py 2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/tests/api/openstack_v1/test_stacks.py 2018-06-07 22:12:28.000000000 +0000
@@ -113,12 +113,11 @@
        body = {'template_url': url}
        data = stacks.InstantiationData(body)

-        self.m.StubOutWithMock(urlfetch, 'get')
-        urlfetch.get(url).AndReturn(json.dumps(template))
-        self.m.ReplayAll()
+        mock_get = self.patchobject(urlfetch, 'get',
+                                    return_value=json.dumps(template))

        self.assertEqual(template, data.template())
-        self.m.VerifyAll()
+        mock_get.assert_called_once_with(url)

    def test_template_priority(self):
        template = {'foo': 'bar', 'blarg': 'wibble'}
@@ -126,11 +125,10 @@
        body = {'template': template, 'template_url': url}
        data = stacks.InstantiationData(body)

-        self.m.StubOutWithMock(urlfetch, 'get')
-        self.m.ReplayAll()
+        mock_get = self.patchobject(urlfetch, 'get')

        self.assertEqual(template, data.template())
-        self.m.VerifyAll()
+        mock_get.assert_not_called()

    def test_template_missing(self):
        template = {'foo': 'bar', 'blarg': 'wibble'}
@@ -734,8 +732,19 @@

        req = self._post('/stacks', json.dumps(body))

-        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
-        rpc_client.EngineClient.call(
+        mock_call = self.patchobject(rpc_client.EngineClient, 'call',
+                                     return_value=dict(identity))
+
+        response = self.controller.create(req,
+                                          tenant_id=identity.tenant,
+                                          body=body)
+
+        expected = {'stack':
+                    {'id': '1',
+                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
+        self.assertEqual(expected, response)
+
+        mock_call.assert_called_once_with(
            req.context,
            ('create_stack',
             {'stack_name': identity.stack_name,
@@ -755,19 +764,7 @@
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
-        ).AndReturn(dict(identity))
-        self.m.ReplayAll()
-
-        response = self.controller.create(req,
-                                          tenant_id=identity.tenant,
-                                          body=body)
-
-        expected = {'stack':
-                    {'id': '1',
-                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
-        self.assertEqual(expected, response)
-
-        self.m.VerifyAll()
+        )

    def test_create_with_tags(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
@@ -782,8 +779,19 @@

        req = self._post('/stacks', json.dumps(body))

-        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
-        rpc_client.EngineClient.call(
+        mock_call = self.patchobject(rpc_client.EngineClient, 'call',
+                                     return_value=dict(identity))
+
+        response = self.controller.create(req,
+                                          tenant_id=identity.tenant,
+                                          body=body)
+
+        expected = {'stack':
+                    {'id': '1',
+                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
+        self.assertEqual(expected, response)
+
+        mock_call.assert_called_once_with(
            req.context,
            ('create_stack',
             {'stack_name': identity.stack_name,
@@ -803,18 +811,7 @@
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
-        ).AndReturn(dict(identity))
-        self.m.ReplayAll()
-
-        response = self.controller.create(req,
-                                          tenant_id=identity.tenant,
-                                          body=body)
-
-        expected = {'stack':
-                    {'id': '1',
-                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
-        self.assertEqual(expected, response)
-        self.m.VerifyAll()
+        )

    def test_adopt(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
@@ -847,8 +844,19 @@

        req = self._post('/stacks', json.dumps(body))

-        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
-        rpc_client.EngineClient.call(
+        mock_call = self.patchobject(rpc_client.EngineClient, 'call',
+                                     return_value=dict(identity))
+
+        response = self.controller.create(req,
+                                          tenant_id=identity.tenant,
+                                          body=body)
+
+        expected = {'stack':
+                    {'id': '1',
+                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
+        self.assertEqual(expected, response)
+
+        mock_call.assert_called_once_with(
            req.context,
            ('create_stack',
             {'stack_name': identity.stack_name,
@@ -869,18 +877,7 @@
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
-        ).AndReturn(dict(identity))
-        self.m.ReplayAll()
-
-        response = self.controller.create(req,
-                                          tenant_id=identity.tenant,
-                                          body=body)
-
-        expected = {'stack':
-                    {'id': '1',
-                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
-        self.assertEqual(expected, response)
-        self.m.VerifyAll()
+        )

    def test_adopt_timeout_not_int(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
@@ -901,7 +898,7 @@
        self.assertEqual("Only integer is acceptable by 'timeout_mins'.",
                         six.text_type(ex))

-        self.assertFalse(mock_call.called)
+        mock_call.assert_not_called()

    def test_adopt_error(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
@@ -916,7 +913,6 @@

        req = self._post('/stacks', json.dumps(body))

-        self.m.ReplayAll()
        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.create,
                                             req, tenant_id=self.tenant,
@@ -924,7 +920,6 @@
        self.assertEqual(400, resp.status_code)
        self.assertEqual('400 Bad Request', resp.status)
        self.assertIn('Invalid adopt data', resp.text)
-        self.m.VerifyAll()

    def test_create_with_files(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
@@ -939,8 +934,18 @@

        req = self._post('/stacks', json.dumps(body))

-        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
-        rpc_client.EngineClient.call(
+        mock_call = self.patchobject(rpc_client.EngineClient, 'call',
+                                     return_value=dict(identity))
+
+        result = self.controller.create(req,
+                                        tenant_id=identity.tenant,
+                                        body=body)
+        expected = {'stack':
+                    {'id': '1',
+                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
+        self.assertEqual(expected, result)
+
+        mock_call.assert_called_once_with(
            req.context,
            ('create_stack',
             {'stack_name': identity.stack_name,
@@ -960,18 +965,7 @@
              'stack_user_project_id': None,
              'template_id': None}),
            version='1.29'
-        ).AndReturn(dict(identity))
-        self.m.ReplayAll()
-
-        result = self.controller.create(req,
-                                        tenant_id=identity.tenant,
-                                        body=body)
-        expected = {'stack':
-                    {'id': '1',
-                     'links': [{'href': self._url(identity), 'rel': 'self'}]}}
-        self.assertEqual(expected, result)
-
-        self.m.VerifyAll()
+        )

    def test_create_err_rpcerr(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True, 3)
@@ -987,71 +981,14 @@
        unknown_parameter = heat_exc.UnknownUserParameter(key='a')
        missing_parameter = heat_exc.UserParameterMissing(key='a')

-        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
-        rpc_client.EngineClient.call(
-            req.context,
-            ('create_stack',
-             {'stack_name': stack_name,
-              'template': template,
-              'params': {'parameters': parameters,
-                         'encrypted_param_names': [],
-                         'parameter_defaults': {},
-                         'event_sinks': [],
-                         'resource_registry': {}},
-              'files': {},
-              'environment_files': None,
-              'args': {'timeout_mins': 30},
-              'owner_id': None,
-              'nested_depth': 0,
-              'user_creds_id': None,
-              'parent_resource_name': None,
-              'stack_user_project_id': None,
-              'template_id': None}),
-            version='1.29'
-        ).AndRaise(tools.to_remote_error(AttributeError()))
-        rpc_client.EngineClient.call(
-            req.context,
-            ('create_stack',
-             {'stack_name': stack_name,
-              'template': template,
-              'params': {'parameters': parameters,
-                         'encrypted_param_names': [],
-                         'parameter_defaults': {},
-                         'event_sinks': [],
-                         'resource_registry': {}},
-              'files': {},
-              'environment_files': None,
-              'args': {'timeout_mins': 30},
-              'owner_id': None,
-              'nested_depth': 0,
-              'user_creds_id': None,
-              'parent_resource_name': None,
-              'stack_user_project_id': None,
-              'template_id': None}),
-            version='1.29'
-        ).AndRaise(tools.to_remote_error(unknown_parameter))
-        rpc_client.EngineClient.call(
-            req.context,
-            ('create_stack',
-             {'stack_name': stack_name,
-              'template': template,
-              'params': {'parameters': parameters,
-                         'encrypted_param_names': [],
-                         'parameter_defaults': {},
-                         'event_sinks': [],
-                         'resource_registry': {}},
-              'files': {},
-              'environment_files': None,
-              'args': {'timeout_mins': 30},
-              'owner_id': None,
-              'nested_depth': 0,
-              'user_creds_id': None,
-              'parent_resource_name': None,
-              'stack_user_project_id': None,
-              'template_id': None}),
-            version='1.29'
-        ).AndRaise(tools.to_remote_error(missing_parameter))
-        self.m.ReplayAll()
+        mock_call = self.patchobject(
+            rpc_client.EngineClient, 'call',
+            side_effect=[
+                tools.to_remote_error(AttributeError()),
+                tools.to_remote_error(unknown_parameter),
+                tools.to_remote_error(missing_parameter),
+            ])
+
        resp = tools.request_with_middleware(fault.FaultWrapper,
                                             self.controller.create,
                                             req, tenant_id=self.tenant,
@@ -1075,7 +1012,29 @@
        self.assertEqual(400, resp.json['code'])
        self.assertEqual('UserParameterMissing',
                         resp.json['error']['type'])
-        self.m.VerifyAll()
+
+        mock_call.assert_called_with(
+            req.context,
+            ('create_stack',
+             {'stack_name': stack_name,
+              'template': template,
+              'params': {'parameters': parameters,
+                         'encrypted_param_names': [],
+                         'parameter_defaults': {},
+                         'event_sinks': [],
+                         'resource_registry': {}},
+              'files': {},
+              'environment_files': None,
+              'args': {'timeout_mins': 30},
+              'owner_id': None,
+              'nested_depth': 0,
+              'user_creds_id': None,
+              'parent_resource_name': None,
+              'stack_user_project_id': None,
+              'template_id': None}),
+            version='1.29'
+        )
+        self.assertEqual(3, mock_call.call_count)

    def test_create_err_existing(self, mock_enforce):
        self._mock_enforce_setup(mock_enforce, 'create', True)
@@ -1090,8 +1049,18 @@

        req = self._post('/stacks', json.dumps(body))

        error = heat_exc.StackExists(stack_name='s')
-        self.m.StubOutWithMock(rpc_client.EngineClient, 'call')
-        rpc_client.EngineClient.call(
+        mock_call = self.patchobject(rpc_client.EngineClient, 'call',
+                                     side_effect=tools.to_remote_error(error))
+
+        resp = tools.request_with_middleware(fault.FaultWrapper,
+                                             self.controller.create,
+                                             req, tenant_id=self.tenant,
+                                             body=body)
+
+        self.assertEqual(409, resp.json['code'])
+        self.assertEqual('StackExists', resp.json['error']['type'])
+
+        mock_call.assert_called_once_with(
            req.context,
            ('create_stack',
             {'stack_name': stack_name,
@@ -1111,17
+1080,7 @@ 'stack_user_project_id': None, 'template_id': None}), version='1.29' - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() - - resp = tools.request_with_middleware(fault.FaultWrapper, - self.controller.create, - req, tenant_id=self.tenant, - body=body) - - self.assertEqual(409, resp.json['code']) - self.assertEqual('StackExists', resp.json['error']['type']) - self.m.VerifyAll() + ) def test_create_timeout_not_int(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'create', True) @@ -1142,7 +1101,7 @@ self.assertEqual("Only integer is acceptable by 'timeout_mins'.", six.text_type(ex)) - self.assertFalse(mock_call.called) + mock_call.assert_not_called() def test_create_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'create', False) @@ -1177,8 +1136,17 @@ req = self._post('/stacks', json.dumps(body)) error = heat_exc.StackValidationFailed(message='') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) + + resp = tools.request_with_middleware(fault.FaultWrapper, + self.controller.create, + req, tenant_id=self.tenant, + body=body) + self.assertEqual(400, resp.json['code']) + self.assertEqual('StackValidationFailed', resp.json['error']['type']) + + mock_call.assert_called_once_with( req.context, ('create_stack', {'stack_name': stack_name, @@ -1198,16 +1166,7 @@ 'stack_user_project_id': None, 'template_id': None}), version='1.29' - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() - - resp = tools.request_with_middleware(fault.FaultWrapper, - self.controller.create, - req, tenant_id=self.tenant, - body=body) - self.assertEqual(400, resp.json['code']) - self.assertEqual('StackValidationFailed', resp.json['error']['type']) - self.m.VerifyAll() + ) def test_create_err_stack_bad_reqest(self, mock_enforce): cfg.CONF.set_override('debug', True) @@ -1300,8 
+1259,16 @@ 'added': [], 'replaced': []} - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=resource_changes) + + result = self.controller.preview_update(req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + self.assertEqual({'resource_changes': resource_changes}, result) + + mock_call.assert_called_once_with( req.context, ('preview_update_stack', {'stack_identity': dict(identity), @@ -1315,15 +1282,7 @@ 'environment_files': None, 'args': {'timeout_mins': 30}}), version='1.23' - ).AndReturn(resource_changes) - self.m.ReplayAll() - - result = self.controller.preview_update(req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - self.assertEqual({'resource_changes': resource_changes}, result) - self.m.VerifyAll() + ) def test_preview_update_stack_patch(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'preview_update_patch', True) @@ -1342,8 +1301,15 @@ 'added': [], 'replaced': []} - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=resource_changes) + + result = self.controller.preview_update_patch( + req, tenant_id=identity.tenant, stack_name=identity.stack_name, + stack_id=identity.stack_id, body=body) + self.assertEqual({'resource_changes': resource_changes}, result) + + mock_call.assert_called_once_with( req.context, ('preview_update_stack', {'stack_identity': dict(identity), @@ -1358,14 +1324,7 @@ 'args': {rpc_api.PARAM_EXISTING: True, 'timeout_mins': 30}}), version='1.23' - ).AndReturn(resource_changes) - self.m.ReplayAll() - - result = self.controller.preview_update_patch( - req, tenant_id=identity.tenant, stack_name=identity.stack_name, - stack_id=identity.stack_id, body=body) - 
self.assertEqual({'resource_changes': resource_changes}, result) - self.m.VerifyAll() + ) @mock.patch.object(rpc_client.EngineClient, 'call') def test_update_immutable_parameter(self, mock_call, mock_enforce): @@ -1382,8 +1341,23 @@ identity, json.dumps(body)) error = heat_exc.ImmutableParameterModified(keys='param1') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) + + resp = tools.request_with_middleware(fault.FaultWrapper, + self.controller.update, + req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + + self.assertEqual(400, resp.json['code']) + self.assertEqual('ImmutableParameterModified', + resp.json['error']['type']) + self.assertIn("The following parameters are immutable", + six.text_type(resp.json['error']['message'])) + + mock_call.assert_called_once_with( req.context, ('update_stack', {'stack_identity': dict(identity), @@ -1398,22 +1372,7 @@ 'args': {'timeout_mins': 30}, 'template_id': None}), version='1.29' - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() - - resp = tools.request_with_middleware(fault.FaultWrapper, - self.controller.update, - req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - - self.assertEqual(400, resp.json['code']) - self.assertEqual('ImmutableParameterModified', - resp.json['error']['type']) - self.assertIn("The following parameters are immutable", - six.text_type(resp.json['error']['message'])) - self.m.VerifyAll() + ) def test_lookup(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'lookup', True) @@ -1421,20 +1380,18 @@ req = self._get('/stacks/%(stack_name)s' % identity) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('identify_stack', {'stack_name': identity.stack_name}) - 
).AndReturn(identity) - - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=dict(identity)) found = self.assertRaises( webob.exc.HTTPFound, self.controller.lookup, req, tenant_id=identity.tenant, stack_name=identity.stack_name) self.assertEqual(self._url(identity), found.location) - self.m.VerifyAll() + mock_call.assert_called_once_with( + req.context, + ('identify_stack', {'stack_name': identity.stack_name}) + ) def test_lookup_arn(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'lookup', True) @@ -1442,15 +1399,11 @@ req = self._get('/stacks%s' % identity.arn_url_path()) - self.m.ReplayAll() - found = self.assertRaises( webob.exc.HTTPFound, self.controller.lookup, req, tenant_id=identity.tenant, stack_name=identity.arn()) self.assertEqual(self._url(identity), found.location) - self.m.VerifyAll() - def test_lookup_nonexistent(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'lookup', True) stack_name = 'wibble' @@ -1459,12 +1412,8 @@ 'stack_name': stack_name}) error = heat_exc.EntityNotFound(entity='Stack', name='a') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('identify_stack', {'stack_name': stack_name}) - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.lookup, @@ -1473,7 +1422,11 @@ self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('identify_stack', {'stack_name': stack_name}) + ) def test_lookup_err_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'lookup', False) @@ -1496,13 +1449,8 @@ req = self._get('/stacks/%(stack_name)s/resources' % identity) - 
self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('identify_stack', {'stack_name': identity.stack_name}) - ).AndReturn(identity) - - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=dict(identity)) found = self.assertRaises( webob.exc.HTTPFound, self.controller.lookup, req, @@ -1511,7 +1459,10 @@ self.assertEqual(self._url(identity) + '/resources', found.location) - self.m.VerifyAll() + mock_call.assert_called_once_with( + req.context, + ('identify_stack', {'stack_name': identity.stack_name}) + ) def test_lookup_resource_nonexistent(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'lookup', True) @@ -1521,12 +1472,8 @@ 'stack_name': stack_name}) error = heat_exc.EntityNotFound(entity='Stack', name='a') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('identify_stack', {'stack_name': stack_name}) - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.lookup, @@ -1536,7 +1483,11 @@ self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('identify_stack', {'stack_name': stack_name}) + ) def test_lookup_resource_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'lookup', False) @@ -1588,14 +1539,9 @@ u'capabilities': [], } ] - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('show_stack', {'stack_identity': dict(identity), - 'resolve_outputs': True}), - version='1.20' - ).AndReturn(engine_resp) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + 
return_value=engine_resp) + response = self.controller.show(req, tenant_id=identity.tenant, stack_name=identity.stack_name, @@ -1621,7 +1567,13 @@ } } self.assertEqual(expected, response) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('show_stack', {'stack_identity': dict(identity), + 'resolve_outputs': True}), + version='1.20' + ) def test_show_without_resolve_outputs(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) @@ -1653,14 +1605,9 @@ u'capabilities': [], } ] - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('show_stack', {'stack_identity': dict(identity), - 'resolve_outputs': False}), - version='1.20' - ).AndReturn(engine_resp) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=engine_resp) + response = self.controller.show(req, tenant_id=identity.tenant, stack_name=identity.stack_name, @@ -1685,7 +1632,13 @@ } } self.assertEqual(expected, response) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('show_stack', {'stack_identity': dict(identity), + 'resolve_outputs': False}), + version='1.20' + ) def test_show_notfound(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', True) @@ -1693,14 +1646,8 @@ req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity) error = heat_exc.EntityNotFound(entity='Stack', name='a') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('show_stack', {'stack_identity': dict(identity), - 'resolve_outputs': True}), - version='1.20' - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.show, @@ -1710,15 +1657,19 @@ self.assertEqual(404, resp.json['code']) 
self.assertEqual('EntityNotFound', resp.json['error']['type']) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('show_stack', {'stack_identity': dict(identity), + 'resolve_outputs': True}), + version='1.20' + ) def test_show_invalidtenant(self, mock_enforce): identity = identifier.HeatIdentifier('wibble', 'wordpress', '6') req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity) - self.m.ReplayAll() - resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.show, req, tenant_id=identity.tenant, @@ -1727,7 +1678,6 @@ self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) - self.m.VerifyAll() def test_show_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show', False) @@ -1750,19 +1700,19 @@ req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity) template = {u'Foo': u'bar'} - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('get_template', {'stack_identity': dict(identity)}) - ).AndReturn(template) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=template) response = self.controller.template(req, tenant_id=identity.tenant, stack_name=identity.stack_name, stack_id=identity.stack_id) self.assertEqual(template, response) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('get_template', {'stack_identity': dict(identity)}) + ) def test_get_environment(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'environment', True) @@ -1770,20 +1720,20 @@ req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity) env = {'parameters': {'Foo': 'bar'}} - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('get_environment', {'stack_identity': dict(identity)},), - version='1.28', - ).AndReturn(env) - self.m.ReplayAll() + mock_call = 
self.patchobject(rpc_client.EngineClient, 'call', + return_value=env) response = self.controller.environment(req, tenant_id=identity.tenant, stack_name=identity.stack_name, stack_id=identity.stack_id) self.assertEqual(env, response) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('get_environment', {'stack_identity': dict(identity)},), + version='1.28', + ) def test_get_files(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'files', True) @@ -1791,20 +1741,20 @@ req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity) files = {'foo.yaml': 'i am yaml'} - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('get_files', {'stack_identity': dict(identity)},), - version='1.32', - ).AndReturn(files) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=files) response = self.controller.files(req, tenant_id=identity.tenant, stack_name=identity.stack_name, stack_id=identity.stack_id) self.assertEqual(files, response) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('get_files', {'stack_identity': dict(identity)},), + version='1.32', + ) def test_get_template_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'template', False) @@ -1812,7 +1762,6 @@ req = self._get('/stacks/%(stack_name)s/%(stack_id)s/template' % identity) - self.m.ReplayAll() resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.template, req, tenant_id=identity.tenant, @@ -1821,7 +1770,6 @@ self.assertEqual(403, resp.status_int) self.assertIn('403 Forbidden', six.text_type(resp)) - self.m.VerifyAll() def test_get_template_err_notfound(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'template', True) @@ -1829,13 +1777,8 @@ req = self._get('/stacks/%(stack_name)s/%(stack_id)s' % identity) error = heat_exc.EntityNotFound(entity='Stack', name='a') - 
self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('get_template', {'stack_identity': dict(identity)}) - ).AndRaise(tools.to_remote_error(error)) - - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.template, @@ -1845,7 +1788,11 @@ self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('get_template', {'stack_identity': dict(identity)}) + ) def test_update(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'update', True) @@ -1860,8 +1807,17 @@ req = self._put('/stacks/%(stack_name)s/%(stack_id)s' % identity, json.dumps(body)) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=dict(identity)) + + self.assertRaises(webob.exc.HTTPAccepted, + self.controller.update, + req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + + mock_call.assert_called_once_with( req.context, ('update_stack', {'stack_identity': dict(identity), @@ -1876,16 +1832,7 @@ 'args': {'timeout_mins': 30}, 'template_id': None}), version='1.29' - ).AndReturn(dict(identity)) - self.m.ReplayAll() - - self.assertRaises(webob.exc.HTTPAccepted, - self.controller.update, - req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - self.m.VerifyAll() + ) def test_update_with_tags(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'update', True) @@ -1901,8 +1848,17 @@ req = self._put('/stacks/%(stack_name)s/%(stack_id)s' % identity, json.dumps(body)) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + 
mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=dict(identity)) + + self.assertRaises(webob.exc.HTTPAccepted, + self.controller.update, + req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + + mock_call.assert_called_once_with( req.context, ('update_stack', {'stack_identity': dict(identity), @@ -1917,16 +1873,7 @@ 'args': {'timeout_mins': 30, 'tags': ['tag1', 'tag2']}, 'template_id': None}), version='1.29' - ).AndReturn(dict(identity)) - self.m.ReplayAll() - - self.assertRaises(webob.exc.HTTPAccepted, - self.controller.update, - req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - self.m.VerifyAll() + ) def test_update_bad_name(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'update', True) @@ -1942,8 +1889,20 @@ json.dumps(body)) error = heat_exc.EntityNotFound(entity='Stack', name='a') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) + + resp = tools.request_with_middleware(fault.FaultWrapper, + self.controller.update, + req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + + self.assertEqual(404, resp.json['code']) + self.assertEqual('EntityNotFound', resp.json['error']['type']) + + mock_call.assert_called_once_with( req.context, ('update_stack', {'stack_identity': dict(identity), @@ -1958,19 +1917,7 @@ 'args': {'timeout_mins': 30}, 'template_id': None}), version='1.29' - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() - - resp = tools.request_with_middleware(fault.FaultWrapper, - self.controller.update, - req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - - self.assertEqual(404, resp.json['code']) - 
self.assertEqual('EntityNotFound', resp.json['error']['type']) - self.m.VerifyAll() + ) def test_update_timeout_not_int(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'update', True) @@ -2030,8 +1977,17 @@ req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity, json.dumps(body)) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=dict(identity)) + + self.assertRaises(webob.exc.HTTPAccepted, + self.controller.update_patch, + req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + + mock_call.assert_called_once_with( req.context, ('update_stack', {'stack_identity': dict(identity), @@ -2047,16 +2003,7 @@ 'timeout_mins': 30}, 'template_id': None}), version='1.29' - ).AndReturn(dict(identity)) - self.m.ReplayAll() - - self.assertRaises(webob.exc.HTTPAccepted, - self.controller.update_patch, - req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - self.m.VerifyAll() + ) def test_update_with_existing_parameters(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'update_patch', True) @@ -2070,8 +2017,17 @@ req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity, json.dumps(body)) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=dict(identity)) + + self.assertRaises(webob.exc.HTTPAccepted, + self.controller.update_patch, + req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + + mock_call.assert_called_once_with( req.context, ('update_stack', {'stack_identity': dict(identity), @@ -2087,16 +2043,7 @@ 'timeout_mins': 30}, 'template_id': None}), version='1.29' - ).AndReturn(dict(identity)) - self.m.ReplayAll() - - 
self.assertRaises(webob.exc.HTTPAccepted, - self.controller.update_patch, - req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - self.m.VerifyAll() + ) def test_update_with_existing_parameters_with_tags(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'update_patch', True) @@ -2111,8 +2058,17 @@ req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity, json.dumps(body)) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=dict(identity)) + + self.assertRaises(webob.exc.HTTPAccepted, + self.controller.update_patch, + req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + + mock_call.assert_called_once_with( req.context, ('update_stack', {'stack_identity': dict(identity), @@ -2129,16 +2085,7 @@ 'tags': ['tag1', 'tag2']}, 'template_id': None}), version='1.29' - ).AndReturn(dict(identity)) - self.m.ReplayAll() - - self.assertRaises(webob.exc.HTTPAccepted, - self.controller.update_patch, - req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - self.m.VerifyAll() + ) def test_update_with_patched_existing_parameters(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'update_patch', True) @@ -2153,8 +2100,17 @@ req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity, json.dumps(body)) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=dict(identity)) + + self.assertRaises(webob.exc.HTTPAccepted, + self.controller.update_patch, + req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + + mock_call.assert_called_once_with( req.context, ('update_stack', {'stack_identity': 
dict(identity), @@ -2170,16 +2126,7 @@ 'timeout_mins': 30}, 'template_id': None}), version='1.29' - ).AndReturn(dict(identity)) - self.m.ReplayAll() - - self.assertRaises(webob.exc.HTTPAccepted, - self.controller.update_patch, - req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - self.m.VerifyAll() + ) def test_update_with_patch_timeout_not_int(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'update_patch', True) @@ -2220,8 +2167,17 @@ req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity, json.dumps(body)) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=dict(identity)) + + self.assertRaises(webob.exc.HTTPAccepted, + self.controller.update_patch, + req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + + mock_call.assert_called_once_with( req.context, ('update_stack', {'stack_identity': dict(identity), @@ -2238,16 +2194,7 @@ 'timeout_mins': 30}, 'template_id': None}), version='1.29' - ).AndReturn(dict(identity)) - self.m.ReplayAll() - - self.assertRaises(webob.exc.HTTPAccepted, - self.controller.update_patch, - req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - self.m.VerifyAll() + ) def test_update_with_patched_and_default_parameters( self, mock_enforce): @@ -2265,8 +2212,17 @@ req = self._patch('/stacks/%(stack_name)s/%(stack_id)s' % identity, json.dumps(body)) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=dict(identity)) + + self.assertRaises(webob.exc.HTTPAccepted, + self.controller.update_patch, + req, tenant_id=identity.tenant, + stack_name=identity.stack_name, + stack_id=identity.stack_id, + body=body) + + 
mock_call.assert_called_once_with( req.context, ('update_stack', {'stack_identity': dict(identity), @@ -2283,16 +2239,7 @@ 'timeout_mins': 30}, 'template_id': None}), version='1.29' - ).AndReturn(dict(identity)) - self.m.ReplayAll() - - self.assertRaises(webob.exc.HTTPAccepted, - self.controller.update_patch, - req, tenant_id=identity.tenant, - stack_name=identity.stack_name, - stack_id=identity.stack_id, - body=body) - self.m.VerifyAll() + ) def test_delete(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'delete', True) @@ -2300,20 +2247,20 @@ req = self._delete('/stacks/%(stack_name)s/%(stack_id)s' % identity) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') # Engine returns None when delete successful - rpc_client.EngineClient.call( - req.context, - ('delete_stack', {'stack_identity': dict(identity)}) - ).AndReturn(None) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=None) self.assertRaises(webob.exc.HTTPNoContent, self.controller.delete, req, tenant_id=identity.tenant, stack_name=identity.stack_name, stack_id=identity.stack_id) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('delete_stack', {'stack_identity': dict(identity)}) + ) def test_delete_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'delete', False) @@ -2336,43 +2283,43 @@ req = self._get('/stacks/%(stack_name)s/%(stack_id)s/export' % identity) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') # Engine returns json data expected = {"name": "test", "id": "123"} - rpc_client.EngineClient.call( - req.context, - ('export_stack', {'stack_identity': dict(identity)}), - version='1.22' - ).AndReturn(expected) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=expected) ret = self.controller.export(req, tenant_id=identity.tenant, stack_name=identity.stack_name, stack_id=identity.stack_id) self.assertEqual(expected, ret) - 
self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('export_stack', {'stack_identity': dict(identity)}), + version='1.22' + ) def test_abandon(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'abandon', True) identity = identifier.HeatIdentifier(self.tenant, 'wordpress', '6') req = self._abandon('/stacks/%(stack_name)s/%(stack_id)s' % identity) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') # Engine returns json data on abandon completion expected = {"name": "test", "id": "123"} - rpc_client.EngineClient.call( - req.context, - ('abandon_stack', {'stack_identity': dict(identity)}) - ).AndReturn(expected) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=expected) ret = self.controller.abandon(req, tenant_id=identity.tenant, stack_name=identity.stack_name, stack_id=identity.stack_id) self.assertEqual(expected, ret) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('abandon_stack', {'stack_identity': dict(identity)}) + ) def test_abandon_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'abandon', False) @@ -2396,13 +2343,9 @@ req = self._delete('/stacks/%(stack_name)s/%(stack_id)s' % identity) error = heat_exc.EntityNotFound(entity='Stack', name='a') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') # Engine returns None when delete successful - rpc_client.EngineClient.call( - req.context, - ('delete_stack', {'stack_identity': dict(identity)}) - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.delete, @@ -2412,7 +2355,11 @@ self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('delete_stack', 
{'stack_identity': dict(identity)}) + ) def test_validate_template(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'validate_template', True) @@ -2432,8 +2379,15 @@ ] } - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=engine_response) + + response = self.controller.validate_template(req, + tenant_id=self.tenant, + body=body) + self.assertEqual(engine_response, response) + + mock_call.assert_called_once_with( req.context, ('validate_template', {'template': template, @@ -2447,14 +2401,7 @@ 'show_nested': False, 'ignorable_errors': None}), version='1.24' - ).AndReturn(engine_response) - self.m.ReplayAll() - - response = self.controller.validate_template(req, - tenant_id=self.tenant, - body=body) - self.assertEqual(engine_response, response) - self.m.VerifyAll() + ) def test_validate_template_error(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'validate_template', True) @@ -2463,8 +2410,14 @@ req = self._post('/validate', json.dumps(body)) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value={'Error': 'fubar'}) + + self.assertRaises(webob.exc.HTTPBadRequest, + self.controller.validate_template, + req, tenant_id=self.tenant, body=body) + + mock_call.assert_called_once_with( req.context, ('validate_template', {'template': template, @@ -2478,13 +2431,7 @@ 'show_nested': False, 'ignorable_errors': None}), version='1.24' - ).AndReturn({'Error': 'fubar'}) - self.m.ReplayAll() - - self.assertRaises(webob.exc.HTTPBadRequest, - self.controller.validate_template, - req, tenant_id=self.tenant, body=body) - self.m.VerifyAll() + ) def test_validate_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'validate_template', False) @@ -2509,8 +2456,14 @@ 'AWS::EC2::EIP', 'AWS::EC2::EIPAssociation'] - 
self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=engine_response) + + response = self.controller.list_resource_types(req, + tenant_id=self.tenant) + self.assertEqual({'resource_types': engine_response}, response) + + mock_call.assert_called_once_with( req.context, ('list_resource_types', { @@ -2520,20 +2473,25 @@ 'with_description': False }), version="1.30" - ).AndReturn(engine_response) - self.m.ReplayAll() - response = self.controller.list_resource_types(req, - tenant_id=self.tenant) - self.assertEqual({'resource_types': engine_response}, response) - self.m.VerifyAll() + ) def test_list_resource_types_error(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'list_resource_types', True) req = self._get('/resource_types') error = heat_exc.EntityNotFound(entity='Resource Type', name='') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) + + resp = tools.request_with_middleware( + fault.FaultWrapper, + self.controller.list_resource_types, + req, tenant_id=self.tenant) + + self.assertEqual(404, resp.json['code']) + self.assertEqual('EntityNotFound', resp.json['error']['type']) + + mock_call.assert_called_once_with( req.context, ('list_resource_types', { @@ -2543,17 +2501,7 @@ 'with_description': False }), version="1.30" - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() - - resp = tools.request_with_middleware( - fault.FaultWrapper, - self.controller.list_resource_types, - req, tenant_id=self.tenant) - - self.assertEqual(404, resp.json['code']) - self.assertEqual('EntityNotFound', resp.json['error']['type']) - self.m.VerifyAll() + ) def test_list_resource_types_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'list_resource_types', False) @@ -2575,20 
+2523,20 @@ {'output_key': 'key2', 'description': 'description1'} ] - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('list_outputs', {'stack_identity': dict(identity)}), - version='1.19' - ).AndReturn(outputs) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=outputs) response = self.controller.list_outputs(req, tenant_id=identity.tenant, stack_name=identity.stack_name, stack_id=identity.stack_id) self.assertEqual({'outputs': outputs}, response) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('list_outputs', {'stack_identity': dict(identity)}), + version='1.19' + ) def test_show_output(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'show_output', True) @@ -2598,14 +2546,8 @@ 'output_value': 'val', 'description': 'description'} - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('show_output', {'output_key': 'key', - 'stack_identity': dict(identity)}), - version='1.19' - ).AndReturn(output) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=output) response = self.controller.show_output(req, tenant_id=identity.tenant, stack_name=identity.stack_name, @@ -2613,7 +2555,12 @@ output_key='key') self.assertEqual({'output': output}, response) - self.m.VerifyAll() + mock_call.assert_called_once_with( + req.context, + ('show_output', {'output_key': 'key', + 'stack_identity': dict(identity)}), + version='1.19' + ) def test_list_template_versions(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'list_template_versions', True) @@ -2622,34 +2569,34 @@ engine_response = [ {'version': 'heat_template_version.2013-05-23', 'type': 'hot'}, {'version': 'AWSTemplateFormatVersion.2010-09-09', 'type': 'cfn'}] + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=engine_response) - 
self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, ('list_template_versions', {}), - version="1.11" - ).AndReturn(engine_response) - self.m.ReplayAll() response = self.controller.list_template_versions( req, tenant_id=self.tenant) self.assertEqual({'template_versions': engine_response}, response) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, ('list_template_versions', {}), + version="1.11" + ) def _test_list_template_functions(self, mock_enforce, req, engine_response, with_condition=False): self._mock_enforce_setup(mock_enforce, 'list_template_functions', True) + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=engine_response) + + response = self.controller.list_template_functions( + req, tenant_id=self.tenant, template_version='t1') + self.assertEqual({'template_functions': engine_response}, response) - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call.assert_called_once_with( req.context, ( 'list_template_functions', {'template_version': 't1', 'with_condition': with_condition}), version="1.35" - ).AndReturn(engine_response) - self.m.ReplayAll() - response = self.controller.list_template_functions( - req, tenant_id=self.tenant, template_version='t1') - self.assertEqual({'template_functions': engine_response}, response) - self.m.VerifyAll() + ) def test_list_template_functions(self, mock_enforce): req = self._get('/template_versions/t1/functions') @@ -2691,19 +2638,20 @@ 'message': None, }, } - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('resource_schema', {'type_name': type_name, - 'with_description': False}), - version='1.30' - ).AndReturn(engine_response) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=engine_response) + response = self.controller.resource_schema(req, tenant_id=self.tenant, 
type_name=type_name) self.assertEqual(engine_response, response) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('resource_schema', {'type_name': type_name, + 'with_description': False}), + version='1.30' + ) def test_resource_schema_nonexist(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'resource_schema', True) @@ -2712,14 +2660,8 @@ error = heat_exc.EntityNotFound(entity='Resource Type', name='BogusResourceType') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('resource_schema', {'type_name': type_name, - 'with_description': False}), - version='1.30' - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.resource_schema, @@ -2727,7 +2669,13 @@ type_name=type_name) self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', resp.json['error']['type']) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('resource_schema', {'type_name': type_name, + 'with_description': False}), + version='1.30' + ) def test_resource_schema_faulty_template(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'resource_schema', True) @@ -2735,14 +2683,8 @@ type_name = 'FaultyTemplate' error = heat_exc.InvalidGlobalResource(type_name='FaultyTemplate') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('resource_schema', {'type_name': type_name, - 'with_description': False}), - version='1.30' - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.resource_schema, @@ -2750,7 +2692,13 @@ type_name=type_name) 
self.assertEqual(500, resp.json['code']) self.assertEqual('InvalidGlobalResource', resp.json['error']['type']) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('resource_schema', {'type_name': type_name, + 'with_description': False}), + version='1.30' + ) def test_resource_schema_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'resource_schema', False) @@ -2769,18 +2717,18 @@ req = self._get('/resource_types/TEST_TYPE/template') engine_response = {'Type': 'TEST_TYPE'} + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + return_value=engine_response) + + self.controller.generate_template(req, tenant_id=self.tenant, + type_name='TEST_TYPE') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( + mock_call.assert_called_once_with( req.context, ('generate_template', {'type_name': 'TEST_TYPE', 'template_type': 'cfn'}), version='1.9' - ).AndReturn(engine_response) - self.m.ReplayAll() - self.controller.generate_template(req, tenant_id=self.tenant, - type_name='TEST_TYPE') - self.m.VerifyAll() + ) def test_generate_template_invalid_template_type(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'generate_template', True) @@ -2804,21 +2752,22 @@ req = self._get('/resource_types/NOT_FOUND/template') error = heat_exc.EntityNotFound(entity='Resource Type', name='a') - self.m.StubOutWithMock(rpc_client.EngineClient, 'call') - rpc_client.EngineClient.call( - req.context, - ('generate_template', {'type_name': 'NOT_FOUND', - 'template_type': 'cfn'}), - version='1.9' - ).AndRaise(tools.to_remote_error(error)) - self.m.ReplayAll() + mock_call = self.patchobject(rpc_client.EngineClient, 'call', + side_effect=tools.to_remote_error(error)) + resp = tools.request_with_middleware(fault.FaultWrapper, self.controller.generate_template, req, tenant_id=self.tenant, type_name='NOT_FOUND') self.assertEqual(404, resp.json['code']) self.assertEqual('EntityNotFound', 
resp.json['error']['type']) - self.m.VerifyAll() + + mock_call.assert_called_once_with( + req.context, + ('generate_template', {'type_name': 'NOT_FOUND', + 'template_type': 'cfn'}), + version='1.9' + ) def test_generate_template_err_denied_policy(self, mock_enforce): self._mock_enforce_setup(mock_enforce, 'generate_template', False) diff -Nru heat-11.0.0~b1/heat/tests/aws/test_instance_network.py heat-11.0.0~b2/heat/tests/aws/test_instance_network.py --- heat-11.0.0~b1/heat/tests/aws/test_instance_network.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/aws/test_instance_network.py 2018-06-07 22:12:28.000000000 +0000 @@ -155,10 +155,9 @@ self.fc = fakes_nova.FakeClient() def _mock_get_image_id_success(self, imageId_input, imageId): - self.m.StubOutWithMock(glance.GlanceClientPlugin, - 'find_image_by_name_or_id') - glance.GlanceClientPlugin.find_image_by_name_or_id( - imageId_input).MultipleTimes().AndReturn(imageId) + self.m_f_i = self.patchobject(glance.GlanceClientPlugin, + 'find_image_by_name_or_id', + return_value=imageId) def _test_instance_create_delete(self, vm_status='ACTIVE', vm_delete_status='NotFound'): @@ -179,22 +178,20 @@ d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]} d1['server']['status'] = vm_status - self.m.StubOutWithMock(self.fc.client, 'get_servers_1234') - get = self.fc.client.get_servers_1234 - get().AndReturn((200, d1)) + m_gs_side_effects = [(200, d1)] d2 = copy.deepcopy(d1) if vm_delete_status == 'DELETED': d2['server']['status'] = vm_delete_status - get().AndReturn((200, d2)) + m_gs_side_effects.append((200, d2)) else: - get().AndRaise(fakes_nova.fake_exception()) - - self.m.ReplayAll() + m_gs_side_effects.append(fakes_nova.fake_exception) + self.patchobject(self.fc.client, 'get_servers_1234', + side_effect=m_gs_side_effects) scheduler.TaskRunner(instance.delete)() self.assertEqual((instance.DELETE, instance.COMPLETE), instance.state) - self.m.VerifyAll() + self.assertEqual(2, 
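A sequence of mox `AndReturn`/`AndRaise` expectations on the same stub, as in the `get_servers_1234` hunks, maps onto mock's `side_effect` list. One subtlety worth noting: an item from a `side_effect` iterable is raised only if it is an exception class or instance; a bare function object would simply be returned as a value, so exception factories such as `fakes_nova.fake_exception` must be invoked (with parentheses) when building the list. A minimal sketch with a hypothetical `NotFound` exception:

```python
import unittest.mock as mock

class NotFound(Exception):
    """Illustrative stand-in for the 404 built by fakes_nova.fake_exception()."""

# First call returns a (status, body) tuple, second call raises --
# the mock equivalent of get().AndReturn(...) then get().AndRaise(...).
get = mock.Mock(side_effect=[(200, {'server': {'status': 'ACTIVE'}}),
                             NotFound()])

first = get()
deleted = False
try:
    get()
except NotFound:
    deleted = True

assert first == (200, {'server': {'status': 'ACTIVE'}})
assert deleted
assert get.call_count == 2
```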
self.fc.client.get_servers_1234.call_count) def _create_test_instance(self, return_server, name): stack_name = '%s_s' % name @@ -213,42 +210,43 @@ resource_defns['WebServer'], self.stack) metadata = instance.metadata_get() - self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') - nova.NovaClientPlugin._create().AndReturn(self.fc) + self.patchobject(nova.NovaClientPlugin, '_create', + return_value=self.fc) self._mock_get_image_id_success(image_id, 1) self.stub_SubnetConstraint_validate() - self.m.StubOutWithMock(instance, 'neutron') - instance.neutron().MultipleTimes().AndReturn(FakeNeutron()) + self.patchobject(instance, 'neutron', return_value=FakeNeutron()) - self.m.StubOutWithMock(neutron.NeutronClientPlugin, '_create') - neutron.NeutronClientPlugin._create().MultipleTimes().AndReturn( - FakeNeutron()) + self.patchobject(neutron.NeutronClientPlugin, '_create', + return_value=FakeNeutron()) # need to resolve the template functions server_userdata = instance.client_plugin().build_userdata( metadata, instance.properties['UserData'], 'ec2-user') - self.m.StubOutWithMock(nova.NovaClientPlugin, 'build_userdata') - nova.NovaClientPlugin.build_userdata( - metadata, - instance.properties['UserData'], - 'ec2-user').AndReturn(server_userdata) + self.patchobject(nova.NovaClientPlugin, 'build_userdata', + return_value=server_userdata) + self.patchobject(self.fc.servers, 'create', return_value=return_server) - self.m.StubOutWithMock(self.fc.servers, 'create') - self.fc.servers.create( + scheduler.TaskRunner(instance.create)() + self.m_f_i.assert_called_with(image_id) + self.fc.servers.create.assert_called_once_with( image=1, flavor=3, key_name='test', name=utils.PhysName(stack_name, instance.name), security_groups=None, userdata=server_userdata, scheduler_hints=None, meta=None, nics=[{'port-id': '64d913c1-bcb1-42d2-8f0a-9593dbcaf251'}], availability_zone=None, - block_device_mapping=None).AndReturn( - return_server) - self.m.ReplayAll() - - 
scheduler.TaskRunner(instance.create)() + block_device_mapping=None) + nova.NovaClientPlugin.build_userdata.assert_called_once_with( + metadata, + instance.properties['UserData'], + 'ec2-user') + neutron.NeutronClientPlugin._create.assert_called_once_with() + nova.NovaClientPlugin._create.assert_called_once_with() + glance.GlanceClientPlugin.find_image_by_name_or_id.assert_called_with( + image_id) return instance def _create_test_instance_with_nic(self, return_server, name): @@ -275,44 +273,46 @@ self._mock_get_image_id_success(image_id, 1) self.stub_SubnetConstraint_validate() - self.m.StubOutWithMock(nic, 'client') - nic.client().AndReturn(FakeNeutron()) + self.patchobject(nic, 'client', return_value=FakeNeutron()) - self.m.StubOutWithMock(neutron.NeutronClientPlugin, '_create') - neutron.NeutronClientPlugin._create().MultipleTimes().AndReturn( - FakeNeutron()) + self.patchobject(neutron.NeutronClientPlugin, '_create', + return_value=FakeNeutron()) - self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') - nova.NovaClientPlugin._create().AndReturn(self.fc) + self.patchobject(nova.NovaClientPlugin, '_create', + return_value=self.fc) # need to resolve the template functions server_userdata = instance.client_plugin().build_userdata( metadata, instance.properties['UserData'], 'ec2-user') - self.m.StubOutWithMock(nova.NovaClientPlugin, 'build_userdata') - nova.NovaClientPlugin.build_userdata( - metadata, - instance.properties['UserData'], - 'ec2-user').AndReturn(server_userdata) + self.patchobject(nova.NovaClientPlugin, 'build_userdata', + return_value=server_userdata) + self.patchobject(self.fc.servers, 'create', return_value=return_server) - self.m.StubOutWithMock(self.fc.servers, 'create') - self.fc.servers.create( + # create network interface + scheduler.TaskRunner(nic.create)() + self.stack.resources["nic1"] = nic + + scheduler.TaskRunner(instance.create)() + + self.fc.servers.create.assert_called_once_with( image=1, flavor=3, key_name='test', 
name=utils.PhysName(stack_name, instance.name), security_groups=None, userdata=server_userdata, scheduler_hints=None, meta=None, nics=[{'port-id': '64d913c1-bcb1-42d2-8f0a-9593dbcaf251'}], availability_zone=None, - block_device_mapping=None).AndReturn( - return_server) - self.m.ReplayAll() - - # create network interface - scheduler.TaskRunner(nic.create)() - self.stack.resources["nic1"] = nic - - scheduler.TaskRunner(instance.create)() + block_device_mapping=None) + self.m_f_i.assert_called_with(image_id) + nova.NovaClientPlugin.build_userdata.assert_called_once_with( + metadata, + instance.properties['UserData'], + 'ec2-user') + neutron.NeutronClientPlugin._create.assert_called_once_with() + nova.NovaClientPlugin._create.assert_called_once_with() + glance.GlanceClientPlugin.find_image_by_name_or_id.assert_called_with( + image_id) return instance def test_instance_create_delete_with_SubnetId(self): @@ -331,5 +331,3 @@ self.assertEqual(expected_ip, instance.FnGetAtt('PrivateIp')) self.assertEqual(expected_ip, instance.FnGetAtt('PrivateDnsName')) self.assertEqual(expected_ip, instance.FnGetAtt('PublicDnsName')) - - self.m.VerifyAll() diff -Nru heat-11.0.0~b1/heat/tests/aws/test_instance.py heat-11.0.0~b2/heat/tests/aws/test_instance.py --- heat-11.0.0~b1/heat/tests/aws/test_instance.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/aws/test_instance.py 2018-06-07 22:12:28.000000000 +0000 @@ -15,7 +15,6 @@ import uuid import mock -import mox from neutronclient.v2_0 import client as neutronclient import six @@ -96,16 +95,14 @@ return (tmpl, stack) def _mock_get_image_id_success(self, imageId_input, imageId): - self.m.StubOutWithMock(glance.GlanceClientPlugin, - 'find_image_by_name_or_id') - glance.GlanceClientPlugin.find_image_by_name_or_id( - imageId_input).MultipleTimes().AndReturn(imageId) + self.patchobject(glance.GlanceClientPlugin, + 'find_image_by_name_or_id', + return_value=imageId) def _mock_get_image_id_fail(self, image_id, exp): - 
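mox's `MultipleTimes()` needs no direct counterpart: a mock configured with `return_value` already answers any number of calls. That is why the converted `_mock_get_image_id_success` just keeps a handle on the patched mock and verifies it afterwards with `assert_called_with`, which checks only the most recent call, rather than `assert_called_once_with`. Reduced to essentials:

```python
import unittest.mock as mock

find_image = mock.Mock(return_value=1)

# return_value serves every call, like MultipleTimes().AndReturn(1)
ids = [find_image('CentOS 5.2') for _ in range(3)]

assert ids == [1, 1, 1]
find_image.assert_called_with('CentOS 5.2')  # checks only the last call
assert find_image.call_count == 3
```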
self.m.StubOutWithMock(glance.GlanceClientPlugin, - 'find_image_by_name_or_id') - glance.GlanceClientPlugin.find_image_by_name_or_id( - image_id).AndRaise(exp) + self.patchobject(glance.GlanceClientPlugin, + 'find_image_by_name_or_id', + side_effect=exp) def _get_test_template(self, stack_name, image_id=None, volumes=False): (tmpl, stack) = self._setup_test_stack(stack_name) @@ -120,7 +117,6 @@ return tmpl, stack def _setup_test_instance(self, return_server, name, image_id=None, - stub_create=True, stub_complete=False, volumes=False): stack_name = '%s_s' % name tmpl, self.stack = self._get_test_template(stack_name, image_id, @@ -129,42 +125,39 @@ resource_defns = tmpl.resource_definitions(self.stack) instance = instances.Instance(name, resource_defns['WebServer'], self.stack) - bdm = {"vdb": "9ef5496e-7426-446a-bbc8-01f84d9c9972:snap::True"} self._mock_get_image_id_success(image_id or 'CentOS 5.2', 1) - self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') - nova.NovaClientPlugin._create().AndReturn(self.fc) + self.patchobject(nova.NovaClientPlugin, 'client', return_value=self.fc) self.stub_SnapshotConstraint_validate() - if stub_create: - self.m.StubOutWithMock(self.fc.servers, 'create') - self.fc.servers.create( - image=1, flavor=1, key_name='test', - name=utils.PhysName( - stack_name, - instance.name, - limit=instance.physical_resource_name_limit), - security_groups=None, - userdata=mox.IgnoreArg(), - scheduler_hints={'foo': ['spam', 'ham', 'baz'], 'bar': 'eggs'}, - meta=None, nics=None, availability_zone=None, - block_device_mapping=bdm).AndReturn( - return_server) - if stub_complete: - self.m.StubOutWithMock(self.fc.servers, 'get') - self.fc.servers.get(return_server.id - ).MultipleTimes().AndReturn(return_server) + self.mock_create = mock.Mock(return_value=return_server) + self.fc.servers.create = self.mock_create + return instance - def _create_test_instance(self, return_server, name, - stub_create=True): - instance = 
self._setup_test_instance(return_server, name, - stub_create=stub_create, - stub_complete=True) - self.m.ReplayAll() - scheduler.TaskRunner(instance.create)() - self.m.UnsetStubs() + def _create_test_instance(self, return_server, name): + instance = self._setup_test_instance(return_server, name) + bdm = {"vdb": "9ef5496e-7426-446a-bbc8-01f84d9c9972:snap::True"} + + mock_get = mock.Mock(return_value=return_server) + with mock.patch.object(self.fc.servers, 'get', mock_get): + scheduler.TaskRunner(instance.create)() + + self.mock_create.assert_called_once_with( + image=1, flavor=1, key_name='test', + name=utils.PhysName( + self.stack.name, + instance.name, + limit=instance.physical_resource_name_limit), + security_groups=None, + userdata=mock.ANY, + scheduler_hints={'foo': ['spam', 'ham', 'baz'], + 'bar': 'eggs'}, + meta=None, nics=None, availability_zone=None, + block_device_mapping=bdm) + mock_get.assert_called_with(return_server.id) + return instance def _stub_glance_for_update(self, image_id=None): @@ -180,18 +173,12 @@ expected_ip = return_server.networks['public'][0] expected_az = getattr(return_server, 'OS-EXT-AZ:availability_zone') - self.m.StubOutWithMock(self.fc.servers, 'get') - self.fc.servers.get(instance.resource_id).MultipleTimes( - ).AndReturn(return_server) - self.m.ReplayAll() self.assertEqual(expected_ip, instance.FnGetAtt('PublicIp')) self.assertEqual(expected_ip, instance.FnGetAtt('PrivateIp')) self.assertEqual(expected_ip, instance.FnGetAtt('PublicDnsName')) self.assertEqual(expected_ip, instance.FnGetAtt('PrivateDnsName')) self.assertEqual(expected_az, instance.FnGetAtt('AvailabilityZone')) - self.m.VerifyAll() - def test_instance_create_with_BlockDeviceMappings(self): return_server = self.fc.servers.list()[4] instance = self._create_test_instance(return_server, @@ -208,8 +195,6 @@ self.assertEqual(expected_ip, instance.FnGetAtt('PrivateDnsName')) self.assertEqual(expected_az, instance.FnGetAtt('AvailabilityZone')) - self.m.VerifyAll() - def 
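Where the old expectations used `mox.IgnoreArg()` for arguments that are awkward to reproduce exactly (the generated `userdata` blob), the rewritten assertions use `mock.ANY`, a sentinel that compares equal to anything. A sketch with an illustrative `create` mock (the argument names mirror the hunks above but the values are made up):

```python
import unittest.mock as mock

create = mock.Mock(return_value='server-1234')

server = create(image=1, flavor=1,
                userdata='#!/bin/bash\n# generated blob, hard to reproduce')

# mock.ANY matches any value: the post-hoc analogue of mox.IgnoreArg()
create.assert_called_once_with(image=1, flavor=1, userdata=mock.ANY)
assert server == 'server-1234'
```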
test_build_block_device_mapping(self): return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, @@ -250,10 +235,10 @@ self.stub_SnapshotConstraint_validate() self.stub_FlavorConstraint_validate() self.stub_KeypairConstraint_validate() - self.m.StubOutWithMock(cinder.CinderClientPlugin, 'get_volume') ex = exception.EntityNotFound(entity='Volume', name='1234') - cinder.CinderClientPlugin.get_volume('1234').AndRaise(ex) - self.m.ReplayAll() + mock_get_vol = self.patchobject(cinder.CinderClientPlugin, + 'get_volume', + side_effect=ex) exc = self.assertRaises(exception.StackValidationFailed, instance.validate) @@ -262,7 +247,7 @@ "(1234) could not be found.", six.text_type(exc)) - self.m.VerifyAll() + mock_get_vol.assert_called_once_with('1234') def test_validate_BlockDeviceMappings_VolumeSize_valid_str(self): stack_name = 'val_VolumeSize_valid' @@ -279,15 +264,10 @@ self._mock_get_image_id_success('F17-x86_64-gold', 1) self.stub_SnapshotConstraint_validate() self.stub_VolumeConstraint_validate() - self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') - nova.NovaClientPlugin._create().MultipleTimes().AndReturn(self.fc) - - self.m.ReplayAll() + self.patchobject(nova.NovaClientPlugin, 'client', return_value=self.fc) self.assertIsNone(instance.validate()) - self.m.VerifyAll() - def test_validate_BlockDeviceMappings_without_Ebs_property(self): stack_name = 'without_Ebs' tmpl, stack = self._setup_test_stack(stack_name) @@ -300,18 +280,13 @@ resource_defns['WebServer'], stack) self._mock_get_image_id_success('F17-x86_64-gold', 1) - self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') - nova.NovaClientPlugin._create().MultipleTimes().AndReturn(self.fc) - - self.m.ReplayAll() + self.patchobject(nova.NovaClientPlugin, 'client', return_value=self.fc) exc = self.assertRaises(exception.StackValidationFailed, instance.validate) self.assertIn("Ebs is missing, this is required", six.text_type(exc)) - self.m.VerifyAll() - def 
test_validate_BlockDeviceMappings_without_SnapshotId_property(self): stack_name = 'without_SnapshotId' tmpl, stack = self._setup_test_stack(stack_name) @@ -325,18 +300,13 @@ resource_defns['WebServer'], stack) self._mock_get_image_id_success('F17-x86_64-gold', 1) - self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') - nova.NovaClientPlugin._create().MultipleTimes().AndReturn(self.fc) - - self.m.ReplayAll() + self.patchobject(nova.NovaClientPlugin, 'client', return_value=self.fc) exc = self.assertRaises(exception.StackValidationFailed, instance.validate) self.assertIn("SnapshotId is missing, this is required", six.text_type(exc)) - self.m.VerifyAll() - def test_validate_BlockDeviceMappings_without_DeviceName_property(self): stack_name = 'without_DeviceName' tmpl, stack = self._setup_test_stack(stack_name) @@ -353,7 +323,6 @@ self.stub_ImageConstraint_validate() self.stub_KeypairConstraint_validate() self.stub_FlavorConstraint_validate() - self.m.ReplayAll() exc = self.assertRaises(exception.StackValidationFailed, instance.validate) @@ -363,15 +332,12 @@ 'Property DeviceName not assigned') self.assertIn(excepted_error, six.text_type(exc)) - self.m.VerifyAll() - def test_instance_create_with_image_id(self): return_server = self.fc.servers.list()[1] instance = self._setup_test_instance(return_server, 'in_create_imgid', image_id='1') - self.m.ReplayAll() scheduler.TaskRunner(instance.create)() # this makes sure the auto increment worked on instance creation @@ -386,19 +352,15 @@ self.assertEqual(expected_ip, instance.FnGetAtt('PrivateDnsName')) self.assertEqual(expected_az, instance.FnGetAtt('AvailabilityZone')) - self.m.VerifyAll() - def test_instance_create_resolve_az_attribute(self): return_server = self.fc.servers.list()[1] instance = self._setup_test_instance(return_server, 'create_resolve_az_attribute') - self.m.ReplayAll() scheduler.TaskRunner(instance.create)() expected_az = getattr(return_server, 'OS-EXT-AZ:availability_zone') actual_az = 
instance._availability_zone() self.assertEqual(expected_az, actual_az) - self.m.VerifyAll() def test_instance_create_resolve_az_attribute_nova_az_ext_disabled(self): return_server = self.fc.servers.list()[1] @@ -407,11 +369,9 @@ 'create_resolve_az_attribute') self.patchobject(self.fc.servers, 'get', return_value=return_server) - self.m.ReplayAll() scheduler.TaskRunner(instance.create)() self.assertIsNone(instance._availability_zone()) - self.m.VerifyAll() def test_instance_create_image_name_err(self): stack_name = 'test_instance_create_image_name_err_stack' @@ -430,7 +390,6 @@ self.stub_FlavorConstraint_validate() self.stub_KeypairConstraint_validate() self.stub_SnapshotConstraint_validate() - self.m.ReplayAll() create = scheduler.TaskRunner(instance.create) error = self.assertRaises(exception.ResourceFailure, create) @@ -440,8 +399,6 @@ "Error validating value 'Slackware': No image matching Slackware.", six.text_type(error)) - self.m.VerifyAll() - def test_instance_create_duplicate_image_name_err(self): stack_name = 'test_instance_create_image_name_err_stack' (tmpl, stack) = self._setup_test_stack(stack_name) @@ -462,7 +419,6 @@ self.stub_SnapshotConstraint_validate() self.stub_VolumeConstraint_validate() self.stub_FlavorConstraint_validate() - self.m.ReplayAll() create = scheduler.TaskRunner(instance.create) error = self.assertRaises(exception.ResourceFailure, create) @@ -473,8 +429,6 @@ "found for CentOS 5.2.", six.text_type(error)) - self.m.VerifyAll() - def test_instance_create_image_id_err(self): stack_name = 'test_instance_create_image_id_err_stack' (tmpl, stack) = self._setup_test_stack(stack_name) @@ -493,7 +447,6 @@ self.stub_FlavorConstraint_validate() self.stub_KeypairConstraint_validate() self.stub_SnapshotConstraint_validate() - self.m.ReplayAll() create = scheduler.TaskRunner(instance.create) error = self.assertRaises(exception.ResourceFailure, create) @@ -503,8 +456,6 @@ "Error validating value '1': No image matching 1.", six.text_type(error)) - 
self.m.VerifyAll() - def test_handle_check(self): (tmpl, stack) = self._setup_test_stack('test_instance_check_active') res_definitions = tmpl.resource_definitions(stack) @@ -539,16 +490,16 @@ 'test_instance_create') creator = progress.ServerCreateProgress(instance.resource_id) - self.m.StubOutWithMock(self.fc.servers, 'get') + self.fc.servers.get = mock.Mock(return_value=return_server) return_server.status = 'BOGUS' - self.fc.servers.get(instance.resource_id).AndReturn(return_server) - self.m.ReplayAll() + e = self.assertRaises(exception.ResourceUnknownStatus, instance.check_create_complete, (creator, None)) self.assertEqual('Instance is not active - Unknown status BOGUS ' 'due to "Unknown"', six.text_type(e)) - self.m.VerifyAll() + + self.fc.servers.get.assert_called_once_with(instance.resource_id) def test_instance_create_error_status(self): # checking via check_create_complete only so not to mock @@ -563,9 +514,7 @@ 'code': 500, 'created': '2013-08-14T03:12:10Z' } - self.m.StubOutWithMock(self.fc.servers, 'get') - self.fc.servers.get(instance.resource_id).AndReturn(return_server) - self.m.ReplayAll() + self.fc.servers.get = mock.Mock(return_value=return_server) e = self.assertRaises(exception.ResourceInError, instance.check_create_complete, @@ -574,7 +523,7 @@ 'Went to status ERROR due to "Message: NoValidHost, Code: 500"', six.text_type(e)) - self.m.VerifyAll() + self.fc.servers.get.assert_called_once_with(instance.resource_id) def test_instance_create_error_no_fault(self): # checking via check_create_complete only so not to mock @@ -585,9 +534,7 @@ creator = progress.ServerCreateProgress(instance.resource_id) return_server.status = 'ERROR' - self.m.StubOutWithMock(self.fc.servers, 'get') - self.fc.servers.get(instance.resource_id).AndReturn(return_server) - self.m.ReplayAll() + self.fc.servers.get = mock.Mock(return_value=return_server) e = self.assertRaises( exception.ResourceInError, instance.check_create_complete, @@ -596,7 +543,7 @@ 'Went to status ERROR 
due to "Message: Unknown, Code: Unknown"', six.text_type(e)) - self.m.VerifyAll() + self.fc.servers.get.assert_called_once_with(instance.resource_id) def test_instance_create_with_stack_scheduler_hints(self): return_server = self.fc.servers.list()[1] @@ -615,20 +562,22 @@ stack.add_resource(instance) self.assertIsNotNone(instance.uuid) - self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') - nova.NovaClientPlugin._create().AndReturn(self.fc) + self.patchobject(nova.NovaClientPlugin, 'client', return_value=self.fc) self.stub_SnapshotConstraint_validate() + self.fc.servers.create = mock.Mock(return_value=return_server) + + scheduler.TaskRunner(instance.create)() + self.assertGreater(instance.id, 0) - self.m.StubOutWithMock(self.fc.servers, 'create') shm = sh.SchedulerHintsMixin - self.fc.servers.create( + self.fc.servers.create.assert_called_once_with( image=1, flavor=1, key_name='test', name=utils.PhysName( stack_name, instance.name, limit=instance.physical_resource_name_limit), security_groups=None, - userdata=mox.IgnoreArg(), + userdata=mock.ANY, scheduler_hints={shm.HEAT_ROOT_STACK_ID: stack.root_stack_id(), shm.HEAT_STACK_ID: stack.id, shm.HEAT_STACK_NAME: stack.name, @@ -637,12 +586,7 @@ shm.HEAT_RESOURCE_UUID: instance.uuid, 'foo': ['spam', 'ham', 'baz'], 'bar': 'eggs'}, meta=None, nics=None, availability_zone=None, - block_device_mapping=bdm).AndReturn( - return_server) - self.m.ReplayAll() - scheduler.TaskRunner(instance.create)() - self.assertGreater(instance.id, 0) - self.m.VerifyAll() + block_device_mapping=bdm) def test_instance_validate(self): stack_name = 'test_instance_validate_stack' @@ -653,18 +597,14 @@ instance = instances.Instance('instance_create_image', resource_defns['WebServer'], stack) - self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') - nova.NovaClientPlugin._create().AndReturn(self.fc) + self.patchobject(nova.NovaClientPlugin, 'client', return_value=self.fc) self._mock_get_image_id_success('1', 1) 
self.stub_VolumeConstraint_validate() self.stub_SnapshotConstraint_validate() - self.m.ReplayAll() self.assertIsNone(instance.validate()) - self.m.VerifyAll() - def _test_instance_create_delete(self, vm_status='ACTIVE', vm_delete_status='NotFound'): return_server = self.fc.servers.list()[1] @@ -679,22 +619,20 @@ d1 = {'server': self.fc.client.get_servers_detail()[1]['servers'][0]} d1['server']['status'] = vm_status - self.m.StubOutWithMock(self.fc.client, 'get_servers_1234') - get = self.fc.client.get_servers_1234 - get().AndReturn((200, d1)) + mock_get = mock.Mock() + self.fc.client.get_servers_1234 = mock_get d2 = copy.deepcopy(d1) if vm_delete_status == 'DELETED': d2['server']['status'] = vm_delete_status - get().AndReturn((200, d2)) + mock_get.side_effect = [(200, d1), (200, d2)] else: - get().AndRaise(fakes_nova.fake_exception()) - - self.m.ReplayAll() + mock_get.side_effect = [(200, d1), fakes_nova.fake_exception()] scheduler.TaskRunner(instance.delete)() self.assertEqual((instance.DELETE, instance.COMPLETE), instance.state) - self.m.VerifyAll() + + self.assertEqual(2, mock_get.call_count) def test_instance_create_delete_notfound(self): self._test_instance_create_delete() @@ -711,14 +649,12 @@ # this makes sure the auto increment worked on instance creation self.assertGreater(instance.id, 0) - self.m.StubOutWithMock(self.fc.client, 'delete_servers_1234') - self.fc.client.delete_servers_1234().AndRaise( - fakes_nova.fake_exception()) - self.m.ReplayAll() + self.fc.client.delete_servers_1234 = mock.Mock( + side_effect=fakes_nova.fake_exception()) scheduler.TaskRunner(instance.delete)() self.assertEqual((instance.DELETE, instance.COMPLETE), instance.state) - self.m.VerifyAll() + self.fc.client.delete_servers_1234.assert_called_once() def test_instance_update_metadata(self): return_server = self.fc.servers.list()[1] @@ -755,38 +691,33 @@ self.patchobject(glance.GlanceClientPlugin, 'find_image_by_name_or_id', return_value=1) - 
self.m.StubOutWithMock(self.fc.servers, 'get') - - def status_resize(*args): - return_server.status = 'RESIZE' - - def status_verify_resize(*args): - return_server.status = 'VERIFY_RESIZE' - - def status_active(*args): - return_server.status = 'ACTIVE' - - self.fc.servers.get('1234').WithSideEffects( - status_active).AndReturn(return_server) - self.fc.servers.get('1234').WithSideEffects( - status_resize).AndReturn(return_server) - self.fc.servers.get('1234').WithSideEffects( - status_verify_resize).AndReturn(return_server) - self.fc.servers.get('1234').WithSideEffects( - status_verify_resize).AndReturn(return_server) - self.fc.servers.get('1234').WithSideEffects( - status_active).AndReturn(return_server) - - self.m.StubOutWithMock(self.fc.client, 'post_servers_1234_action') - self.fc.client.post_servers_1234_action( - body={'resize': {'flavorRef': 2}}).AndReturn((202, None)) - self.fc.client.post_servers_1234_action( - body={'confirmResize': None}).AndReturn((202, None)) - self.m.ReplayAll() + statuses = iter([ + 'ACTIVE', + 'RESIZE', + 'VERIFY_RESIZE', + 'VERIFY_RESIZE', + 'ACTIVE' + ]) + + def get_with_status(*args): + return_server.status = next(statuses) + return return_server + + self.fc.servers.get = mock.Mock(side_effect=get_with_status) + self.fc.client.post_servers_1234_action = mock.Mock( + return_value=(202, None)) scheduler.TaskRunner(instance.update, update_template)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) - self.m.VerifyAll() + + self.fc.servers.get.assert_called_with('1234') + self.assertEqual(5, self.fc.servers.get.call_count) + self.fc.client.post_servers_1234_action.assert_has_calls([ + mock.call(body={'resize': {'flavorRef': 2}}), + mock.call(body={'confirmResize': None}), + ]) + self.assertEqual(2, + self.fc.client.post_servers_1234_action.call_count) def test_instance_update_instance_type_failed(self): """Test case for raising exception due to resize call failed. 
@@ -811,24 +742,19 @@ update_props['InstanceType'] = 'm1.small' update_template = instance.t.freeze(properties=update_props) - self.m.StubOutWithMock(self.fc.servers, 'get') - - def status_resize(*args): - return_server.status = 'RESIZE' - - def status_error(*args): - return_server.status = 'ERROR' - - self.fc.servers.get('1234').AndReturn(return_server) - self.fc.servers.get('1234').WithSideEffects( - status_resize).AndReturn(return_server) - self.fc.servers.get('1234').WithSideEffects( - status_error).AndReturn(return_server) - - self.m.StubOutWithMock(self.fc.client, 'post_servers_1234_action') - self.fc.client.post_servers_1234_action( - body={'resize': {'flavorRef': 2}}).AndReturn((202, None)) - self.m.ReplayAll() + statuses = iter([ + return_server.status, + 'RESIZE', + 'ERROR', + ]) + + def get_with_status(*args): + return_server.status = next(statuses) + return return_server + + self.fc.servers.get = mock.Mock(side_effect=get_with_status) + self.fc.client.post_servers_1234_action = mock.Mock( + return_value=(202, None)) updater = scheduler.TaskRunner(instance.update, update_template) error = self.assertRaises(exception.ResourceFailure, updater) @@ -837,7 +763,11 @@ "Resizing to '2' failed, status 'ERROR'", six.text_type(error)) self.assertEqual((instance.UPDATE, instance.FAILED), instance.state) - self.m.VerifyAll() + + self.fc.servers.get.assert_called_with('1234') + self.assertEqual(3, self.fc.servers.get.call_count) + self.fc.client.post_servers_1234_action.assert_called_once_with( + body={'resize': {'flavorRef': 2}}) def create_fake_iface(self, port, net, ip): class fake_interface(object): @@ -879,19 +809,18 @@ after = instance.t.freeze(properties=update_props) before = instance.t.freeze(properties=before_props) - self.m.StubOutWithMock(self.fc.servers, 'get') - self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) - self.m.StubOutWithMock(return_server, 'interface_detach') - return_server.interface_detach( - 
'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46').AndReturn(None) - self.m.StubOutWithMock(return_server, 'interface_attach') - return_server.interface_attach('34b752ec-14de-416a-8722-9531015e04a5', - None, None).AndReturn(None) - self.m.ReplayAll() + self.fc.servers.get = mock.Mock(return_value=return_server) + return_server.interface_detach = mock.Mock(return_value=None) + return_server.interface_attach = mock.Mock(return_value=None) scheduler.TaskRunner(instance.update, after, before)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) - self.m.VerifyAll() + + self.fc.servers.get.assert_called_with('1234') + return_server.interface_detach.assert_called_once_with( + 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46') + return_server.interface_attach.assert_called_once_with( + '34b752ec-14de-416a-8722-9531015e04a5', None, None) def test_instance_update_network_interfaces_old_include_new(self): """Test case for updating NetworkInterfaces when old prop includes new. @@ -921,17 +850,16 @@ after = instance.t.freeze(properties=update_props) before = instance.t.freeze(properties=before_props) - self.m.StubOutWithMock(self.fc.servers, 'get') - self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) - self.m.StubOutWithMock(return_server, 'interface_detach') - return_server.interface_detach( - 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46').AndReturn(None) - - self.m.ReplayAll() + self.fc.servers.get = mock.Mock(return_value=return_server) + return_server.interface_detach = mock.Mock(return_value=None) scheduler.TaskRunner(instance.update, after, before)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) + self.fc.servers.get.assert_called_with('1234') + return_server.interface_detach.assert_called_once_with( + 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46') + def test_instance_update_network_interfaces_new_include_old(self): """Test case for updating NetworkInterfaces when new prop includes old. 
@@ -960,17 +888,16 @@ after = instance.t.freeze(properties=update_props) before = instance.t.freeze(properties=before_props) - self.m.StubOutWithMock(self.fc.servers, 'get') - self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) - self.m.StubOutWithMock(return_server, 'interface_attach') - return_server.interface_attach('d1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', - None, None).AndReturn(None) - - self.m.ReplayAll() + self.fc.servers.get = mock.Mock(return_value=return_server) + return_server.interface_attach = mock.Mock(return_value=None) scheduler.TaskRunner(instance.update, after, before)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) + self.fc.servers.get.assert_called_with('1234') + return_server.interface_attach.assert_called_once_with( + 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', None, None) + def test_instance_update_network_interfaces_new_old_all_different(self): """Tests updating NetworkInterfaces when new and old are different. @@ -999,22 +926,22 @@ after = instance.t.freeze(properties=update_props) before = instance.t.freeze(properties=before_props) - self.m.StubOutWithMock(self.fc.servers, 'get') - self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) - self.m.StubOutWithMock(return_server, 'interface_detach') - return_server.interface_detach( - 'ea29f957-cd35-4364-98fb-57ce9732c10d').AndReturn(None) - self.m.StubOutWithMock(return_server, 'interface_attach') - return_server.interface_attach('d1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', - None, None).InAnyOrder().AndReturn(None) - return_server.interface_attach('34b752ec-14de-416a-8722-9531015e04a5', - None, None).InAnyOrder().AndReturn(None) - - self.m.ReplayAll() + self.fc.servers.get = mock.Mock(return_value=return_server) + return_server.interface_detach = mock.Mock(return_value=None) + return_server.interface_attach = mock.Mock(return_value=None) scheduler.TaskRunner(instance.update, after, before)() self.assertEqual((instance.UPDATE, instance.COMPLETE), 
instance.state) + self.fc.servers.get.assert_called_with('1234') + return_server.interface_detach.assert_called_once_with( + 'ea29f957-cd35-4364-98fb-57ce9732c10d') + return_server.interface_attach.assert_has_calls([ + mock.call('d1e9c73c-04fe-4e9e-983c-d5ef94cd1a46', None, None), + mock.call('34b752ec-14de-416a-8722-9531015e04a5', None, None), + ], any_order=True) + self.assertEqual(2, return_server.interface_attach.call_count) + def test_instance_update_network_interfaces_no_old(self): """Test case for updating NetworkInterfaces when there's no old prop. @@ -1039,24 +966,23 @@ update_props['NetworkInterfaces'] = new_interfaces update_template = instance.t.freeze(properties=update_props) - self.m.StubOutWithMock(self.fc.servers, 'get') - self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) - self.m.StubOutWithMock(return_server, 'interface_list') - return_server.interface_list().AndReturn([iface]) - self.m.StubOutWithMock(return_server, 'interface_detach') - return_server.interface_detach( - 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46').AndReturn(None) - self.m.StubOutWithMock(return_server, 'interface_attach') - return_server.interface_attach('ea29f957-cd35-4364-98fb-57ce9732c10d', - None, None).AndReturn(None) - return_server.interface_attach('34b752ec-14de-416a-8722-9531015e04a5', - None, None).AndReturn(None) - - self.m.ReplayAll() + self.fc.servers.get = mock.Mock(return_value=return_server) + return_server.interface_list = mock.Mock(return_value=[iface]) + return_server.interface_detach = mock.Mock(return_value=None) + return_server.interface_attach = mock.Mock(return_value=None) scheduler.TaskRunner(instance.update, update_template)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) - self.m.VerifyAll() + + self.fc.servers.get.assert_called_with('1234') + return_server.interface_list.assert_called_once_with() + return_server.interface_detach.assert_called_once_with( + 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46') + 
return_server.interface_attach.assert_has_calls([ + mock.call('ea29f957-cd35-4364-98fb-57ce9732c10d', None, None), + mock.call('34b752ec-14de-416a-8722-9531015e04a5', None, None), + ]) + self.assertEqual(2, return_server.interface_attach.call_count) def test_instance_update_network_interfaces_no_old_empty_new(self): """Test case for updating NetworkInterfaces when no old, no new prop. @@ -1076,20 +1002,20 @@ update_props['NetworkInterfaces'] = [] update_template = instance.t.freeze(properties=update_props) - self.m.StubOutWithMock(self.fc.servers, 'get') - self.fc.servers.get('1234').MultipleTimes().AndReturn(return_server) - self.m.StubOutWithMock(return_server, 'interface_list') - return_server.interface_list().AndReturn([iface]) - self.m.StubOutWithMock(return_server, 'interface_detach') - return_server.interface_detach( - 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46').AndReturn(None) - self.m.StubOutWithMock(return_server, 'interface_attach') - return_server.interface_attach(None, None, None).AndReturn(None) - self.m.ReplayAll() + self.fc.servers.get = mock.Mock(return_value=return_server) + return_server.interface_list = mock.Mock(return_value=[iface]) + return_server.interface_detach = mock.Mock(return_value=None) + return_server.interface_attach = mock.Mock(return_value=None) scheduler.TaskRunner(instance.update, update_template)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) - self.m.VerifyAll() + + self.fc.servers.get.assert_called_with('1234') + return_server.interface_list.assert_called_once_with() + return_server.interface_detach.assert_called_once_with( + 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46') + return_server.interface_attach.assert_called_once_with(None, + None, None) def _test_instance_update_with_subnet(self, stack_name, new_interfaces=None, @@ -1120,32 +1046,30 @@ instance.reparse() - self.m.StubOutWithMock(self.fc.servers, 'get') - + self.fc.servers.get = mock.Mock(return_value=return_server) if need_update: - if multiple_get: 
- self.fc.servers.get('1234').MultipleTimes().AndReturn( - return_server) - else: - self.fc.servers.get('1234').AndReturn(return_server) - self.m.StubOutWithMock(return_server, 'interface_list') - return_server.interface_list().AndReturn([iface]) - self.m.StubOutWithMock(return_server, 'interface_detach') - return_server.interface_detach( - 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46').AndReturn(None) - self.m.StubOutWithMock(instance, '_build_nics') - instance._build_nics(new_interfaces, security_groups=None, - subnet_id=subnet_id).AndReturn(nics) - self.m.StubOutWithMock(return_server, 'interface_attach') - return_server.interface_attach( - 'ea29f957-cd35-4364-98fb-57ce9732c10d', - None, None).AndReturn(None) - - self.m.ReplayAll() + return_server.interface_list = mock.Mock(return_value=[iface]) + return_server.interface_detach = mock.Mock(return_value=None) + instance._build_nics = mock.Mock(return_value=nics) + return_server.interface_attach = mock.Mock(return_value=None) scheduler.TaskRunner(instance.update, after, before)() self.assertEqual((instance.UPDATE, instance.COMPLETE), instance.state) - self.m.VerifyAll() + + if need_update: + self.fc.servers.get.assert_called_with('1234') + if not multiple_get: + self.fc.servers.get.assert_called_once() + return_server.interface_list.assert_called_once_with() + return_server.interface_detach.assert_called_once_with( + 'd1e9c73c-04fe-4e9e-983c-d5ef94cd1a46') + instance._build_nics.assert_called_once_with(new_interfaces, + security_groups=None, + subnet_id=subnet_id) + return_server.interface_attach.assert_called_once_with( + 'ea29f957-cd35-4364-98fb-57ce9732c10d', None, None) + else: + self.fc.servers.get.assert_not_called() def test_instance_update_network_interfaces_empty_new_with_subnet(self): """Test update NetworkInterfaces to empty, and update with subnet.""" @@ -1175,7 +1099,6 @@ 'in_update2') self.stub_ImageConstraint_validate() - self.m.ReplayAll() update_props = self.instance_props.copy() update_props['ImageId'] 
= 'mustreplace' @@ -1183,28 +1106,23 @@ updater = scheduler.TaskRunner(instance.update, update_template) self.assertRaises(resource.UpdateReplace, updater) - self.m.VerifyAll() - def test_instance_status_build(self): return_server = self.fc.servers.list()[0] instance = self._setup_test_instance(return_server, 'in_sts_build') instance.resource_id = '1234' - self.m.StubOutWithMock(self.fc.servers, 'get') - # Bind fake get method which Instance.check_create_complete will call def status_active(*args): return_server.status = 'ACTIVE' + return return_server - self.fc.servers.get(instance.resource_id).WithSideEffects( - status_active).MultipleTimes().AndReturn(return_server) - - self.m.ReplayAll() + self.fc.servers.get = mock.Mock(side_effect=status_active) scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) - self.m.VerifyAll() + + self.fc.servers.get.assert_called_with(instance.resource_id) def _test_instance_status_suspend(self, name, state=('CREATE', 'COMPLETE')): @@ -1218,17 +1136,13 @@ d2 = copy.deepcopy(d1) d1['server']['status'] = 'ACTIVE' d2['server']['status'] = 'SUSPENDED' - self.m.StubOutWithMock(self.fc.client, 'get_servers_1234') - get = self.fc.client.get_servers_1234 - get().AndReturn((200, d1)) - get().AndReturn((200, d1)) - get().AndReturn((200, d2)) - self.m.ReplayAll() + self.fc.client.get_servers_1234 = mock.Mock( + side_effect=[(200, d1), (200, d1), (200, d2)]) scheduler.TaskRunner(instance.suspend)() self.assertEqual((instance.SUSPEND, instance.COMPLETE), instance.state) - self.m.VerifyAll() + self.assertEqual(3, self.fc.client.get_servers_1234.call_count) def test_instance_suspend_in_create_complete(self): self._test_instance_status_suspend( @@ -1256,19 +1170,15 @@ d2 = copy.deepcopy(d1) d1['server']['status'] = 'SUSPENDED' d2['server']['status'] = 'ACTIVE' - self.m.StubOutWithMock(self.fc.client, 'get_servers_1234') - get = self.fc.client.get_servers_1234 - get().AndReturn((200, d1)) - 
get().AndReturn((200, d1)) - get().AndReturn((200, d2)) - self.m.ReplayAll() + self.fc.client.get_servers_1234 = mock.Mock( + side_effect=[(200, d1), (200, d1), (200, d2)]) instance.state_set(instance.SUSPEND, instance.COMPLETE) scheduler.TaskRunner(instance.resume)() self.assertEqual((instance.RESUME, instance.COMPLETE), instance.state) - self.m.VerifyAll() + self.assertEqual(3, self.fc.client.get_servers_1234.call_count) def test_instance_resume_in_suspend_complete(self): self._test_instance_status_resume( @@ -1290,13 +1200,10 @@ 'in_resume_wait') instance.resource_id = '1234' - self.m.ReplayAll() - self.m.StubOutWithMock(self.fc.client, 'get_servers_1234') - get = self.fc.client.get_servers_1234 - get().AndRaise(fakes_nova.fake_exception(status_code=500, - message='VIKINGS!')) - self.m.ReplayAll() + self.fc.client.get_servers_1234 = mock.Mock( + side_effect=fakes_nova.fake_exception(status_code=500, + message='VIKINGS!')) instance.state_set(instance.SUSPEND, instance.COMPLETE) @@ -1304,7 +1211,7 @@ ex = self.assertRaises(exception.ResourceFailure, resumer) self.assertIn('VIKINGS!', ex.message) - self.m.VerifyAll() + self.fc.client.get_servers_1234.assert_called() def test_instance_status_build_spawning(self): self._test_instance_status_not_build_active('BUILD(SPAWNING)') @@ -1342,26 +1249,23 @@ 'in_sts_bld') instance.resource_id = '1234' - self.m.StubOutWithMock(self.fc.servers, 'get') + status_calls = [] # Bind fake get method which Instance.check_create_complete will call - def status_not_build(*args): - return_server.status = uncommon_status - - def status_active(*args): - return_server.status = 'ACTIVE' - - self.fc.servers.get(instance.resource_id).WithSideEffects( - status_not_build).AndReturn(return_server) - self.fc.servers.get(instance.resource_id).WithSideEffects( - status_active).MultipleTimes().AndReturn(return_server) + def get_with_status(*args): + if not status_calls: + return_server.status = uncommon_status + else: + return_server.status = 
'ACTIVE' + status_calls.append(None) + return return_server - self.m.ReplayAll() + self.fc.servers.get = mock.Mock(side_effect=get_with_status) scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) - self.m.VerifyAll() + self.assertGreaterEqual(self.fc.servers.get.call_count, 2) def test_build_nics(self): return_server = self.fc.servers.list()[1] @@ -1403,6 +1307,9 @@ Test the security groups defined in heat template can be associated to a new created port. """ + self.nclient = mock.Mock(spec=neutronclient.Client) + self.patchobject(neutronclient, 'Client', return_value=self.nclient) + return_server = self.fc.servers.list()[1] instance = self._create_test_instance(return_server, 'build_nics2') @@ -1452,43 +1359,33 @@ all_uuids=False, get_secgroup_raises=None): fake_groups_list, props = self._get_fake_properties(sg) - nclient = neutronclient.Client() - self.m.StubOutWithMock(instance, 'neutron') - instance.neutron().MultipleTimes().AndReturn(nclient) - if not all_uuids: # list_security_groups only gets called when none of the requested # groups look like UUIDs. 
- self.m.StubOutWithMock( - neutronclient.Client, 'list_security_groups') - neutronclient.Client.list_security_groups().AndReturn( - fake_groups_list) - self.m.StubOutWithMock(neutron.NeutronClientPlugin, - 'network_id_from_subnet_id') - neutron.NeutronClientPlugin.network_id_from_subnet_id( - 'fake_subnet_id').MultipleTimes().AndReturn('fake_network_id') - - if not get_secgroup_raises: - self.m.StubOutWithMock(neutronclient.Client, 'create_port') - neutronclient.Client.create_port( - {'port': props}).MultipleTimes().AndReturn( - {'port': {'id': 'fake_port_id'}}) - self.stub_keystoneclient() - self.m.ReplayAll() + self.nclient.list_security_groups = mock.Mock( + return_value=fake_groups_list) + self.patchobject(neutron.NeutronClientPlugin, + 'network_id_from_subnet_id', + return_value='fake_network_id') if get_secgroup_raises: self.assertRaises(get_secgroup_raises, instance._build_nics, None, security_groups=security_groups, subnet_id='fake_subnet_id') else: + self.nclient.create_port = mock.Mock( + return_value={'port': {'id': 'fake_port_id'}}) + self.stub_keystoneclient() + self.assertEqual( [{'port-id': 'fake_port_id'}], instance._build_nics(None, security_groups=security_groups, subnet_id='fake_subnet_id')) - self.m.VerifyAll() - self.m.UnsetStubs() + self.nclient.create_port.assert_called_with({'port': props}) + if not all_uuids: + self.nclient.list_security_groups.assert_called_once_with() def _get_fake_properties(self, sg='one'): fake_groups_list = { @@ -1554,27 +1451,27 @@ return_server = self.fc.servers.list()[1] instance = self._setup_test_instance(return_server, 'default_user') metadata = instance.metadata_get() - self.m.StubOutWithMock(nova.NovaClientPlugin, 'build_userdata') - nova.NovaClientPlugin.build_userdata( - metadata, 'wordpress', 'ec2-user') - self.m.ReplayAll() + self.patchobject(nova.NovaClientPlugin, 'build_userdata', + return_value=None) + scheduler.TaskRunner(instance.create)() - self.m.VerifyAll() + + 
nova.NovaClientPlugin.build_userdata.assert_called_once_with( + metadata, 'wordpress', 'ec2-user') def test_instance_create_with_volumes(self): return_server = self.fc.servers.list()[1] self.stub_VolumeConstraint_validate() instance = self._setup_test_instance(return_server, 'with_volumes', - stub_complete=True, volumes=True) + self.fc.servers.get = mock.Mock(return_value=return_server) attach_mock = self.patchobject(nova.NovaClientPlugin, 'attach_volume', side_effect=['cccc', 'dddd']) check_attach_mock = self.patchobject(cinder.CinderClientPlugin, 'check_attach_volume_complete', side_effect=[False, True, False, True]) - self.m.ReplayAll() scheduler.TaskRunner(instance.create)() self.assertEqual((instance.CREATE, instance.COMPLETE), instance.state) @@ -1588,4 +1485,16 @@ mock.call('cccc'), mock.call('dddd'), mock.call('dddd')]) - self.m.VerifyAll() + bdm = {"vdb": "9ef5496e-7426-446a-bbc8-01f84d9c9972:snap::True"} + self.mock_create.assert_called_once_with( + image=1, flavor=1, key_name='test', + name=utils.PhysName( + self.stack.name, + instance.name, + limit=instance.physical_resource_name_limit), + security_groups=None, + userdata=mock.ANY, + scheduler_hints={'foo': ['spam', 'ham', 'baz'], 'bar': 'eggs'}, + meta=None, nics=None, availability_zone=None, + block_device_mapping=bdm) + self.fc.servers.get.assert_called_with(return_server.id) diff -Nru heat-11.0.0~b1/heat/tests/aws/test_network_interface.py heat-11.0.0~b2/heat/tests/aws/test_network_interface.py --- heat-11.0.0~b1/heat/tests/aws/test_network_interface.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/aws/test_network_interface.py 2018-06-07 22:12:28.000000000 +0000 @@ -40,13 +40,13 @@ def setUp(self): super(NetworkInterfaceTest, self).setUp() self.ctx = utils.dummy_context() - self.m.StubOutWithMock(neutronclient.Client, 'show_subnet') - self.m.StubOutWithMock(neutronclient.Client, 'create_port') - self.m.StubOutWithMock(neutronclient.Client, 'delete_port') - 
self.m.StubOutWithMock(neutronclient.Client, 'update_port') + self.m_ss = self.patchobject(neutronclient.Client, 'show_subnet') + self.m_cp = self.patchobject(neutronclient.Client, 'create_port') + self.m_dp = self.patchobject(neutronclient.Client, 'delete_port') + self.m_up = self.patchobject(neutronclient.Client, 'update_port') def mock_show_subnet(self): - neutronclient.Client.show_subnet('ssss').AndReturn({ + self.m_ss.return_value = { 'subnet': { 'name': 'my_subnet', 'network_id': 'nnnn', @@ -58,18 +58,18 @@ 'cidr': '10.0.0.0/24', 'id': 'ssss', 'enable_dhcp': False, - }}) + }} def mock_create_network_interface(self, stack_name='my_stack', resource_name='my_nic', security_groups=None): self.nic_name = utils.PhysName(stack_name, resource_name) - port = {'network_id': 'nnnn', - 'fixed_ips': [{ - 'subnet_id': u'ssss' - }], - 'name': self.nic_name, - 'admin_state_up': True} + self.port = {'network_id': 'nnnn', + 'fixed_ips': [{ + 'subnet_id': u'ssss' + }], + 'name': self.nic_name, + 'admin_state_up': True} port_info = { 'port': { @@ -92,20 +92,12 @@ } if security_groups is not None: - port['security_groups'] = security_groups + self.port['security_groups'] = security_groups port_info['security_groups'] = security_groups else: port_info['security_groups'] = ['default'] - neutronclient.Client.create_port({'port': port}).AndReturn(port_info) - - def mock_update_network_interface(self, update_props, port_id='pppp'): - neutronclient.Client.update_port( - port_id, - {'port': update_props}).AndReturn(None) - - def mock_delete_network_interface(self, port_id='pppp'): - neutronclient.Client.delete_port(port_id).AndReturn(None) + self.m_cp.return_value = port_info def test_network_interface_create_update_delete(self): my_stack = utils.parse_stack(test_template, @@ -120,10 +112,6 @@ update_sg_ids = ['0389f747-7785-4757-b7bb-2ab07e4b09c3'] update_props['security_groups'] = update_sg_ids - self.mock_update_network_interface(update_props) - self.mock_delete_network_interface() - 
- self.m.ReplayAll() # create the nic without GroupSet self.assertIsNone(nic_rsrc.validate()) scheduler.TaskRunner(nic_rsrc.create)() @@ -143,4 +131,7 @@ scheduler.TaskRunner(nic_rsrc.delete)() self.assertEqual((nic_rsrc.DELETE, nic_rsrc.COMPLETE), nic_rsrc.state) - self.m.VerifyAll() + self.m_ss.assert_called_once_with('ssss') + self.m_cp.assert_called_once_with({'port': self.port}) + self.m_up.assert_called_once_with('pppp', {'port': update_props}) + self.m_dp.assert_called_once_with('pppp') diff -Nru heat-11.0.0~b1/heat/tests/aws/test_security_group.py heat-11.0.0~b2/heat/tests/aws/test_security_group.py --- heat-11.0.0~b1/heat/tests/aws/test_security_group.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/aws/test_security_group.py 2018-06-07 22:12:28.000000000 +0000 @@ -14,6 +14,7 @@ import collections import copy +import mock from neutronclient.common import exceptions as neutron_exc from neutronclient.v2_0 import client as neutronclient @@ -66,16 +67,21 @@ def setUp(self): super(SecurityGroupTest, self).setUp() - self.m.StubOutWithMock(neutronclient.Client, 'create_security_group') - self.m.StubOutWithMock( + self.m_csg = self.patchobject(neutronclient.Client, + 'create_security_group') + self.m_csgr = self.patchobject( neutronclient.Client, 'create_security_group_rule') - self.m.StubOutWithMock(neutronclient.Client, 'show_security_group') - self.m.StubOutWithMock( + self.m_ssg = self.patchobject(neutronclient.Client, + 'show_security_group') + self.m_dsgr = self.patchobject( neutronclient.Client, 'delete_security_group_rule') - self.m.StubOutWithMock(neutronclient.Client, 'delete_security_group') - self.m.StubOutWithMock(neutronclient.Client, 'update_security_group') + self.m_dsg = self.patchobject( + neutronclient.Client, 'delete_security_group') + self.m_usg = self.patchobject( + neutronclient.Client, 'update_security_group') self.patchobject(resource.Resource, 'is_using_neutron', return_value=True) + self.sg_name = 
utils.PhysName('test_stack', 'the_sg') def mock_no_neutron(self): self.patchobject(resource.Resource, 'is_using_neutron', @@ -100,17 +106,73 @@ self.assertEqual(ref_id, rsrc.FnGetRefId()) self.assertEqual(metadata, dict(rsrc.metadata_get())) - def stubout_neutron_create_security_group(self): - sg_name = utils.PhysName('test_stack', 'the_sg') - neutronclient.Client.create_security_group({ + def validate_create_security_group_rule_calls(self): + expected = [ + mock.call( + {'security_group_rule': { + 'security_group_id': 'aaaa', 'protocol': 'tcp', + 'port_range_max': 22, 'direction': 'ingress', + 'remote_group_id': None, 'ethertype': 'IPv4', + 'remote_ip_prefix': '0.0.0.0/0', 'port_range_min': 22}} + ), + mock.call( + {'security_group_rule': { + 'security_group_id': 'aaaa', 'protocol': 'tcp', + 'port_range_max': 80, 'direction': 'ingress', + 'remote_group_id': None, 'ethertype': 'IPv4', + 'remote_ip_prefix': '0.0.0.0/0', 'port_range_min': 80}} + ), + mock.call( + {'security_group_rule': { + 'security_group_id': 'aaaa', 'protocol': 'tcp', + 'port_range_max': None, 'direction': 'ingress', + 'remote_group_id': 'wwww', 'ethertype': 'IPv4', + 'remote_ip_prefix': None, 'port_range_min': None}} + ), + mock.call( + {'security_group_rule': { + 'security_group_id': 'aaaa', 'protocol': 'tcp', + 'port_range_max': 22, 'direction': 'egress', + 'remote_group_id': None, 'ethertype': 'IPv4', + 'remote_ip_prefix': '10.0.1.0/24', 'port_range_min': 22}} + ), + mock.call( + {'security_group_rule': { + 'security_group_id': 'aaaa', 'protocol': None, + 'port_range_max': None, 'direction': 'egress', + 'remote_group_id': 'xxxx', 'ethertype': 'IPv4', + 'remote_ip_prefix': None, 'port_range_min': None}}) + ] + + self.assertEqual(expected, self.m_csgr.call_args_list) + + def validate_delete_security_group_rule(self): + self.assertEqual( + [mock.call('aaaa-1'), + mock.call('aaaa-2'), + mock.call('bbbb'), + mock.call('cccc'), + mock.call('dddd'), + mock.call('eeee'), + mock.call('ffff'), + ], + 
         self.m_dsgr.call_args_list)
+
+    def validate_stubout_neutron_create_security_group(self):
+        self.m_csg.assert_called_once_with({
             'security_group': {
-                'name': sg_name,
+                'name': self.sg_name,
                 'description': 'HTTP and SSH access'
             }
-        }).AndReturn({
+        })
+        self.validate_delete_security_group_rule()
+        self.validate_create_security_group_rule_calls()
+
+    def stubout_neutron_create_security_group(self, mock_csgr=True):
+        self.m_csg.return_value = {
             'security_group': {
                 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
-                'name': sg_name,
+                'name': self.sg_name,
                 'description': 'HTTP and SSH access',
                 'security_group_rules': [{
                     "direction": "egress",
@@ -137,136 +199,77 @@
                 }],
                 'id': 'aaaa'
             }
-        })
-
-        neutronclient.Client.delete_security_group_rule('aaaa-1').AndReturn(
-            None)
-        neutronclient.Client.delete_security_group_rule('aaaa-2').AndReturn(
-            None)
-
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '0.0.0.0/0',
-                'port_range_min': 22,
-                'ethertype': 'IPv4',
-                'port_range_max': 22,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).AndReturn({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '0.0.0.0/0',
-                'port_range_min': 22,
-                'ethertype': 'IPv4',
-                'port_range_max': 22,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa',
-                'id': 'bbbb'
-            }
-        })
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '0.0.0.0/0',
-                'port_range_min': 80,
-                'ethertype': 'IPv4',
-                'port_range_max': 80,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).AndReturn({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '0.0.0.0/0',
-                'port_range_min': 80,
-                'ethertype': 'IPv4',
-                'port_range_max': 80,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa',
-                'id': 'cccc'
-            }
-        })
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': 'wwww',
-                'remote_ip_prefix': None,
-                'port_range_min': None,
-                'ethertype': 'IPv4',
-                'port_range_max': None,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).AndReturn({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': 'wwww',
-                'remote_ip_prefix': None,
-                'port_range_min': None,
-                'ethertype': 'IPv4',
-                'port_range_max': None,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa',
-                'id': 'dddd'
-            }
-        })
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'egress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '10.0.1.0/24',
-                'port_range_min': 22,
-                'ethertype': 'IPv4',
-                'port_range_max': 22,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).AndReturn({
-            'security_group_rule': {
-                'direction': 'egress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '10.0.1.0/24',
-                'port_range_min': 22,
-                'ethertype': 'IPv4',
-                'port_range_max': 22,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa',
-                'id': 'eeee'
-            }
-        })
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'egress',
-                'remote_group_id': 'xxxx',
-                'remote_ip_prefix': None,
-                'port_range_min': None,
-                'ethertype': 'IPv4',
-                'port_range_max': None,
-                'protocol': None,
-                'security_group_id': 'aaaa'
-            }
-        }).AndReturn({
-            'security_group_rule': {
-                'direction': 'egress',
-                'remote_group_id': 'xxxx',
-                'remote_ip_prefix': None,
-                'port_range_min': None,
-                'ethertype': 'IPv4',
-                'port_range_max': None,
-                'protocol': None,
-                'security_group_id': 'aaaa',
-                'id': 'ffff'
-            }
-        })
+        }
+        if mock_csgr:
+            self.m_csgr.side_effect = [
+                {
+                    'security_group_rule': {
+                        'direction': 'ingress',
+                        'remote_group_id': None,
+                        'remote_ip_prefix': '0.0.0.0/0',
+                        'port_range_min': 22,
+                        'ethertype': 'IPv4',
+                        'port_range_max': 22,
+                        'protocol': 'tcp',
+                        'security_group_id': 'aaaa',
+                        'id': 'bbbb'
+                    }},
+                {
+                    'security_group_rule': {
+                        'direction': 'ingress',
+                        'remote_group_id': None,
+                        'remote_ip_prefix': '0.0.0.0/0',
+                        'port_range_min': 80,
+                        'ethertype': 'IPv4',
+                        'port_range_max': 80,
+                        'protocol': 'tcp',
+                        'security_group_id': 'aaaa',
+                        'id': 'cccc'
+                    }
+                },
+                {
+                    'security_group_rule': {
+                        'direction': 'ingress',
+                        'remote_group_id': 'wwww',
+                        'remote_ip_prefix': None,
+                        'port_range_min': None,
+                        'ethertype': 'IPv4',
+                        'port_range_max': None,
+                        'protocol': 'tcp',
+                        'security_group_id': 'aaaa',
+                        'id': 'dddd'
+                    }
+                },
+                {
+                    'security_group_rule': {
+                        'direction': 'egress',
+                        'remote_group_id': None,
+                        'remote_ip_prefix': '10.0.1.0/24',
+                        'port_range_min': 22,
+                        'ethertype': 'IPv4',
+                        'port_range_max': 22,
+                        'protocol': 'tcp',
+                        'security_group_id': 'aaaa',
+                        'id': 'eeee'
+                    }
+                },
+                {
+                    'security_group_rule': {
+                        'direction': 'egress',
+                        'remote_group_id': 'xxxx',
+                        'remote_ip_prefix': None,
+                        'port_range_min': None,
+                        'ethertype': 'IPv4',
+                        'port_range_max': None,
+                        'protocol': None,
+                        'security_group_id': 'aaaa',
+                        'id': 'ffff'
+                    }
+                }
+            ]
 
     def stubout_neutron_get_security_group(self):
-        neutronclient.Client.show_security_group('aaaa').AndReturn({
+        self.m_ssg.return_value = {
             'security_group': {
                 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                 'name': 'sc1',
@@ -327,25 +330,14 @@
                 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                 'port_range_min': None
             }],
-            'id': 'aaaa'}})
-
-    def stubout_neutron_delete_security_group_rules(self):
-        self.stubout_neutron_get_security_group()
-        neutronclient.Client.delete_security_group_rule('bbbb').AndReturn(None)
-        neutronclient.Client.delete_security_group_rule('cccc').AndReturn(None)
-        neutronclient.Client.delete_security_group_rule('dddd').AndReturn(None)
-        neutronclient.Client.delete_security_group_rule('eeee').AndReturn(None)
-        neutronclient.Client.delete_security_group_rule('ffff').AndReturn(None)
+            'id': 'aaaa'}}
 
     def test_security_group_neutron(self):
         # create script
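The hunks above replace mox's chained `AndReturn` expectations with a list assigned to `mock`'s `side_effect`, which yields one element per successive call. A minimal standalone sketch of that pattern (the client object and payloads here are illustrative, not Heat's fixtures):

```python
from unittest import mock

# A list assigned to side_effect makes the mock return each element
# on successive calls, mirroring mox's sequential AndReturn expectations.
client = mock.Mock()
client.create_security_group_rule.side_effect = [
    {'security_group_rule': {'id': 'bbbb'}},
    {'security_group_rule': {'id': 'cccc'}},
]

first = client.create_security_group_rule({'direction': 'ingress'})
second = client.create_security_group_rule({'direction': 'egress'})
print(first['security_group_rule']['id'])   # bbbb
print(second['security_group_rule']['id'])  # cccc
```

Unlike mox, nothing is verified at replay time; the test asserts afterwards via `call_args_list` or `call_count`, which is why the converted tests add explicit `assert_called_once_with` checks.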
         self.stubout_neutron_create_security_group()
-        # delete script
-        self.stubout_neutron_delete_security_group_rules()
-        neutronclient.Client.delete_security_group('aaaa').AndReturn(None)
+        self.stubout_neutron_get_security_group()
 
-        self.m.ReplayAll()
         stack = self.create_stack(self.test_template_neutron)
 
         sg = stack['the_sg']
@@ -353,95 +345,28 @@
         self.assertResourceState(sg, 'aaaa')
 
         stack.delete()
-        self.m.VerifyAll()
+        self.validate_stubout_neutron_create_security_group()
+        self.m_ssg.assert_called_once_with('aaaa')
+        self.m_dsg.assert_called_once_with('aaaa')
 
     def test_security_group_neutron_exception(self):
         # create script
-        sg_name = utils.PhysName('test_stack', 'the_sg')
-        neutronclient.Client.create_security_group({
-            'security_group': {
-                'name': sg_name,
-                'description': 'HTTP and SSH access'
-            }
-        }).AndReturn({
+        self.m_csg.return_value = {
             'security_group': {
                 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
-                'name': sg_name,
+                'name': self.sg_name,
                 'description': 'HTTP and SSH access',
                 'security_group_rules': [],
                 'id': 'aaaa'
             }
-        })
-
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '0.0.0.0/0',
-                'port_range_min': 22,
-                'ethertype': 'IPv4',
-                'port_range_max': 22,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).AndRaise(
-            neutron_exc.Conflict())
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '0.0.0.0/0',
-                'port_range_min': 80,
-                'ethertype': 'IPv4',
-                'port_range_max': 80,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).AndRaise(
-            neutron_exc.Conflict())
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': 'wwww',
-                'remote_ip_prefix': None,
-                'port_range_min': None,
-                'ethertype': 'IPv4',
-                'port_range_max': None,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).AndRaise(
-            neutron_exc.Conflict())
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'egress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '10.0.1.0/24',
-                'port_range_min': 22,
-                'ethertype': 'IPv4',
-                'port_range_max': 22,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).AndRaise(
-            neutron_exc.Conflict())
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'egress',
-                'remote_group_id': 'xxxx',
-                'remote_ip_prefix': None,
-                'port_range_min': None,
-                'ethertype': 'IPv4',
-                'port_range_max': None,
-                'protocol': None,
-                'security_group_id': 'aaaa'
-            }
-        }).AndRaise(
-            neutron_exc.Conflict())
+        }
+        self.m_csgr.side_effect = neutron_exc.Conflict
 
         # delete script
-        neutronclient.Client.show_security_group('aaaa').AndReturn({
-            'security_group': {
+        self.m_dsgr.side_effect = neutron_exc.NeutronClientException(
+            status_code=404)
+        self.m_ssg.side_effect = [
+            {'security_group': {
                 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                 'name': 'sc1',
                 'description': '',
@@ -501,24 +426,9 @@
                 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88',
                 'port_range_min': None
             }],
-            'id': 'aaaa'}})
-        neutronclient.Client.delete_security_group_rule('bbbb').AndRaise(
-            neutron_exc.NeutronClientException(status_code=404))
-        neutronclient.Client.delete_security_group_rule('cccc').AndRaise(
-            neutron_exc.NeutronClientException(status_code=404))
-        neutronclient.Client.delete_security_group_rule('dddd').AndRaise(
-            neutron_exc.NeutronClientException(status_code=404))
-        neutronclient.Client.delete_security_group_rule('eeee').AndRaise(
-            neutron_exc.NeutronClientException(status_code=404))
-        neutronclient.Client.delete_security_group_rule('ffff').AndRaise(
-            neutron_exc.NeutronClientException(status_code=404))
-        neutronclient.Client.delete_security_group('aaaa').AndRaise(
-            neutron_exc.NeutronClientException(status_code=404))
-
-        neutronclient.Client.show_security_group('aaaa').AndRaise(
-            neutron_exc.NeutronClientException(status_code=404))
+            'id': 'aaaa'}},
+            neutron_exc.NeutronClientException(status_code=404)]
 
-        self.m.ReplayAll()
         stack = self.create_stack(self.test_template_neutron)
 
         sg = stack['the_sg']
@@ -531,99 +441,66 @@
         sg.resource_id = 'aaaa'
 
         stack.delete()
-        self.m.VerifyAll()
+        self.m_csg.assert_called_once_with({
+            'security_group': {
+                'name': self.sg_name,
+                'description': 'HTTP and SSH access'
+            }
+        })
+        self.validate_create_security_group_rule_calls()
+        self.assertEqual(
+            [mock.call('aaaa'), mock.call('aaaa')],
+            self.m_ssg.call_args_list)
+        self.assertEqual(
+            [mock.call('bbbb'), mock.call('cccc'), mock.call('dddd'),
+             mock.call('eeee'), mock.call('ffff')], self.m_dsgr.call_args_list)
 
     def test_security_group_neutron_update(self):
         # create script
-        self.stubout_neutron_create_security_group()
+        self.stubout_neutron_create_security_group(mock_csgr=False)
 
         # update script
         # delete old not needed rules
         self.stubout_neutron_get_security_group()
-        neutronclient.Client.delete_security_group_rule(
-            'bbbb').InAnyOrder().AndReturn(None)
-        neutronclient.Client.delete_security_group_rule(
-            'dddd').InAnyOrder().AndReturn(None)
-        neutronclient.Client.delete_security_group_rule(
-            'eeee').InAnyOrder().AndReturn(None)
 
         # create missing rules
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '0.0.0.0/0',
-                'port_range_min': 443,
-                'ethertype': 'IPv4',
-                'port_range_max': 443,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).InAnyOrder().AndReturn({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '0.0.0.0/0',
-                'port_range_min': 443,
-                'ethertype': 'IPv4',
-                'port_range_max': 443,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa',
-                'id': 'bbbb'
-            }
-        })
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': 'zzzz',
-                'remote_ip_prefix': None,
-                'port_range_min': None,
-                'ethertype': 'IPv4',
-                'port_range_max': None,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).InAnyOrder().AndReturn({
-            'security_group_rule': {
-                'direction': 'ingress',
-                'remote_group_id': 'zzzz',
-                'remote_ip_prefix': None,
-                'port_range_min': None,
-                'ethertype': 'IPv4',
-                'port_range_max': None,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa',
-                'id': 'dddd'
-            }
-        })
-
-        neutronclient.Client.create_security_group_rule({
-            'security_group_rule': {
-                'direction': 'egress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '0.0.0.0/0',
-                'port_range_min': 22,
-                'ethertype': 'IPv4',
-                'port_range_max': 22,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa'
-            }
-        }).InAnyOrder().AndReturn({
-            'security_group_rule': {
-                'direction': 'egress',
-                'remote_group_id': None,
-                'remote_ip_prefix': '0.0.0.0/0',
-                'port_range_min': 22,
-                'ethertype': 'IPv4',
-                'port_range_max': 22,
-                'protocol': 'tcp',
-                'security_group_id': 'aaaa',
-                'id': 'eeee'
+        self.m_csgr.side_effect = [
+            {
+                'security_group_rule': {
+                    'direction': 'ingress',
+                    'remote_group_id': None,
+                    'remote_ip_prefix': '0.0.0.0/0',
+                    'port_range_min': 443,
+                    'ethertype': 'IPv4',
+                    'port_range_max': 443,
+                    'protocol': 'tcp',
+                    'security_group_id': 'aaaa',
+                    'id': 'bbbb'}
+            }, {
+                'security_group_rule': {
+                    'direction': 'ingress',
+                    'remote_group_id': 'zzzz',
+                    'remote_ip_prefix': None,
+                    'port_range_min': None,
+                    'ethertype': 'IPv4',
+                    'port_range_max': None,
+                    'protocol': 'tcp',
+                    'security_group_id': 'aaaa',
+                    'id': 'dddd'}
+            }, {
+                'security_group_rule': {
+                    'direction': 'egress',
+                    'remote_group_id': None,
+                    'remote_ip_prefix': '0.0.0.0/0',
+                    'port_range_min': 22,
+                    'ethertype': 'IPv4',
+                    'port_range_max': 22,
+                    'protocol': 'tcp',
+                    'security_group_id': 'aaaa',
+                    'id': 'eeee'
+                }
             }
-        })
-
-        self.m.ReplayAll()
-
+        ]
         stack = self.create_stack(self.test_template_neutron)
 
         sg = stack['the_sg']
         self.assertResourceState(sg, 'aaaa')
@@ -653,8 +530,11 @@
         scheduler.TaskRunner(sg.update, after)()
 
         self.assertEqual((sg.UPDATE, sg.COMPLETE), sg.state)
-
-        self.m.VerifyAll()
+        self.m_dsgr.assert_has_calls(
+            [mock.call('aaaa-1'), mock.call('aaaa-2'), mock.call('eeee'),
+             mock.call('dddd'), mock.call('bbbb')],
+            any_order=True)
+        self.m_ssg.assert_called_once_with('aaaa')
 
     def test_security_group_neutron_update_with_empty_rules(self):
         # create script
@@ -663,12 +543,6 @@
         # update script
         # delete old not needed rules
         self.stubout_neutron_get_security_group()
-        neutronclient.Client.delete_security_group_rule(
-            'eeee').InAnyOrder().AndReturn(None)
-        neutronclient.Client.delete_security_group_rule(
-            'ffff').InAnyOrder().AndReturn(None)
-
-        self.m.ReplayAll()
 
         stack = self.create_stack(self.test_template_neutron)
 
         sg = stack['the_sg']
@@ -681,5 +555,8 @@
         scheduler.TaskRunner(sg.update, after)()
 
         self.assertEqual((sg.UPDATE, sg.COMPLETE), sg.state)
-
-        self.m.VerifyAll()
+        self.m_ssg.assert_called_once_with('aaaa')
+        self.m_dsgr.assert_has_calls(
+            [mock.call('aaaa-1'), mock.call('aaaa-2'), mock.call('eeee'),
+             mock.call('ffff')],
+            any_order=True)
diff -Nru heat-11.0.0~b1/heat/tests/aws/test_waitcondition.py heat-11.0.0~b2/heat/tests/aws/test_waitcondition.py
--- heat-11.0.0~b1/heat/tests/aws/test_waitcondition.py	2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/tests/aws/test_waitcondition.py	2018-06-07 22:12:28.000000000 +0000
@@ -100,25 +100,19 @@
         stack.store()
 
         if stub:
-            id = identifier.ResourceIdentifier('test_tenant', stack.name,
-                                               stack.id, '', 'WaitHandle')
-            self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'identifier')
-            aws_wch.WaitConditionHandle.identifier(
-            ).MultipleTimes().AndReturn(id)
-
+            res_id = identifier.ResourceIdentifier('test_tenant', stack.name,
+                                                   stack.id, '', 'WaitHandle')
+            self.m_id = self.patchobject(
+                aws_wch.WaitConditionHandle, 'identifier',
+                return_value=res_id)
         if stub_status:
-            self.m.StubOutWithMock(aws_wch.WaitConditionHandle,
-                                   'get_status')
+            self.m_gs = self.patchobject(aws_wch.WaitConditionHandle,
+                                         'get_status')
 
         return stack
 
     def test_post_success_to_handle(self):
         self.stack = self.create_stack()
-        aws_wch.WaitConditionHandle.get_status().AndReturn([])
-        aws_wch.WaitConditionHandle.get_status().AndReturn([])
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
-
-        self.m.ReplayAll()
+        self.m_gs.side_effect = [[], [], ['SUCCESS']]
 
         self.stack.create()
 
@@ -129,15 +123,13 @@
         r = resource_objects.Resource.get_by_name_and_stack(
             self.stack.context, 'WaitHandle', self.stack.id)
         self.assertEqual('WaitHandle', r.name)
-        self.m.VerifyAll()
+        self.assertEqual(3, self.m_gs.call_count)
+
+        self.assertEqual(1, self.m_id.call_count)
 
     def test_post_failure_to_handle(self):
         self.stack = self.create_stack()
-        aws_wch.WaitConditionHandle.get_status().AndReturn([])
-        aws_wch.WaitConditionHandle.get_status().AndReturn([])
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['FAILURE'])
-
-        self.m.ReplayAll()
+        self.m_gs.side_effect = [[], [], ['FAILURE']]
 
         self.stack.create()
 
@@ -149,19 +141,17 @@
         r = resource_objects.Resource.get_by_name_and_stack(
             self.stack.context, 'WaitHandle', self.stack.id)
         self.assertEqual('WaitHandle', r.name)
-        self.m.VerifyAll()
+        self.assertEqual(3, self.m_gs.call_count)
+        self.assertEqual(1, self.m_id.call_count)
 
     def test_post_success_to_handle_count(self):
         self.stack = self.create_stack(template=test_template_wc_count)
-        aws_wch.WaitConditionHandle.get_status().AndReturn([])
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS',
-                                                            'SUCCESS'])
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS',
-                                                            'SUCCESS',
-                                                            'SUCCESS'])
-
-        self.m.ReplayAll()
+        self.m_gs.side_effect = [
+            [],
+            ['SUCCESS'],
+            ['SUCCESS', 'SUCCESS'],
+            ['SUCCESS', 'SUCCESS', 'SUCCESS']
+        ]
 
         self.stack.create()
 
@@ -172,16 +162,12 @@
         r = resource_objects.Resource.get_by_name_and_stack(
             self.stack.context, 'WaitHandle', self.stack.id)
         self.assertEqual('WaitHandle', r.name)
-        self.m.VerifyAll()
+        self.assertEqual(4, self.m_gs.call_count)
+        self.assertEqual(1, self.m_id.call_count)
 
     def test_post_failure_to_handle_count(self):
         self.stack = self.create_stack(template=test_template_wc_count)
-        aws_wch.WaitConditionHandle.get_status().AndReturn([])
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS',
-                                                            'FAILURE'])
-
-        self.m.ReplayAll()
+        self.m_gs.side_effect = [[], ['SUCCESS'], ['SUCCESS', 'FAILURE']]
 
         self.stack.create()
 
@@ -193,14 +179,15 @@
         r = resource_objects.Resource.get_by_name_and_stack(
             self.stack.context, 'WaitHandle', self.stack.id)
         self.assertEqual('WaitHandle', r.name)
-        self.m.VerifyAll()
+        self.assertEqual(3, self.m_gs.call_count)
+        self.assertEqual(1, self.m_id.call_count)
 
     def test_timeout(self):
         self.stack = self.create_stack()
 
         # Avoid the stack create exercising the timeout code at the same time
-        self.m.StubOutWithMock(self.stack, 'timeout_secs')
-        self.stack.timeout_secs().MultipleTimes().AndReturn(None)
+        m_ts = self.patchobject(self.stack, 'timeout_secs', return_value=None)
+        self.m_gs.return_value = []
 
         now = timeutils.utcnow()
         periods = [0, 0.001, 0.1, 4.1, 5.1]
@@ -209,11 +196,6 @@
         timeutils.set_time_override(fake_clock)
         self.addCleanup(timeutils.clear_time_override)
 
-        aws_wch.WaitConditionHandle.get_status(
-        ).MultipleTimes().AndReturn([])
-
-        self.m.ReplayAll()
-
         self.stack.create()
 
         rsrc = self.stack['WaitForTheHandle']
@@ -221,14 +203,14 @@
         self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state)
         reason = rsrc.status_reason
         self.assertTrue(reason.startswith('WaitConditionTimeout:'))
-
-        self.m.VerifyAll()
+        self.assertEqual(1, m_ts.call_count)
+        self.assertEqual(1, self.m_gs.call_count)
+        self.assertEqual(1, self.m_id.call_count)
 
     def test_FnGetAtt(self):
         self.stack = self.create_stack()
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
+        self.m_gs.return_value = ['SUCCESS']
 
-        self.m.ReplayAll()
         self.stack.create()
 
         rsrc = self.stack['WaitForTheHandle']
@@ -254,7 +236,8 @@
         self.assertIsInstance(wc_att, six.string_types)
         self.assertEqual({"123": "foo", "456": "dog"}, json.loads(wc_att))
         self.assertEqual('status:SUCCESS reason:cat', ret)
-        self.m.VerifyAll()
+        self.assertEqual(1, self.m_gs.call_count)
+        self.assertEqual(1, self.m_id.call_count)
 
     def test_FnGetRefId_resource_name(self):
         self.stack = self.create_stack()
@@ -286,8 +269,6 @@
         self.assertEqual('http://convg_signed_url', rsrc.FnGetRefId())
 
     def test_validate_handle_url_bad_stackid(self):
-        self.m.ReplayAll()
-
         stack_id = 'STACK_HUBSID_1234'
         t = json.loads(test_template_waitcondition)
         badhandle = ("http://server.test:8000/v1/waitcondition/" +
@@ -298,16 +279,11 @@
         t['Resources']['WaitForTheHandle']['Properties']['Handle'] = badhandle
         self.stack = self.create_stack(template=json.dumps(t), stub=False,
                                        stack_id=stack_id)
-        self.m.ReplayAll()
 
         rsrc = self.stack['WaitForTheHandle']
         self.assertRaises(ValueError, rsrc.handle_create)
 
-        self.m.VerifyAll()
-
     def test_validate_handle_url_bad_stackname(self):
-        self.m.ReplayAll()
-
         stack_id = 'STACKABCD1234'
         t = json.loads(test_template_waitcondition)
         badhandle = ("http://server.test:8000/v1/waitcondition/" +
@@ -321,11 +297,7 @@
         rsrc = self.stack['WaitForTheHandle']
         self.assertRaises(ValueError, rsrc.handle_create)
 
-        self.m.VerifyAll()
-
     def test_validate_handle_url_bad_tenant(self):
-        self.m.ReplayAll()
-
         stack_id = 'STACKABCD1234'
         t = json.loads(test_template_waitcondition)
         badhandle = ("http://server.test:8000/v1/waitcondition/" +
@@ -339,11 +311,7 @@
         rsrc = self.stack['WaitForTheHandle']
         self.assertRaises(ValueError, rsrc.handle_create)
 
-        self.m.VerifyAll()
-
    def test_validate_handle_url_bad_resource(self):
-        self.m.ReplayAll()
-
         stack_id = 'STACK_HUBR_1234'
         t = json.loads(test_template_waitcondition)
         badhandle = ("http://server.test:8000/v1/waitcondition/" +
@@ -357,10 +325,7 @@
         rsrc = self.stack['WaitForTheHandle']
         self.assertRaises(ValueError, rsrc.handle_create)
 
-        self.m.VerifyAll()
-
     def test_validate_handle_url_bad_resource_type(self):
-        self.m.ReplayAll()
         stack_id = 'STACKABCD1234'
         t = json.loads(test_template_waitcondition)
         badhandle = ("http://server.test:8000/v1/waitcondition/" +
@@ -374,8 +339,6 @@
         rsrc = self.stack['WaitForTheHandle']
         self.assertRaises(ValueError, rsrc.handle_create)
 
-        self.m.VerifyAll()
-
 
 class WaitConditionHandleTest(common.HeatTestCase):
     def create_stack(self, stack_name=None, stack_id=None):
@@ -395,15 +358,18 @@
         self.stack_id = stack.id
 
         # Stub waitcondition status so all goes CREATE_COMPLETE
-        self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'get_status')
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
-
-        id = identifier.ResourceIdentifier('test_tenant', stack.name,
-                                           stack.id, '', 'WaitHandle')
-        self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'identifier')
-        aws_wch.WaitConditionHandle.identifier().MultipleTimes().AndReturn(id)
-        self.m.ReplayAll()
-        stack.create()
+        with mock.patch.object(aws_wch.WaitConditionHandle,
+                               'get_status') as m_gs:
+            m_gs.return_value = ['SUCCESS']
+            res_id = identifier.ResourceIdentifier('test_tenant', stack.name,
+                                                   stack.id, '', 'WaitHandle')
+            with mock.patch.object(aws_wch.WaitConditionHandle,
+                                   'identifier') as m_id:
+                m_id.return_value = res_id
+                stack.create()
+        rsrc = stack['WaitHandle']
+        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
+        self.assertEqual(rsrc.resource_id, rsrc.data().get('user_id'))
 
         return stack
 
     def test_handle(self):
@@ -413,11 +379,9 @@
         self.stack = self.create_stack(stack_id=stack_id,
                                        stack_name=stack_name)
 
-        self.m.StubOutWithMock(self.stack.clients.client_plugin('heat'),
-                               'get_heat_cfn_url')
-        self.stack.clients.client_plugin('heat').get_heat_cfn_url().AndReturn(
-            'http://server.test:8000/v1')
-        self.m.ReplayAll()
+        m_get_cfn_url = mock.Mock(return_value='http://server.test:8000/v1')
+        self.stack.clients.client_plugin(
+            'heat').get_heat_cfn_url = m_get_cfn_url
 
         rsrc = self.stack['WaitHandle']
         self.assertEqual(rsrc.resource_id, rsrc.data().get('user_id'))
         # clear the url
@@ -445,14 +409,11 @@
         actual_params = parse.parse_qs(actual_url.split("?", 1)[1])
         self.assertEqual(expected_params, actual_params)
         self.assertTrue(connection_url.startswith(connection_url))
-
-        self.m.VerifyAll()
+        self.assertEqual(1, m_get_cfn_url.call_count)
 
     def test_handle_signal(self):
         self.stack = self.create_stack()
         rsrc = self.stack['WaitHandle']
-        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-        self.assertEqual(rsrc.resource_id, rsrc.data().get('user_id'))
         test_metadata = {'Data': 'foo', 'Reason': 'bar',
                          'Status': 'SUCCESS', 'UniqueId': '123'}
         rsrc.handle_signal(test_metadata)
@@ -460,13 +421,10 @@
                                              u'Reason': u'bar',
                                              u'Status': u'SUCCESS'}}
         self.assertEqual(handle_metadata, rsrc.metadata_get())
-        self.m.VerifyAll()
 
     def test_handle_signal_invalid(self):
         self.stack = self.create_stack()
         rsrc = self.stack['WaitHandle']
-        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-        self.assertEqual(rsrc.resource_id, rsrc.data().get('user_id'))
         # handle_signal should raise a ValueError if the metadata
         # is missing any of the expected keys
         err_metadata = {'Data': 'foo', 'Status': 'SUCCESS', 'UniqueId': '123'}
@@ -503,17 +461,10 @@
                         'Status': 'FAIL',
                         'UniqueId': '123'}
         self.assertRaises(ValueError, rsrc.handle_signal, err_metadata)
-        self.m.VerifyAll()
 
     def test_get_status(self):
         self.stack = self.create_stack()
         rsrc = self.stack['WaitHandle']
-        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-        self.assertEqual(rsrc.resource_id, rsrc.data().get('user_id'))
-        # UnsetStubs, don't want get_status stubbed anymore..
-        self.m.VerifyAll()
-        self.m.UnsetStubs()
-
         self.assertEqual([], rsrc.get_status())
 
         test_metadata = {'Data': 'foo',
                          'Reason': 'bar',
@@ -551,7 +502,6 @@
         ret = rsrc.handle_signal(test_metadata)
         self.assertEqual(['hoo'], rsrc.get_status_reason('FAILURE'))
         self.assertEqual('status:FAILURE reason:hoo', ret)
-        self.m.VerifyAll()
 
 
 class WaitConditionUpdateTest(common.HeatTestCase):
@@ -569,14 +519,19 @@
         with utils.UUIDStub(self.stack_id):
             stack.store()
 
-        self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'get_status')
-        aws_wch.WaitConditionHandle.get_status().AndReturn([])
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS'])
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS',
-                                                            'SUCCESS'])
-        aws_wch.WaitConditionHandle.get_status().AndReturn(['SUCCESS',
-                                                            'SUCCESS',
-                                                            'SUCCESS'])
+        with mock.patch.object(aws_wch.WaitConditionHandle,
+                               'get_status') as m_gs:
+            m_gs.side_effect = [
+                [],
+                ['SUCCESS'],
+                ['SUCCESS', 'SUCCESS'],
+                ['SUCCESS', 'SUCCESS', 'SUCCESS']
+            ]
+            stack.create()
+            self.assertEqual(4, m_gs.call_count)
+
+        rsrc = stack['WaitForTheHandle']
+        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
 
         return stack
 
@@ -588,14 +543,7 @@
     def test_update(self):
         self.stack = self.create_stack()
-        self.m.ReplayAll()
-
-        self.stack.create()
-        rsrc = self.stack['WaitForTheHandle']
-        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
-        self.m.VerifyAll()
-        self.m.UnsetStubs()
 
         wait_condition_handle = self.stack['WaitHandle']
         test_metadata = {'Data': 'foo', 'Reason': 'bar',
@@ -615,14 +563,7 @@
     def test_update_restored_from_db(self):
         self.stack = self.create_stack()
-        self.m.ReplayAll()
-
-        self.stack.create()
-        rsrc = self.stack['WaitForTheHandle']
-        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
-        self.m.VerifyAll()
-        self.m.UnsetStubs()
 
         handle_stack = self.stack
         wait_condition_handle = handle_stack['WaitHandle']
@@ -659,14 +600,7 @@
     def test_update_timeout(self):
         self.stack = self.create_stack()
-        self.m.ReplayAll()
-
-        self.stack.create()
-        rsrc = self.stack['WaitForTheHandle']
-        self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state)
-
-        self.m.VerifyAll()
-        self.m.UnsetStubs()
 
         now = timeutils.utcnow()
         fake_clock = [now + datetime.timedelta(0, t)
@@ -674,10 +608,8 @@
         timeutils.set_time_override(fake_clock)
         self.addCleanup(timeutils.clear_time_override)
 
-        self.m.StubOutWithMock(aws_wch.WaitConditionHandle, 'get_status')
-        aws_wch.WaitConditionHandle.get_status().MultipleTimes().AndReturn([])
-
-        self.m.ReplayAll()
+        m_gs = self.patchobject(
+            aws_wch.WaitConditionHandle, 'get_status', return_value=[])
 
         uprops = copy.copy(rsrc.properties.data)
         uprops['Count'] = '5'
@@ -691,4 +623,4 @@
         self.assertEqual("WaitConditionTimeout: resources.WaitForTheHandle: "
                          "0 of 5 received", six.text_type(ex))
         self.assertEqual(5, rsrc.properties['Count'])
-        self.m.VerifyAll()
+        self.assertEqual(2, m_gs.call_count)
diff -Nru heat-11.0.0~b1/heat/tests/clients/test_designate_client.py heat-11.0.0~b2/heat/tests/clients/test_designate_client.py
--- heat-11.0.0~b1/heat/tests/clients/test_designate_client.py	2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/tests/clients/test_designate_client.py	2018-06-07 22:12:28.000000000 +0000
@@ -285,6 +285,24 @@
 
     @mock.patch.object(client.DesignateClientPlugin, 'client')
     @mock.patch('designateclient.v1.records.Record')
+    def test_record_delete_domain_not_found(self, mock_record,
+                                            client_designate):
+        self._client.records.delete.return_value = None
+        self.client_plugin.get_domain_id.side_effect = (
+            heat_exception.EntityNotFound)
+        client_designate.return_value = self._client
+
+        record = dict(
+            id=self.sample_uuid,
+            domain=self.sample_domain_id
+        )
+
+        self.client_plugin.record_delete(**record)
+
+        self.assertFalse(self._client.records.delete.called)
+
+    @mock.patch.object(client.DesignateClientPlugin, 'client')
+    @mock.patch('designateclient.v1.records.Record')
     def test_record_show(self, mock_record, client_designate):
         self._client.records.get.return_value = None
         client_designate.return_value = self._client
diff -Nru heat-11.0.0~b1/heat/tests/convergence/framework/fake_resource.py heat-11.0.0~b2/heat/tests/convergence/framework/fake_resource.py
--- heat-11.0.0~b1/heat/tests/convergence/framework/fake_resource.py	2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/tests/convergence/framework/fake_resource.py	2018-06-07 22:12:28.000000000 +0000
@@ -24,9 +24,9 @@
 
 class TestResource(resource.Resource):
     PROPERTIES = (
-        A, C, CA, rA, rB
+        A, B, C, CA, rA, rB
     ) = (
-        'a', 'c', 'ca', '!a', '!b'
+        'a', 'b', 'c', 'ca', '!a', '!b'
     )
 
     ATTRIBUTES = (
@@ -42,6 +42,12 @@
             default='a',
             update_allowed=True
         ),
+        B: properties.Schema(
+            properties.Schema.STRING,
+            _('Fake property b.'),
+            default='b',
+            update_allowed=True
+        ),
         C: properties.Schema(
             properties.Schema.STRING,
             _('Fake property c.'),
diff -Nru heat-11.0.0~b1/heat/tests/convergence/scenarios/update_user_replace_rollback_update.py heat-11.0.0~b2/heat/tests/convergence/scenarios/update_user_replace_rollback_update.py
--- heat-11.0.0~b1/heat/tests/convergence/scenarios/update_user_replace_rollback_update.py	1970-01-01 00:00:00.000000000 +0000
+++ heat-11.0.0~b2/heat/tests/convergence/scenarios/update_user_replace_rollback_update.py	2018-06-07 22:12:28.000000000 +0000
@@ -0,0 +1,54 @@
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+example_template = Template({
+    'A': RsrcDef({'a': 'initial'}, []),
+    'B': RsrcDef({}, []),
+    'C': RsrcDef({'!a': GetAtt('A', 'a'), 'b': 'val1'}, []),
+    'D': RsrcDef({'c': GetRes('C')}, []),
+    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
+})
+engine.create_stack('foo', example_template)
+engine.noop(5)
+engine.call(verify, example_template)
+
+example_template_updated = Template({
+    'A': RsrcDef({'a': 'updated'}, []),
+    'B': RsrcDef({}, []),
+    'C': RsrcDef({'!a': GetAtt('A', 'a'), 'b': 'val1'}, []),
+    'D': RsrcDef({'c': GetRes('C')}, []),
+    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
+})
+engine.update_stack('foo', example_template_updated)
+engine.noop(3)
+
+engine.rollback_stack('foo')
+engine.noop(12)
+engine.call(verify, example_template)
+
+example_template_final = Template({
+    'A': RsrcDef({'a': 'initial'}, []),
+    'B': RsrcDef({}, []),
+    'C': RsrcDef({'!a': GetAtt('A', 'a'), 'b': 'val2'}, []),
+    'D': RsrcDef({'c': GetRes('C')}, []),
+    'E': RsrcDef({'ca': GetAtt('C', '!a')}, []),
+})
+
+engine.update_stack('foo', example_template_final)
+engine.noop(3)
+engine.call(verify, example_template_final)
+engine.noop(4)
+
+engine.delete_stack('foo')
+engine.noop(6)
+engine.call(verify, Template({}))
diff -Nru heat-11.0.0~b1/heat/tests/db/test_sqlalchemy_api.py heat-11.0.0~b2/heat/tests/db/test_sqlalchemy_api.py
--- heat-11.0.0~b1/heat/tests/db/test_sqlalchemy_api.py	2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/tests/db/test_sqlalchemy_api.py	2018-06-07 22:12:28.000000000 +0000
@@ -1383,6 +1383,7 @@
             'parameters': {},
             'user_creds_id': user_creds['id'],
             'owner_id': None,
+            'backup': False,
             'timeout': '60',
             'disable_rollback': 0,
             'current_traversal': 'dummy-uuid',
@@ -2516,6 +2517,19 @@
         self.assertRaises(exception.NotFound, db_api.resource_get,
                           self.ctx, resource.id)
 
+    @mock.patch.object(time, 'sleep')
+    def test_resource_purge_deleted_by_stack_retry_on_deadlock(self, m_sleep):
+        val = {'name': 'res1', 'action': rsrc.Resource.DELETE,
+               'status': rsrc.Resource.COMPLETE}
+        create_resource(self.ctx, self.stack, **val)
+
+        with mock.patch('sqlalchemy.orm.query.Query.delete',
+                        side_effect=db_exception.DBDeadlock) as mock_delete:
+            self.assertRaises(db_exception.DBDeadlock,
+                              db_api.resource_purge_deleted,
+                              self.ctx, self.stack.id)
+        self.assertEqual(4, mock_delete.call_count)
+
     def test_engine_get_all_locked_by_stack(self):
         values = [
             {'name': 'res1', 'action': rsrc.Resource.DELETE,
diff -Nru heat-11.0.0~b1/heat/tests/engine/service/test_service_engine.py heat-11.0.0~b2/heat/tests/engine/service/test_service_engine.py
--- heat-11.0.0~b1/heat/tests/engine/service/test_service_engine.py	2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/tests/engine/service/test_service_engine.py	2018-06-07 22:12:28.000000000 +0000
@@ -299,6 +299,7 @@
         cfg.CONF.set_default('periodic_interval', 60)
         self.patchobject(self.eng, 'service_manage_cleanup')
         self.patchobject(self.eng, 'reset_stack_status')
+        self.patchobject(self.eng, 'service_manage_report')
         self.eng.start()
 
         # Add dummy thread group to test thread_group_mgr.stop() is executed?
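The `test_sqlalchemy_api.py` hunk above verifies the new retry-on-deadlock behaviour by patching `Query.delete` with `side_effect=db_exception.DBDeadlock` and asserting four attempts. A self-contained sketch of the same testing pattern, using a hypothetical `Deadlock` exception and `retry` helper rather than Heat's `resource_purge_deleted`:

```python
from unittest import mock


class Deadlock(Exception):
    """Stand-in for an oslo.db DBDeadlock-style error."""


def retry(fn, attempts=4):
    # Call fn up to `attempts` times, re-raising on the final failure.
    for i in range(attempts):
        try:
            return fn()
        except Deadlock:
            if i == attempts - 1:
                raise


# side_effect set to an exception class makes every call raise it,
# so the retry loop exhausts all attempts.
m = mock.Mock(side_effect=Deadlock)
try:
    retry(m)
except Deadlock:
    pass
print(m.call_count)  # 4
```

Asserting `call_count` after the failure is what proves the retries actually happened, mirroring `self.assertEqual(4, mock_delete.call_count)` in the diff.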
diff -Nru heat-11.0.0~b1/heat/tests/engine/service/test_software_config.py heat-11.0.0~b2/heat/tests/engine/service/test_software_config.py
--- heat-11.0.0~b1/heat/tests/engine/service/test_software_config.py	2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/tests/engine/service/test_software_config.py	2018-06-07 22:12:28.000000000 +0000
@@ -211,8 +211,7 @@
         t = template_format.parse(tools.wp_template)
         stack = utils.parse_stack(t, stack_name=stack_name)
 
-        tools.setup_mocks(self.m, stack)
-        self.m.ReplayAll()
+        fc = tools.setup_mocks_with_mock(self, stack)
         stack.store()
         stack.create()
         server = stack['WebServer']
@@ -241,14 +240,14 @@
             self.ctx, server_id)
         self.assertEqual(deployment['config_id'],
                          rsrcs[0].rsrc_metadata.get('deployments')[0]['id'])
+        tools.validate_setup_mocks_with_mock(stack, fc)
 
     def test_metadata_software_deployments(self):
         stack_name = 'test_metadata_software_deployments'
         t = template_format.parse(tools.wp_template)
         stack = utils.parse_stack(t, stack_name=stack_name)
 
-        tools.setup_mocks(self.m, stack)
-        self.m.ReplayAll()
+        fc = tools.setup_mocks_with_mock(self, stack)
         stack.store()
         stack.create()
         server = stack['WebServer']
@@ -314,6 +313,7 @@
         metadata = self.engine.metadata_software_deployments(
             self.ctx, server_id=server_id)
         self.assertEqual(2, len(metadata))
+        tools.validate_setup_mocks_with_mock(stack, fc)
 
     def test_show_software_deployment(self):
         deployment_id = str(uuid.uuid4())
diff -Nru heat-11.0.0~b1/heat/tests/engine/service/test_stack_resources.py heat-11.0.0~b2/heat/tests/engine/service/test_stack_resources.py
--- heat-11.0.0~b1/heat/tests/engine/service/test_stack_resources.py	2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/tests/engine/service/test_stack_resources.py	2018-06-07 22:12:28.000000000 +0000
@@ -295,10 +295,11 @@
 
     @mock.patch.object(stack.Stack, 'load')
     def test_stack_resources_list_deleted_stack(self, mock_load):
-        stk = tools.setup_stack('resource_list_deleted_stack', self.ctx)
+        stk = tools.setup_stack_with_mock(self, 'resource_list_deleted_stack',
+                                          self.ctx)
         stack_id = stk.identifier()
         mock_load.return_value = stk
-        tools.clean_up_stack(stk)
+        tools.clean_up_stack(self, stk)
         resources = self.eng.list_stack_resources(self.ctx, stack_id)
         self.assertEqual(0, len(resources))
diff -Nru heat-11.0.0~b1/heat/tests/engine/test_check_resource.py heat-11.0.0~b2/heat/tests/engine/test_check_resource.py
--- heat-11.0.0~b1/heat/tests/engine/test_check_resource.py	2018-04-19 19:36:19.000000000 +0000
+++ heat-11.0.0~b2/heat/tests/engine/test_check_resource.py	2018-06-07 22:12:28.000000000 +0000
@@ -15,6 +15,7 @@
 
 import eventlet
 import mock
+import uuid
 
 from oslo_config import cfg
 
@@ -90,7 +91,7 @@
                                          self.is_update, None)
         mock_cru.assert_called_once_with(self.resource,
                                          self.resource.stack.t.id,
-                                         {}, self.worker.engine_id,
+                                         set(), self.worker.engine_id,
                                          mock.ANY, mock.ANY)
         self.assertFalse(mock_crc.called)
 
@@ -121,18 +122,18 @@
                                          self.is_update, None)
         mock_cru.assert_called_once_with(self.resource,
                                          self.resource.stack.t.id,
-                                         {}, self.worker.engine_id,
+                                         set(), self.worker.engine_id,
                                          mock.ANY, mock.ANY)
         self.assertTrue(mock_mr.called)
         self.assertFalse(mock_crc.called)
         self.assertFalse(mock_pcr.called)
         self.assertFalse(mock_csc.called)
 
-    @mock.patch.object(check_resource.CheckResource, '_try_steal_engine_lock')
+    @mock.patch.object(check_resource.CheckResource,
+                       '_stale_resource_needs_retry')
     @mock.patch.object(stack.Stack, 'time_remaining')
-    @mock.patch.object(resource.Resource, 'state_set')
     def test_is_update_traversal_raise_update_inprogress(
-            self, mock_ss, tr, mock_tsl, mock_cru, mock_crc, mock_pcr,
+            self, tr, mock_tsl, mock_cru, mock_crc, mock_pcr,
             mock_csc):
         mock_cru.side_effect = exception.UpdateInProgress
         self.worker.engine_id = 'some-thing-else'
@@ -143,45 +144,66 @@
                                          self.is_update, None)
         mock_cru.assert_called_once_with(self.resource,
                                          self.resource.stack.t.id,
-                                         {}, self.worker.engine_id,
+                                         set(), self.worker.engine_id,
                                          mock.ANY, mock.ANY)
-        mock_ss.assert_called_once_with(self.resource.action,
-                                        resource.Resource.FAILED,
-                                        mock.ANY)
         self.assertFalse(mock_crc.called)
         self.assertFalse(mock_pcr.called)
         self.assertFalse(mock_csc.called)
 
+    @mock.patch.object(resource.Resource, 'state_set')
+    def test_stale_resource_retry(
+            self, mock_ss, mock_cru, mock_crc, mock_pcr, mock_csc):
+        current_template_id = self.resource.current_template_id
+        res = self.cr._stale_resource_needs_retry(self.ctx,
+                                                  self.resource,
+                                                  current_template_id)
+        self.assertTrue(res)
+        mock_ss.assert_not_called()
+
+    @mock.patch.object(resource.Resource, 'state_set')
     def test_try_steal_lock_alive(
-            self, mock_cru, mock_crc, mock_pcr, mock_csc):
-        res = self.cr._try_steal_engine_lock(self.ctx,
-                                             self.resource.id)
+            self, mock_ss, mock_cru, mock_crc, mock_pcr, mock_csc):
+        res = self.cr._stale_resource_needs_retry(self.ctx,
+                                                  self.resource,
+                                                  str(uuid.uuid4()))
         self.assertFalse(res)
+        mock_ss.assert_not_called()
 
     @mock.patch.object(check_resource.listener_client, 'EngineListenerClient')
     @mock.patch.object(check_resource.resource_objects.Resource, 'get_obj')
+    @mock.patch.object(resource.Resource, 'state_set')
     def test_try_steal_lock_dead(
-            self, mock_get, mock_elc, mock_cru, mock_crc, mock_pcr,
+            self, mock_ss, mock_get, mock_elc, mock_cru, mock_crc, mock_pcr,
            mock_csc):
         fake_res = mock.Mock()
         fake_res.engine_id = 'some-thing-else'
         mock_get.return_value = fake_res
         mock_elc.return_value.is_alive.return_value = False
-        res = self.cr._try_steal_engine_lock(self.ctx,
-                                             self.resource.id)
+        current_template_id = self.resource.current_template_id
+        res = self.cr._stale_resource_needs_retry(self.ctx,
+                                                  self.resource,
+                                                  current_template_id)
        self.assertTrue(res)
+        mock_ss.assert_called_once_with(self.resource.action,
+                                        resource.Resource.FAILED,
+                                        mock.ANY)
 
     @mock.patch.object(check_resource.listener_client, 'EngineListenerClient')
     @mock.patch.object(check_resource.resource_objects.Resource, 'get_obj')
+    @mock.patch.object(resource.Resource,
'state_set') def test_try_steal_lock_not_dead( - self, mock_get, mock_elc, mock_cru, mock_crc, mock_pcr, + self, mock_ss, mock_get, mock_elc, mock_cru, mock_crc, mock_pcr, mock_csc): fake_res = mock.Mock() fake_res.engine_id = self.worker.engine_id mock_get.return_value = fake_res mock_elc.return_value.is_alive.return_value = True - res = self.cr._try_steal_engine_lock(self.ctx, self.resource.id) + current_template_id = self.resource.current_template_id + res = self.cr._stale_resource_needs_retry(self.ctx, + self.resource, + current_template_id) self.assertFalse(res) + mock_ss.assert_not_called() @mock.patch.object(check_resource.CheckResource, '_trigger_rollback') def test_resource_update_failure_sets_stack_state_as_failed( @@ -346,7 +368,8 @@ # lets say C is update-replaced is_update = True trav_id = self.stack.current_traversal - replacementC_id = resC.make_replacement(self.stack.t.id) + replacementC_id = resC.make_replacement(self.stack.t.id, + set(resC.requires)) replacementC, stack, _ = resource.Resource.load(self.ctx, replacementC_id, trav_id, @@ -523,6 +546,20 @@ self.assertFalse(mock_pcr.called) self.assertFalse(mock_csc.called) + @mock.patch.object(resource.Resource, 'load') + def test_requires(self, mock_load, mock_cru, mock_crc, mock_pcr, mock_csc): + mock_load.return_value = self.resource, self.stack, self.stack + res_data = {(1, True): {u'id': 5, u'name': 'A', 'attrs': {}}, + (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}} + self.worker.check_resource(self.ctx, self.resource.id, + self.stack.current_traversal, + sync_point.serialize_input_data(res_data), + self.is_update, {}) + mock_cru.assert_called_once_with( + self.resource, self.resource.stack.t.id, + {5, 3}, self.worker.engine_id, + self.stack, mock.ANY) + @mock.patch.object(check_resource, 'check_stack_complete') @mock.patch.object(check_resource, 'propagate_check_resource') @@ -563,7 +600,7 @@ self.assertFalse(mock_cru.called) mock_crc.assert_called_once_with( self.resource, 
self.resource.stack.t.id, - {}, self.worker.engine_id, + self.worker.engine_id, tr(), mock.ANY) @mock.patch.object(stack.Stack, 'time_remaining') @@ -576,7 +613,7 @@ self.is_update, None) mock_crc.assert_called_once_with(self.resource, self.resource.stack.t.id, - {}, self.worker.engine_id, + self.worker.engine_id, tr(), mock.ANY) self.assertFalse(mock_cru.called) self.assertFalse(mock_pcr.called) @@ -677,7 +714,7 @@ def test_check_resource_update_init_action(self, mock_update, mock_create): self.resource.action = 'INIT' check_resource.check_resource_update( - self.resource, self.resource.stack.t.id, {}, 'engine-id', + self.resource, self.resource.stack.t.id, set(), 'engine-id', self.stack, None) self.assertTrue(mock_create.called) self.assertFalse(mock_update.called) @@ -688,7 +725,7 @@ self, mock_update, mock_create): self.resource.action = 'CREATE' check_resource.check_resource_update( - self.resource, self.resource.stack.t.id, {}, 'engine-id', + self.resource, self.resource.stack.t.id, set(), 'engine-id', self.stack, None) self.assertFalse(mock_create.called) self.assertTrue(mock_update.called) @@ -699,7 +736,7 @@ self, mock_update, mock_create): self.resource.action = 'UPDATE' check_resource.check_resource_update( - self.resource, self.resource.stack.t.id, {}, 'engine-id', + self.resource, self.resource.stack.t.id, set(), 'engine-id', self.stack, None) self.assertFalse(mock_create.called) self.assertTrue(mock_update.called) @@ -708,7 +745,7 @@ def test_check_resource_cleanup_delete(self, mock_delete): self.resource.current_template_id = 'new-template-id' check_resource.check_resource_cleanup( - self.resource, self.resource.stack.t.id, {}, 'engine-id', + self.resource, self.resource.stack.t.id, 'engine-id', self.stack.timeout_secs(), None) self.assertTrue(mock_delete.called) diff -Nru heat-11.0.0~b1/heat/tests/engine/tools.py heat-11.0.0~b2/heat/tests/engine/tools.py --- heat-11.0.0~b1/heat/tests/engine/tools.py 2018-04-19 19:36:19.000000000 +0000 +++ 
heat-11.0.0~b2/heat/tests/engine/tools.py 2018-06-07 22:12:28.000000000 +0000 @@ -12,7 +12,6 @@ import sys -import mox import six from heat.common import template_format @@ -168,48 +167,39 @@ return stack -def setup_keystone_mocks(mocks, stack): +def setup_keystone_mocks_with_mock(test_case, stack): fkc = fake_ks.FakeKeystoneClient() - mocks.StubOutWithMock(keystone.KeystoneClientPlugin, '_create') - keystone.KeystoneClientPlugin._create().AndReturn(fkc) + test_case.patchobject(keystone.KeystoneClientPlugin, '_create') + keystone.KeystoneClientPlugin._create.return_value = fkc -def setup_mock_for_image_constraint(mocks, imageId_input, - imageId_output=744): - mocks.StubOutWithMock(glance.GlanceClientPlugin, - 'find_image_by_name_or_id') - glance.GlanceClientPlugin.find_image_by_name_or_id( - imageId_input).MultipleTimes().AndReturn(imageId_output) +def setup_mock_for_image_constraint_with_mock(test_case, imageId_input, + imageId_output=744): + test_case.patchobject(glance.GlanceClientPlugin, + 'find_image_by_name_or_id', + return_value=imageId_output) -def setup_mocks(mocks, stack, mock_image_constraint=True, - mock_keystone=True): - fc = fakes_nova.FakeClient() - mocks.StubOutWithMock(instances.Instance, 'client') - instances.Instance.client().MultipleTimes().AndReturn(fc) - mocks.StubOutWithMock(nova.NovaClientPlugin, '_create') - nova.NovaClientPlugin._create().AndReturn(fc) +def validate_setup_mocks_with_mock(stack, fc, mock_image_constraint=True, + validate_create=True): instance = stack['WebServer'] metadata = instance.metadata_get() if mock_image_constraint: - setup_mock_for_image_constraint(mocks, - instance.properties['ImageId']) - - if mock_keystone: - setup_keystone_mocks(mocks, stack) + m_image = glance.GlanceClientPlugin.find_image_by_name_or_id + m_image.assert_called_with( + instance.properties['ImageId']) user_data = instance.properties['UserData'] server_userdata = instance.client_plugin().build_userdata( metadata, user_data, 'ec2-user') - 
mocks.StubOutWithMock(nova.NovaClientPlugin, 'build_userdata') - nova.NovaClientPlugin.build_userdata( - metadata, - user_data, - 'ec2-user').AndReturn(server_userdata) + nova.NovaClientPlugin.build_userdata.assert_called_with( + metadata, user_data, 'ec2-user') + + if not validate_create: + return - mocks.StubOutWithMock(fc.servers, 'create') - fc.servers.create( + fc.servers.create.assert_called_once_with( image=744, flavor=3, key_name='test', @@ -220,36 +210,54 @@ meta=None, nics=None, availability_zone=None, - block_device_mapping=None).AndReturn(fc.servers.list()[4]) + block_device_mapping=None) + + +def setup_mocks_with_mock(testcase, stack, mock_image_constraint=True, + mock_keystone=True): + fc = fakes_nova.FakeClient() + testcase.patchobject(instances.Instance, 'client', return_value=fc) + testcase.patchobject(nova.NovaClientPlugin, '_create', return_value=fc) + instance = stack['WebServer'] + metadata = instance.metadata_get() + if mock_image_constraint: + setup_mock_for_image_constraint_with_mock( + testcase, instance.properties['ImageId']) + + if mock_keystone: + setup_keystone_mocks_with_mock(testcase, stack) + + user_data = instance.properties['UserData'] + server_userdata = instance.client_plugin().build_userdata( + metadata, user_data, 'ec2-user') + testcase.patchobject(nova.NovaClientPlugin, 'build_userdata', + return_value=server_userdata) + + testcase.patchobject(fc.servers, 'create') + + fc.servers.create.return_value = fc.servers.list()[4] return fc -def setup_stack(stack_name, ctx, create_res=True, convergence=False): +def setup_stack_with_mock(test_case, stack_name, ctx, create_res=True, + convergence=False): stack = get_stack(stack_name, ctx, convergence=convergence) stack.store() if create_res: - m = mox.Mox() - setup_mocks(m, stack) - m.ReplayAll() + fc = setup_mocks_with_mock(test_case, stack) stack.create() stack._persist_state() - m.UnsetStubs() + validate_setup_mocks_with_mock(stack, fc) return stack -def clean_up_stack(stack, 
delete_res=True): +def clean_up_stack(test_case, stack, delete_res=True): if delete_res: - m = mox.Mox() fc = fakes_nova.FakeClient() - m.StubOutWithMock(instances.Instance, 'client') - instances.Instance.client().MultipleTimes().AndReturn(fc) - m.StubOutWithMock(fc.servers, 'delete') - fc.servers.delete(mox.IgnoreArg()).AndRaise( - fakes_nova.fake_exception()) - m.ReplayAll() + test_case.patchobject(instances.Instance, 'client', return_value=fc) + test_case.patchobject(fc.servers, 'delete', + side_effect=fakes_nova.fake_exception()) stack.delete() - if delete_res: - m.UnsetStubs() def stack_context(stack_name, create_res=True, convergence=False): @@ -265,14 +273,14 @@ def create_stack(): ctx = getattr(test_case, 'ctx', None) if ctx is not None: - stack = setup_stack(stack_name, ctx, - create_res, convergence) + stack = setup_stack_with_mock(test_case, stack_name, ctx, + create_res, convergence) setattr(test_case, 'stack', stack) def delete_stack(): stack = getattr(test_case, 'stack', None) if stack is not None and stack.id is not None: - clean_up_stack(stack, delete_res=create_res) + clean_up_stack(test_case, stack, delete_res=create_res) create_stack() try: diff -Nru heat-11.0.0~b1/heat/tests/openstack/neutron/test_neutron_firewall.py heat-11.0.0~b2/heat/tests/openstack/neutron/test_neutron_firewall.py --- heat-11.0.0~b1/heat/tests/openstack/neutron/test_neutron_firewall.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/openstack/neutron/test_neutron_firewall.py 2018-06-07 22:12:28.000000000 +0000 @@ -79,29 +79,18 @@ def setUp(self): super(FirewallTest, self).setUp() - self.m.StubOutWithMock(neutronclient.Client, 'create_firewall') - self.m.StubOutWithMock(neutronclient.Client, 'delete_firewall') - self.m.StubOutWithMock(neutronclient.Client, 'show_firewall') - self.m.StubOutWithMock(neutronclient.Client, 'update_firewall') + self.mockclient = mock.Mock(spec=neutronclient.Client) + self.patchobject(neutronclient, 'Client', 
return_value=self.mockclient) self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_firewall(self, value_specs=True): snippet = template_format.parse(firewall_template) + self.mockclient.create_firewall.return_value = { + 'firewall': {'id': '5678'} + } if not value_specs: del snippet['resources']['firewall']['properties']['value_specs'] - neutronclient.Client.create_firewall({ - 'firewall': { - 'name': 'test-firewall', 'admin_state_up': True, - 'firewall_policy_id': 'policy-id', 'shared': True}} - ).AndReturn({'firewall': {'id': '5678'}}) - else: - neutronclient.Client.create_firewall({ - 'firewall': { - 'name': 'test-firewall', 'admin_state_up': True, - 'router_ids': ['router_1', 'router_2'], - 'firewall_policy_id': 'policy-id', 'shared': True}} - ).AndReturn({'firewall': {'id': '5678'}}) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) @@ -111,21 +100,28 @@ def test_create(self): rsrc = self.create_firewall() - neutronclient.Client.show_firewall('5678').AndReturn( - {'firewall': {'status': 'ACTIVE'}}) - self.m.ReplayAll() + self.mockclient.show_firewall.return_value = { + 'firewall': {'status': 'ACTIVE'} + } scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall.assert_called_once_with({ + 'firewall': { + 'name': 'test-firewall', 'admin_state_up': True, + 'router_ids': ['router_1', 'router_2'], + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) + self.mockclient.show_firewall.assert_called_once_with('5678') def test_create_failed_error_status(self): cfg.CONF.set_override('action_retry_limit', 0) rsrc = self.create_firewall() - neutronclient.Client.show_firewall('5678').AndReturn( - {'firewall': {'status': 'PENDING_CREATE'}}) - neutronclient.Client.show_firewall('5678').AndReturn( - {'firewall': {'status': 'ERROR'}}) - self.m.ReplayAll() + 
self.mockclient.show_firewall.side_effect = [ + {'firewall': {'status': 'PENDING_CREATE'}}, + {'firewall': {'status': 'ERROR'}}, + ] error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.create)) @@ -134,16 +130,19 @@ 'Went to status ERROR due to "Error in Firewall"', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) - self.m.VerifyAll() - def test_create_failed(self): - neutronclient.Client.create_firewall({ + self.mockclient.create_firewall.assert_called_once_with({ 'firewall': { 'name': 'test-firewall', 'admin_state_up': True, 'router_ids': ['router_1', 'router_2'], - 'firewall_policy_id': 'policy-id', 'shared': True}} - ).AndRaise(exceptions.NeutronClientException()) - self.m.ReplayAll() + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) + self.mockclient.show_firewall.assert_called_with('5678') + + def test_create_failed(self): + self.mockclient.create_firewall.side_effect = ( + exceptions.NeutronClientException()) snippet = template_format.parse(firewall_template) stack = utils.parse_stack(snippet) @@ -158,44 +157,67 @@ 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall.assert_called_once_with({ + 'firewall': { + 'name': 'test-firewall', 'admin_state_up': True, + 'router_ids': ['router_1', 'router_2'], + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) def test_delete(self): rsrc = self.create_firewall() - neutronclient.Client.show_firewall('5678').AndReturn( - {'firewall': {'status': 'ACTIVE'}}) + self.mockclient.show_firewall.side_effect = [ + {'firewall': {'status': 'ACTIVE'}}, + exceptions.NeutronClientException(status_code=404), + ] + self.mockclient.delete_firewall.return_value = None - neutronclient.Client.delete_firewall('5678') - neutronclient.Client.show_firewall('5678').AndRaise( - exceptions.NeutronClientException(status_code=404)) - - self.m.ReplayAll() 
scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall.assert_called_once_with({ + 'firewall': { + 'name': 'test-firewall', 'admin_state_up': True, + 'router_ids': ['router_1', 'router_2'], + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) + self.mockclient.delete_firewall.assert_called_once_with('5678') + self.mockclient.show_firewall.assert_called_with('5678') def test_delete_already_gone(self): - neutronclient.Client.delete_firewall('5678').AndRaise( + rsrc = self.create_firewall() + self.mockclient.show_firewall.return_value = { + 'firewall': {'status': 'ACTIVE'} + } + self.mockclient.delete_firewall.side_effect = ( exceptions.NeutronClientException(status_code=404)) - rsrc = self.create_firewall() - neutronclient.Client.show_firewall('5678').AndReturn( - {'firewall': {'status': 'ACTIVE'}}) - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall.assert_called_once_with({ + 'firewall': { + 'name': 'test-firewall', 'admin_state_up': True, + 'router_ids': ['router_1', 'router_2'], + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) + self.mockclient.delete_firewall.assert_called_once_with('5678') + self.mockclient.show_firewall.assert_called_once_with('5678') def test_delete_failed(self): - neutronclient.Client.delete_firewall('5678').AndRaise( + rsrc = self.create_firewall() + self.mockclient.show_firewall.return_value = { + 'firewall': {'status': 'ACTIVE'} + } + self.mockclient.delete_firewall.side_effect = ( exceptions.NeutronClientException(status_code=400)) - rsrc = self.create_firewall() - neutronclient.Client.show_firewall('5678').AndReturn( - {'firewall': {'status': 'ACTIVE'}}) - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = 
self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) @@ -204,45 +226,72 @@ 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall.assert_called_once_with({ + 'firewall': { + 'name': 'test-firewall', 'admin_state_up': True, + 'router_ids': ['router_1', 'router_2'], + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) + self.mockclient.delete_firewall.assert_called_once_with('5678') + self.mockclient.show_firewall.assert_called_once_with('5678') def test_attribute(self): rsrc = self.create_firewall() - neutronclient.Client.show_firewall('5678').AndReturn( - {'firewall': {'status': 'ACTIVE'}}) - neutronclient.Client.show_firewall('5678').MultipleTimes( - ).AndReturn( - {'firewall': {'admin_state_up': True, - 'firewall_policy_id': 'policy-id', - 'shared': True}}) - self.m.ReplayAll() + self.mockclient.show_firewall.return_value = { + 'firewall': { + 'status': 'ACTIVE', + 'admin_state_up': True, + 'firewall_policy_id': 'policy-id', + 'shared': True, + } + } + scheduler.TaskRunner(rsrc.create)() self.assertIs(True, rsrc.FnGetAtt('admin_state_up')) self.assertEqual('This attribute is currently unsupported in neutron ' 'firewall resource.', rsrc.FnGetAtt('shared')) self.assertEqual('policy-id', rsrc.FnGetAtt('firewall_policy_id')) - self.m.VerifyAll() + + self.mockclient.create_firewall.assert_called_once_with({ + 'firewall': { + 'name': 'test-firewall', 'admin_state_up': True, + 'router_ids': ['router_1', 'router_2'], + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) + self.mockclient.show_firewall.assert_called_with('5678') def test_attribute_failed(self): rsrc = self.create_firewall() - neutronclient.Client.show_firewall('5678').AndReturn( - {'firewall': {'status': 'ACTIVE'}}) - self.m.ReplayAll() + self.mockclient.show_firewall.return_value = { + 'firewall': {'status': 'ACTIVE'} + } + 
scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'subnet_id') self.assertEqual( 'The Referenced Attribute (firewall subnet_id) is ' 'incorrect.', six.text_type(error)) - self.m.VerifyAll() + + self.mockclient.create_firewall.assert_called_once_with({ + 'firewall': { + 'name': 'test-firewall', 'admin_state_up': True, + 'router_ids': ['router_1', 'router_2'], + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) + self.mockclient.show_firewall.assert_called_once_with('5678') def test_update(self): rsrc = self.create_firewall() - neutronclient.Client.show_firewall('5678').AndReturn( - {'firewall': {'status': 'ACTIVE'}}) - neutronclient.Client.update_firewall( - '5678', {'firewall': {'admin_state_up': False}}) - self.m.ReplayAll() + self.mockclient.show_firewall.return_value = { + 'firewall': {'status': 'ACTIVE'} + } + self.mockclient.update_firewall.return_value = None + scheduler.TaskRunner(rsrc.create)() props = self.fw_props.copy() @@ -250,16 +299,23 @@ update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() - self.m.VerifyAll() + self.mockclient.create_firewall.assert_called_once_with({ + 'firewall': { + 'name': 'test-firewall', 'admin_state_up': True, + 'router_ids': ['router_1', 'router_2'], + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) + self.mockclient.show_firewall.assert_called_once_with('5678') + self.mockclient.update_firewall.assert_called_once_with( + '5678', {'firewall': {'admin_state_up': False}}) def test_update_with_value_specs(self): rsrc = self.create_firewall(value_specs=False) - neutronclient.Client.show_firewall('5678').AndReturn( - {'firewall': {'status': 'ACTIVE'}}) - neutronclient.Client.update_firewall( - '5678', {'firewall': {'router_ids': ['router_1', - 'router_2']}}) - self.m.ReplayAll() + self.mockclient.show_firewall.return_value = { + 'firewall': {'status': 'ACTIVE'} + } + 
scheduler.TaskRunner(rsrc.create)() prop_diff = { 'value_specs': { @@ -270,11 +326,21 @@ rsrc.type(), prop_diff) rsrc.handle_update(update_snippet, {}, prop_diff) - self.m.VerifyAll() + + self.mockclient.create_firewall.assert_called_once_with({ + 'firewall': { + 'name': 'test-firewall', 'admin_state_up': True, + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) + self.mockclient.show_firewall.assert_called_once_with('5678') + self.mockclient.update_firewall.assert_called_once_with( + '5678', {'firewall': {'router_ids': ['router_1', + 'router_2']}}) def test_get_live_state(self): rsrc = self.create_firewall(value_specs=True) - rsrc.client().show_firewall = mock.Mock(return_value={ + self.mockclient.show_firewall.return_value = { 'firewall': { 'status': 'ACTIVE', 'router_ids': ['router_1', 'router_2'], @@ -285,8 +351,8 @@ 'id': '11425cd4-41b6-4fd4-97aa-17629c63de61', 'description': '' } - }) - self.m.ReplayAll() + } + scheduler.TaskRunner(rsrc.create)() reality = rsrc.get_live_state(rsrc.properties) @@ -301,26 +367,30 @@ } self.assertEqual(expected, reality) - self.m.VerifyAll() + + self.mockclient.create_firewall.assert_called_once_with({ + 'firewall': { + 'name': 'test-firewall', 'admin_state_up': True, + 'router_ids': ['router_1', 'router_2'], + 'firewall_policy_id': 'policy-id', 'shared': True + } + }) + self.mockclient.show_firewall.assert_called_with('5678') class FirewallPolicyTest(common.HeatTestCase): def setUp(self): super(FirewallPolicyTest, self).setUp() - self.m.StubOutWithMock(neutronclient.Client, 'create_firewall_policy') - self.m.StubOutWithMock(neutronclient.Client, 'delete_firewall_policy') - self.m.StubOutWithMock(neutronclient.Client, 'show_firewall_policy') - self.m.StubOutWithMock(neutronclient.Client, 'update_firewall_policy') + self.mockclient = mock.Mock(spec=neutronclient.Client) + self.patchobject(neutronclient, 'Client', return_value=self.mockclient) self.patchobject(neutron.NeutronClientPlugin, 'has_extension', 
return_value=True) def create_firewall_policy(self): - neutronclient.Client.create_firewall_policy({ - 'firewall_policy': { - 'name': 'test-firewall-policy', 'shared': True, - 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2']}} - ).AndReturn({'firewall_policy': {'id': '5678'}}) + self.mockclient.create_firewall_policy.return_value = { + 'firewall_policy': {'id': '5678'} + } snippet = template_format.parse(firewall_policy_template) self.stack = utils.parse_stack(snippet) @@ -331,18 +401,20 @@ def test_create(self): rsrc = self.create_firewall_policy() - self.m.ReplayAll() + scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() - def test_create_failed(self): - neutronclient.Client.create_firewall_policy({ + self.mockclient.create_firewall_policy.assert_called_once_with({ 'firewall_policy': { 'name': 'test-firewall-policy', 'shared': True, - 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2']}} - ).AndRaise(exceptions.NeutronClientException()) - self.m.ReplayAll() + 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2'] + } + }) + + def test_create_failed(self): + self.mockclient.create_firewall_policy.side_effect = ( + exceptions.NeutronClientException()) snippet = template_format.parse(firewall_policy_template) stack = utils.parse_stack(snippet) @@ -357,37 +429,56 @@ 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall_policy.assert_called_once_with({ + 'firewall_policy': { + 'name': 'test-firewall-policy', 'shared': True, + 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2'] + } + }) def test_delete(self): - neutronclient.Client.delete_firewall_policy('5678') - neutronclient.Client.show_firewall_policy('5678').AndRaise( + rsrc = self.create_firewall_policy() + self.mockclient.delete_firewall_policy.return_value = None + 
self.mockclient.show_firewall_policy.side_effect = ( exceptions.NeutronClientException(status_code=404)) - rsrc = self.create_firewall_policy() - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall_policy.assert_called_once_with({ + 'firewall_policy': { + 'name': 'test-firewall-policy', 'shared': True, + 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2'] + } + }) + self.mockclient.delete_firewall_policy.assert_called_once_with('5678') + self.mockclient.show_firewall_policy.assert_called_once_with('5678') def test_delete_already_gone(self): - neutronclient.Client.delete_firewall_policy('5678').AndRaise( + rsrc = self.create_firewall_policy() + self.mockclient.delete_firewall_policy.side_effect = ( exceptions.NeutronClientException(status_code=404)) - rsrc = self.create_firewall_policy() - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall_policy.assert_called_once_with({ + 'firewall_policy': { + 'name': 'test-firewall-policy', 'shared': True, + 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2'] + } + }) + self.mockclient.delete_firewall_policy.assert_called_once_with('5678') + self.mockclient.show_firewall_policy.assert_not_called() def test_delete_failed(self): - neutronclient.Client.delete_firewall_policy('5678').AndRaise( + rsrc = self.create_firewall_policy() + self.mockclient.delete_firewall_policy.side_effect = ( exceptions.NeutronClientException(status_code=400)) - rsrc = self.create_firewall_policy() - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) @@ -396,35 +487,56 @@ 'An unknown exception occurred.', six.text_type(error)) 
self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall_policy.assert_called_once_with({ + 'firewall_policy': { + 'name': 'test-firewall-policy', 'shared': True, + 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2'] + } + }) + self.mockclient.delete_firewall_policy.assert_called_once_with('5678') + self.mockclient.show_firewall_policy.assert_not_called() def test_attribute(self): rsrc = self.create_firewall_policy() - neutronclient.Client.show_firewall_policy('5678').MultipleTimes( - ).AndReturn( - {'firewall_policy': {'audited': True, 'shared': True}}) - self.m.ReplayAll() + self.mockclient.show_firewall_policy.return_value = { + 'firewall_policy': {'audited': True, 'shared': True} + } + scheduler.TaskRunner(rsrc.create)() self.assertIs(True, rsrc.FnGetAtt('audited')) self.assertIs(True, rsrc.FnGetAtt('shared')) - self.m.VerifyAll() + + self.mockclient.create_firewall_policy.assert_called_once_with({ + 'firewall_policy': { + 'name': 'test-firewall-policy', 'shared': True, + 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2'] + } + }) + self.mockclient.show_firewall_policy.assert_called_with('5678') def test_attribute_failed(self): rsrc = self.create_firewall_policy() - self.m.ReplayAll() + scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'subnet_id') self.assertEqual( 'The Referenced Attribute (firewall_policy subnet_id) is ' 'incorrect.', six.text_type(error)) - self.m.VerifyAll() + + self.mockclient.create_firewall_policy.assert_called_once_with({ + 'firewall_policy': { + 'name': 'test-firewall-policy', 'shared': True, + 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2'] + } + }) + self.mockclient.show_firewall_policy.assert_not_called() def test_update(self): rsrc = self.create_firewall_policy() - neutronclient.Client.update_firewall_policy( - '5678', {'firewall_policy': {'firewall_rules': ['3', '4']}}) - 
self.m.ReplayAll() + self.mockclient.update_firewall_policy.return_value = None + scheduler.TaskRunner(rsrc.create)() props = self.tmpl['resources']['firewall_policy']['properties'].copy() @@ -432,27 +544,29 @@ update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() - self.m.VerifyAll() + self.mockclient.create_firewall_policy.assert_called_once_with({ + 'firewall_policy': { + 'name': 'test-firewall-policy', 'shared': True, + 'audited': True, 'firewall_rules': ['rule-id-1', 'rule-id-2'] + } + }) + self.mockclient.update_firewall_policy.assert_called_once_with( + '5678', {'firewall_policy': {'firewall_rules': ['3', '4']}}) class FirewallRuleTest(common.HeatTestCase): def setUp(self): super(FirewallRuleTest, self).setUp() - self.m.StubOutWithMock(neutronclient.Client, 'create_firewall_rule') - self.m.StubOutWithMock(neutronclient.Client, 'delete_firewall_rule') - self.m.StubOutWithMock(neutronclient.Client, 'show_firewall_rule') - self.m.StubOutWithMock(neutronclient.Client, 'update_firewall_rule') + self.mockclient = mock.Mock(spec=neutronclient.Client) + self.patchobject(neutronclient, 'Client', return_value=self.mockclient) self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) def create_firewall_rule(self): - neutronclient.Client.create_firewall_rule({ - 'firewall_rule': { - 'name': 'test-firewall-rule', 'shared': True, - 'action': 'allow', 'protocol': 'tcp', 'enabled': True, - 'ip_version': "4"}} - ).AndReturn({'firewall_rule': {'id': '5678'}}) + self.mockclient.create_firewall_rule.return_value = { + 'firewall_rule': {'id': '5678'} + } snippet = template_format.parse(firewall_rule_template) self.stack = utils.parse_stack(snippet) @@ -463,10 +577,17 @@ def test_create(self): rsrc = self.create_firewall_rule() - self.m.ReplayAll() + scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + + 
self.mockclient.create_firewall_rule.assert_called_once_with({ + 'firewall_rule': { + 'name': 'test-firewall-rule', 'shared': True, + 'action': 'allow', 'protocol': 'tcp', 'enabled': True, + 'ip_version': "4" + } + }) def test_validate_failed_with_string_None_protocol(self): snippet = template_format.parse(firewall_rule_template) @@ -479,13 +600,9 @@ self.assertRaises(exception.StackValidationFailed, rsrc.validate) def test_create_with_protocol_any(self): - neutronclient.Client.create_firewall_rule({ - 'firewall_rule': { - 'name': 'test-firewall-rule', 'shared': True, - 'action': 'allow', 'protocol': None, 'enabled': True, - 'ip_version': "4"}} - ).AndReturn({'firewall_rule': {'id': '5678'}}) - self.m.ReplayAll() + self.mockclient.create_firewall_rule.return_value = { + 'firewall_rule': {'id': '5678'} + } snippet = template_format.parse(firewall_rule_template) snippet['resources']['firewall_rule']['properties']['protocol'] = 'any' @@ -494,16 +611,18 @@ scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() - def test_create_failed(self): - neutronclient.Client.create_firewall_rule({ + self.mockclient.create_firewall_rule.assert_called_once_with({ 'firewall_rule': { 'name': 'test-firewall-rule', 'shared': True, - 'action': 'allow', 'protocol': 'tcp', 'enabled': True, - 'ip_version': "4"}} - ).AndRaise(exceptions.NeutronClientException()) - self.m.ReplayAll() + 'action': 'allow', 'protocol': None, 'enabled': True, + 'ip_version': "4" + } + }) + + def test_create_failed(self): + self.mockclient.create_firewall_rule.side_effect = ( + exceptions.NeutronClientException()) snippet = template_format.parse(firewall_rule_template) stack = utils.parse_stack(snippet) @@ -518,37 +637,59 @@ 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall_rule.assert_called_once_with({ + 'firewall_rule': { + 'name': 
'test-firewall-rule', 'shared': True, + 'action': 'allow', 'protocol': 'tcp', 'enabled': True, + 'ip_version': "4" + } + }) def test_delete(self): - neutronclient.Client.delete_firewall_rule('5678') - neutronclient.Client.show_firewall_rule('5678').AndRaise( + rsrc = self.create_firewall_rule() + self.mockclient.delete_firewall_rule.return_value = None + self.mockclient.show_firewall_rule.side_effect = ( exceptions.NeutronClientException(status_code=404)) - rsrc = self.create_firewall_rule() - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall_rule.assert_called_once_with({ + 'firewall_rule': { + 'name': 'test-firewall-rule', 'shared': True, + 'action': 'allow', 'protocol': 'tcp', 'enabled': True, + 'ip_version': "4" + } + }) + self.mockclient.delete_firewall_rule.assert_called_once_with('5678') + self.mockclient.show_firewall_rule.assert_called_once_with('5678') def test_delete_already_gone(self): - neutronclient.Client.delete_firewall_rule('5678').AndRaise( + rsrc = self.create_firewall_rule() + self.mockclient.delete_firewall_rule.side_effect = ( exceptions.NeutronClientException(status_code=404)) - rsrc = self.create_firewall_rule() - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall_rule.assert_called_once_with({ + 'firewall_rule': { + 'name': 'test-firewall-rule', 'shared': True, + 'action': 'allow', 'protocol': 'tcp', 'enabled': True, + 'ip_version': "4" + } + }) + self.mockclient.delete_firewall_rule.assert_called_once_with('5678') + self.mockclient.show_firewall_rule.assert_not_called() def test_delete_failed(self): - neutronclient.Client.delete_firewall_rule('5678').AndRaise( + rsrc = self.create_firewall_rule() + 
self.mockclient.delete_firewall_rule.side_effect = ( exceptions.NeutronClientException(status_code=400)) - rsrc = self.create_firewall_rule() - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(rsrc.delete)) @@ -557,35 +698,59 @@ 'An unknown exception occurred.', six.text_type(error)) self.assertEqual((rsrc.DELETE, rsrc.FAILED), rsrc.state) - self.m.VerifyAll() + + self.mockclient.create_firewall_rule.assert_called_once_with({ + 'firewall_rule': { + 'name': 'test-firewall-rule', 'shared': True, + 'action': 'allow', 'protocol': 'tcp', 'enabled': True, + 'ip_version': "4" + } + }) + self.mockclient.delete_firewall_rule.assert_called_once_with('5678') + self.mockclient.show_firewall_rule.assert_not_called() def test_attribute(self): rsrc = self.create_firewall_rule() - neutronclient.Client.show_firewall_rule('5678').MultipleTimes( - ).AndReturn( - {'firewall_rule': {'protocol': 'tcp', 'shared': True}}) - self.m.ReplayAll() + self.mockclient.show_firewall_rule.return_value = { + 'firewall_rule': {'protocol': 'tcp', 'shared': True} + } + scheduler.TaskRunner(rsrc.create)() self.assertEqual('tcp', rsrc.FnGetAtt('protocol')) self.assertIs(True, rsrc.FnGetAtt('shared')) - self.m.VerifyAll() + + self.mockclient.create_firewall_rule.assert_called_once_with({ + 'firewall_rule': { + 'name': 'test-firewall-rule', 'shared': True, + 'action': 'allow', 'protocol': 'tcp', 'enabled': True, + 'ip_version': "4" + } + }) + self.mockclient.show_firewall_rule.assert_called_with('5678') def test_attribute_failed(self): rsrc = self.create_firewall_rule() - self.m.ReplayAll() + scheduler.TaskRunner(rsrc.create)() error = self.assertRaises(exception.InvalidTemplateAttribute, rsrc.FnGetAtt, 'subnet_id') self.assertEqual( 'The Referenced Attribute (firewall_rule subnet_id) is ' 'incorrect.', six.text_type(error)) - self.m.VerifyAll() + + self.mockclient.create_firewall_rule.assert_called_once_with({ + 
'firewall_rule': { + 'name': 'test-firewall-rule', 'shared': True, + 'action': 'allow', 'protocol': 'tcp', 'enabled': True, + 'ip_version': "4" + } + }) + self.mockclient.show_firewall_rule.assert_not_called() def test_update(self): rsrc = self.create_firewall_rule() - neutronclient.Client.update_firewall_rule( - '5678', {'firewall_rule': {'protocol': 'icmp'}}) - self.m.ReplayAll() + self.mockclient.update_firewall_rule.return_value = None + scheduler.TaskRunner(rsrc.create)() props = self.tmpl['resources']['firewall_rule']['properties'].copy() @@ -593,17 +758,33 @@ update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() - self.m.VerifyAll() + self.mockclient.create_firewall_rule.assert_called_once_with({ + 'firewall_rule': { + 'name': 'test-firewall-rule', 'shared': True, + 'action': 'allow', 'protocol': 'tcp', 'enabled': True, + 'ip_version': "4" + } + }) + self.mockclient.update_firewall_rule.assert_called_once_with( + '5678', {'firewall_rule': {'protocol': 'icmp'}}) def test_update_protocol_to_any(self): rsrc = self.create_firewall_rule() - neutronclient.Client.update_firewall_rule( - '5678', {'firewall_rule': {'protocol': None}}) - self.m.ReplayAll() + self.mockclient.update_firewall_rule.return_value = None + scheduler.TaskRunner(rsrc.create)() # update to 'any' protocol props = self.tmpl['resources']['firewall_rule']['properties'].copy() props['protocol'] = 'any' update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() - self.m.VerifyAll() + + self.mockclient.create_firewall_rule.assert_called_once_with({ + 'firewall_rule': { + 'name': 'test-firewall-rule', 'shared': True, + 'action': 'allow', 'protocol': 'tcp', 'enabled': True, + 'ip_version': "4" + } + }) + self.mockclient.update_firewall_rule.assert_called_once_with( + '5678', {'firewall_rule': {'protocol': None}}) diff -Nru heat-11.0.0~b1/heat/tests/openstack/neutron/test_neutron_loadbalancer.py 
heat-11.0.0~b2/heat/tests/openstack/neutron/test_neutron_loadbalancer.py --- heat-11.0.0~b1/heat/tests/openstack/neutron/test_neutron_loadbalancer.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/openstack/neutron/test_neutron_loadbalancer.py 2018-06-07 22:12:28.000000000 +0000 @@ -12,7 +12,6 @@ # under the License. import mock -import mox from neutronclient.common import exceptions from neutronclient.neutron import v2_0 as neutronV20 from neutronclient.v2_0 import client as neutronclient @@ -881,43 +880,52 @@ def setUp(self): super(LoadBalancerTest, self).setUp() self.fc = fakes_nova.FakeClient() - self.m.StubOutWithMock(neutronclient.Client, 'create_member') - self.m.StubOutWithMock(neutronclient.Client, 'delete_member') - self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') + + self.mc = mock.Mock(spec=neutronclient.Client) + self.patchobject(neutronclient, 'Client', return_value=self.mc) + self.patchobject(nova.NovaClientPlugin, '_create') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) - def create_load_balancer(self): - nova.NovaClientPlugin._create().AndReturn(self.fc) - neutronclient.Client.create_member({ - 'member': { - 'pool_id': 'pool123', 'protocol_port': 8080, - 'address': '1.2.3.4'}} - ).AndReturn({'member': {'id': 'member5678'}}) + def create_load_balancer(self, extra_create_mocks=[]): + nova.NovaClientPlugin._create.return_value = self.fc + results = [{'member': {'id': 'member5678'}}] + for m in extra_create_mocks: + results.append(m) + self.mc.create_member.side_effect = results snippet = template_format.parse(lb_template) self.stack = utils.parse_stack(snippet) resource_defns = self.stack.t.resource_definitions(self.stack) return loadbalancer.LoadBalancer( 'lb', resource_defns['lb'], self.stack) + def validate_create_load_balancer(self, create_count=1): + if create_count > 1: + self.assertEqual(create_count, self.mc.create_member.call_count) + self.mc.create_member.assert_called_with({ + 
'member': { + 'pool_id': 'pool123', 'protocol_port': 8080, + 'address': '4.5.6.7'}} + ) + else: + self.mc.create_member.assert_called_once_with({ + 'member': { + 'pool_id': 'pool123', 'protocol_port': 8080, + 'address': '1.2.3.4'}} + ) + nova.NovaClientPlugin._create.assert_called_once_with() + def test_create(self): rsrc = self.create_load_balancer() - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() self.assertEqual((rsrc.CREATE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + self.validate_create_load_balancer() def test_update(self): - rsrc = self.create_load_balancer() - neutronclient.Client.delete_member(u'member5678') - neutronclient.Client.create_member({ - 'member': { - 'pool_id': 'pool123', 'protocol_port': 8080, - 'address': '4.5.6.7'}} - ).AndReturn({'member': {'id': 'memberxyz'}}) + rsrc = self.create_load_balancer( + extra_create_mocks=[{'member': {'id': 'memberxyz'}}]) - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = dict(rsrc.properties) @@ -925,14 +933,14 @@ update_template = rsrc.t.freeze(properties=props) scheduler.TaskRunner(rsrc.update, update_template)() - self.m.VerifyAll() + self.validate_create_load_balancer(create_count=2) + self.mc.delete_member.assert_called_once_with(u'member5678') def test_update_missing_member(self): rsrc = self.create_load_balancer() - neutronclient.Client.delete_member(u'member5678').AndRaise( - exceptions.NeutronClientException(status_code=404)) + self.mc.delete_member.side_effect = [ + exceptions.NeutronClientException(status_code=404)] - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() props = dict(rsrc.properties) @@ -941,103 +949,90 @@ scheduler.TaskRunner(rsrc.update, update_template)() self.assertEqual((rsrc.UPDATE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + self.mc.delete_member.assert_called_once_with(u'member5678') + self.validate_create_load_balancer() def test_delete(self): rsrc = self.create_load_balancer() - neutronclient.Client.delete_member(u'member5678') - 
self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + self.mc.delete_member.assert_called_once_with(u'member5678') + self.validate_create_load_balancer() def test_delete_missing_member(self): rsrc = self.create_load_balancer() - neutronclient.Client.delete_member(u'member5678').AndRaise( - exceptions.NeutronClientException(status_code=404)) + self.mc.delete_member.side_effect = [ + exceptions.NeutronClientException(status_code=404)] - self.m.ReplayAll() scheduler.TaskRunner(rsrc.create)() scheduler.TaskRunner(rsrc.delete)() self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + self.mc.delete_member.assert_called_once_with(u'member5678') + self.validate_create_load_balancer() class PoolUpdateHealthMonitorsTest(common.HeatTestCase): def setUp(self): super(PoolUpdateHealthMonitorsTest, self).setUp() - self.m.StubOutWithMock(neutronclient.Client, 'create_pool') - self.m.StubOutWithMock(neutronclient.Client, 'delete_pool') - self.m.StubOutWithMock(neutronclient.Client, 'show_pool') - self.m.StubOutWithMock(neutronclient.Client, 'update_pool') - self.m.StubOutWithMock(neutronclient.Client, - 'associate_health_monitor') - self.m.StubOutWithMock(neutronclient.Client, - 'disassociate_health_monitor') - self.m.StubOutWithMock(neutronclient.Client, 'create_health_monitor') - self.m.StubOutWithMock(neutronclient.Client, 'delete_health_monitor') - self.m.StubOutWithMock(neutronclient.Client, 'show_health_monitor') - self.m.StubOutWithMock(neutronclient.Client, 'update_health_monitor') - self.m.StubOutWithMock(neutronclient.Client, 'create_vip') - self.m.StubOutWithMock(neutronclient.Client, 'delete_vip') - self.m.StubOutWithMock(neutronclient.Client, 'show_vip') - self.m.StubOutWithMock(neutronV20, 'find_resourceid_by_name_or_id') + self.mc = mock.Mock(spec=neutronclient.Client) + self.patchobject(neutronclient, 'Client', 
return_value=self.mc) + self.patchobject(neutronV20, 'find_resourceid_by_name_or_id') self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) - def _create_pool_with_health_monitors(self, stack_name): - neutronclient.Client.create_health_monitor({ + def validate_create_pool_with_health_monitors(self): + self.mc.create_health_monitor.assert_called_with({ 'health_monitor': { 'delay': 3, 'max_retries': 5, 'type': u'HTTP', 'timeout': 10, 'admin_state_up': True}} - ).AndReturn({'health_monitor': {'id': '5555'}}) - - neutronclient.Client.create_health_monitor({ - 'health_monitor': { - 'delay': 3, 'max_retries': 5, 'type': u'HTTP', - 'timeout': 10, 'admin_state_up': True}} - ).AndReturn({'health_monitor': {'id': '6666'}}) - self.stub_SubnetConstraint_validate() - neutronV20.find_resourceid_by_name_or_id( - mox.IsA(neutronclient.Client), + ) + self.assertEqual(2, self.mc.create_health_monitor.call_count) + neutronV20.find_resourceid_by_name_or_id.assert_called_with( + mock.ANY, 'subnet', 'sub123', cmd_resource=None, - ).MultipleTimes().AndReturn('sub123') - neutronclient.Client.create_pool({ + ) + self.mc.create_pool.assert_called_once_with({ 'pool': { 'subnet_id': 'sub123', 'protocol': u'HTTP', - 'name': utils.PhysName(stack_name, 'pool'), + 'name': utils.PhysName(self.stack_name, 'pool'), 'lb_method': 'ROUND_ROBIN', 'admin_state_up': True}} - ).AndReturn({'pool': {'id': '5678'}}) - neutronclient.Client.associate_health_monitor( - '5678', {'health_monitor': {'id': '5555'}}).InAnyOrder() - neutronclient.Client.associate_health_monitor( - '5678', {'health_monitor': {'id': '6666'}}).InAnyOrder() - neutronclient.Client.create_vip({ + ) + self.mc.associate_health_monitor.assert_has_calls([mock.call( + '5678', {'health_monitor': {'id': '5555'}}), mock.call( + '5678', {'health_monitor': {'id': '6666'}})], + any_order=True) + self.assertEqual(2, self.mc.associate_health_monitor.call_count) + self.mc.create_vip.assert_called_once_with({ 'vip': { 
'protocol': u'HTTP', 'name': 'pool.vip', 'admin_state_up': True, 'subnet_id': u'sub123', 'pool_id': '5678', 'protocol_port': 80}} - ).AndReturn({'vip': {'id': 'xyz'}}) - neutronclient.Client.show_pool('5678').AndReturn( - {'pool': {'status': 'ACTIVE'}}) - neutronclient.Client.show_vip('xyz').AndReturn( - {'vip': {'status': 'ACTIVE'}}) + ) + self.mc.show_pool.assert_called_once_with('5678') + self.mc.show_vip.assert_called_once_with('xyz') + + def _create_pool_with_health_monitors(self, stack_name): + self.stack_name = stack_name + self.mc.create_health_monitor.side_effect = [ + {'health_monitor': {'id': '5555'}}, + {'health_monitor': {'id': '6666'}}] + + self.stub_SubnetConstraint_validate() + neutronV20.find_resourceid_by_name_or_id.return_value = 'sub123' + self.mc.create_pool.return_value = {'pool': {'id': '5678'}} + self.mc.create_vip.return_value = {'vip': {'id': 'xyz'}} + self.mc.show_pool.return_value = {'pool': {'status': 'ACTIVE'}} + self.mc.show_vip.return_value = {'vip': {'status': 'ACTIVE'}} def test_update_pool_with_references_to_health_monitors(self): snippet = template_format.parse(pool_with_health_monitors_template) self.stack = utils.parse_stack(snippet) - self._create_pool_with_health_monitors(self.stack.name) - - neutronclient.Client.disassociate_health_monitor( - '5678', mox.IsA(six.string_types)) - - self.m.ReplayAll() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) @@ -1048,19 +1043,15 @@ self.stack.update(updated_stack) self.assertEqual((self.stack.UPDATE, self.stack.COMPLETE), self.stack.state) - self.m.VerifyAll() + self.validate_create_pool_with_health_monitors() + self.mc.disassociate_health_monitor.assert_called_once_with( + '5678', mock.ANY) def test_update_pool_with_empty_list_of_health_monitors(self): snippet = template_format.parse(pool_with_health_monitors_template) self.stack = utils.parse_stack(snippet) self._create_pool_with_health_monitors(self.stack.name) - 
neutronclient.Client.disassociate_health_monitor( - '5678', '5555').InAnyOrder() - neutronclient.Client.disassociate_health_monitor( - '5678', '6666').InAnyOrder() - - self.m.ReplayAll() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) @@ -1070,19 +1061,18 @@ self.stack.update(updated_stack) self.assertEqual((self.stack.UPDATE, self.stack.COMPLETE), self.stack.state) - self.m.VerifyAll() + self.mc.disassociate_health_monitor.assert_has_calls( + [mock.call('5678', '5555'), mock.call('5678', '6666')], + any_order=True) + self.assertEqual(2, self.mc.disassociate_health_monitor.call_count) + + self.validate_create_pool_with_health_monitors() def test_update_pool_without_health_monitors(self): snippet = template_format.parse(pool_with_health_monitors_template) self.stack = utils.parse_stack(snippet) self._create_pool_with_health_monitors(self.stack.name) - neutronclient.Client.disassociate_health_monitor( - '5678', '5555').InAnyOrder() - neutronclient.Client.disassociate_health_monitor( - '5678', '6666').InAnyOrder() - - self.m.ReplayAll() self.stack.create() self.assertEqual((self.stack.CREATE, self.stack.COMPLETE), self.stack.state) @@ -1092,4 +1082,8 @@ self.stack.update(updated_stack) self.assertEqual((self.stack.UPDATE, self.stack.COMPLETE), self.stack.state) - self.m.VerifyAll() + self.mc.disassociate_health_monitor.assert_has_calls( + [mock.call('5678', '5555'), mock.call('5678', '6666')], + any_order=True) + self.assertEqual(2, self.mc.disassociate_health_monitor.call_count) + self.validate_create_pool_with_health_monitors() diff -Nru heat-11.0.0~b1/heat/tests/openstack/neutron/test_neutron_security_group.py heat-11.0.0~b2/heat/tests/openstack/neutron/test_neutron_security_group.py --- heat-11.0.0~b1/heat/tests/openstack/neutron/test_neutron_security_group.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/openstack/neutron/test_neutron_security_group.py 2018-06-07 22:12:34.000000000 +0000 @@ 
-11,7 +11,7 @@ # License for the specific language governing permissions and limitations # under the License. -import mox +import mock from neutronclient.common import exceptions as neutron_exc from neutronclient.neutron import v2_0 as neutronV20 @@ -86,17 +86,16 @@ def setUp(self): super(SecurityGroupTest, self).setUp() - self.m.StubOutWithMock(neutronclient.Client, 'create_security_group') - self.m.StubOutWithMock( - neutronclient.Client, 'create_security_group_rule') - self.m.StubOutWithMock(neutronclient.Client, 'show_security_group') - self.m.StubOutWithMock( - neutronclient.Client, 'delete_security_group_rule') - self.m.StubOutWithMock(neutronclient.Client, 'delete_security_group') - self.m.StubOutWithMock(neutronclient.Client, 'update_security_group') + self.mockclient = mock.Mock(spec=neutronclient.Client) + self.patchobject(neutronclient, 'Client', return_value=self.mockclient) + + def lookup(client, lookup_type, name, cmd_resource): + return name + + self.patchobject(neutronV20, 'find_resourceid_by_name_or_id', + side_effect=lookup) self.patchobject(neutron.NeutronClientPlugin, 'has_extension', return_value=True) - self.m.StubOutWithMock(neutronV20, 'find_resourceid_by_name_or_id') def create_stack(self, templ): t = template_format.parse(templ) @@ -196,24 +195,7 @@ # create script sg_name = utils.PhysName('test_stack', 'the_sg') - neutronV20.find_resourceid_by_name_or_id( - mox.IsA(neutronclient.Client), - 'security_group', - 'wwww', - cmd_resource=None, - ).MultipleTimes().AndReturn('wwww') - neutronV20.find_resourceid_by_name_or_id( - mox.IsA(neutronclient.Client), - 'security_group', - 'xxxx', - cmd_resource=None, - ).MultipleTimes().AndReturn('xxxx') - neutronclient.Client.create_security_group({ - 'security_group': { - 'name': sg_name, - 'description': 'HTTP and SSH access' - } - }).AndReturn({ + self.mockclient.create_security_group.return_value = { 'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'name': sg_name, @@ -243,195 
+225,176 @@ }], 'id': 'aaaa' } - }) + } - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': None, - 'remote_ip_prefix': '0.0.0.0/0', - 'port_range_min': '22', - 'ethertype': 'IPv4', - 'port_range_max': '22', - 'protocol': 'tcp', - 'security_group_id': 'aaaa' - } - }).AndReturn({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': None, - 'remote_ip_prefix': '0.0.0.0/0', - 'port_range_min': '22', - 'ethertype': 'IPv4', - 'port_range_max': '22', - 'protocol': 'tcp', - 'security_group_id': 'aaaa', - 'id': 'bbbb' - } - }) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': None, - 'remote_ip_prefix': '0.0.0.0/0', - 'port_range_min': '80', - 'ethertype': 'IPv4', - 'port_range_max': '80', - 'protocol': 'tcp', - 'security_group_id': 'aaaa' - } - }).AndReturn({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': None, - 'remote_ip_prefix': '0.0.0.0/0', - 'port_range_min': '80', - 'ethertype': 'IPv4', - 'port_range_max': '80', - 'protocol': 'tcp', - 'security_group_id': 'aaaa', - 'id': 'cccc' - } - }) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': 'wwww', - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv4', - 'port_range_max': None, - 'protocol': 'tcp', - 'security_group_id': 'aaaa' - } - }).AndReturn({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': 'wwww', - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv4', - 'port_range_max': None, - 'protocol': 'tcp', - 'security_group_id': 'aaaa', - 'id': 'dddd' - } - }) - neutronclient.Client.show_security_group('aaaa').AndReturn({ - 'security_group': { - 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'name': sg_name, - 'description': 'HTTP and SSH access', - 'security_group_rules': [{ - 
"direction": "egress", - "ethertype": "IPv4", - "id": "aaaa-1", - "port_range_max": None, - "port_range_min": None, - "protocol": None, - "remote_group_id": None, - "remote_ip_prefix": None, - "security_group_id": "aaaa", - "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88" - }, { - "direction": "egress", - "ethertype": "IPv6", - "id": "aaaa-2", - "port_range_max": None, - "port_range_min": None, - "protocol": None, - "remote_group_id": None, - "remote_ip_prefix": None, - "security_group_id": "aaaa", - "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88" - }], - 'id': 'aaaa' - } - }) - neutronclient.Client.delete_security_group_rule('aaaa-1').AndReturn( - None) - neutronclient.Client.delete_security_group_rule('aaaa-2').AndReturn( - None) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'egress', - 'remote_group_id': None, - 'remote_ip_prefix': '10.0.1.0/24', - 'port_range_min': '22', - 'ethertype': 'IPv4', - 'port_range_max': '22', - 'protocol': 'tcp', - 'security_group_id': 'aaaa' - } - }).AndReturn({ - 'security_group_rule': { - 'direction': 'egress', - 'remote_group_id': None, - 'remote_ip_prefix': '10.0.1.0/24', - 'port_range_min': '22', - 'ethertype': 'IPv4', - 'port_range_max': '22', - 'protocol': 'tcp', - 'security_group_id': 'aaaa', - 'id': 'eeee' - } - }) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'egress', - 'remote_group_id': 'xxxx', - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv4', - 'port_range_max': None, - 'protocol': None, - 'security_group_id': 'aaaa' - } - }).AndReturn({ - 'security_group_rule': { - 'direction': 'egress', - 'remote_group_id': 'xxxx', - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv4', - 'port_range_max': None, - 'protocol': None, - 'security_group_id': 'aaaa', - 'id': 'ffff' - } - }) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'egress', - 
'remote_group_id': 'aaaa', - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv4', - 'port_range_max': None, - 'protocol': None, - 'security_group_id': 'aaaa' - } - }).AndReturn({ - 'security_group_rule': { - 'direction': 'egress', - 'remote_group_id': 'aaaa', - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv4', - 'port_range_max': None, - 'protocol': None, - 'security_group_id': 'aaaa', - 'id': 'gggg' - } - }) + self.mockclient.create_security_group_rule.side_effect = [ + { + 'security_group_rule': { + 'direction': 'ingress', + 'remote_group_id': None, + 'remote_ip_prefix': '0.0.0.0/0', + 'port_range_min': '22', + 'ethertype': 'IPv4', + 'port_range_max': '22', + 'protocol': 'tcp', + 'security_group_id': 'aaaa', + 'id': 'bbbb' + } + }, + { + 'security_group_rule': { + 'direction': 'ingress', + 'remote_group_id': None, + 'remote_ip_prefix': '0.0.0.0/0', + 'port_range_min': '80', + 'ethertype': 'IPv4', + 'port_range_max': '80', + 'protocol': 'tcp', + 'security_group_id': 'aaaa', + 'id': 'cccc' + } + }, + { + 'security_group_rule': { + 'direction': 'ingress', + 'remote_group_id': 'wwww', + 'remote_ip_prefix': None, + 'port_range_min': None, + 'ethertype': 'IPv4', + 'port_range_max': None, + 'protocol': 'tcp', + 'security_group_id': 'aaaa', + 'id': 'dddd' + } + }, + { + 'security_group_rule': { + 'direction': 'egress', + 'remote_group_id': None, + 'remote_ip_prefix': '10.0.1.0/24', + 'port_range_min': '22', + 'ethertype': 'IPv4', + 'port_range_max': '22', + 'protocol': 'tcp', + 'security_group_id': 'aaaa', + 'id': 'eeee' + } + }, + { + 'security_group_rule': { + 'direction': 'egress', + 'remote_group_id': 'xxxx', + 'remote_ip_prefix': None, + 'port_range_min': None, + 'ethertype': 'IPv4', + 'port_range_max': None, + 'protocol': None, + 'security_group_id': 'aaaa', + 'id': 'ffff' + } + }, + { + 'security_group_rule': { + 'direction': 'egress', + 'remote_group_id': 'aaaa', + 'remote_ip_prefix': None, + 'port_range_min': None, 
+ 'ethertype': 'IPv4', + 'port_range_max': None, + 'protocol': None, + 'security_group_id': 'aaaa', + 'id': 'gggg' + } + }, + { + 'security_group_rule': { + 'direction': 'egress', + 'remote_group_id': None, + 'remote_ip_prefix': None, + 'port_range_min': None, + 'ethertype': 'IPv4', + 'port_range_max': None, + 'protocol': None, + 'security_group_id': 'aaaa', + 'id': 'hhhh' + } + }, + { + 'security_group_rule': { + 'direction': 'egress', + 'remote_group_id': None, + 'remote_ip_prefix': None, + 'port_range_min': None, + 'ethertype': 'IPv6', + 'port_range_max': None, + 'protocol': None, + 'security_group_id': 'aaaa', + 'id': 'iiii' + } + }, + { + 'security_group_rule': { + 'direction': 'ingress', + 'remote_group_id': None, + 'remote_ip_prefix': '10.0.0.10/24', + 'port_range_min': '22', + 'ethertype': 'IPv4', + 'port_range_max': '22', + 'protocol': 'tcp', + 'security_group_id': 'aaaa', + 'id': 'jjjj' + } + }, + ] + + self.mockclient.show_security_group.side_effect = [ + { + 'security_group': { + 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', + 'name': sg_name, + 'description': 'HTTP and SSH access', + 'security_group_rules': [{ + "direction": "egress", + "ethertype": "IPv4", + "id": "aaaa-1", + "port_range_max": None, + "port_range_min": None, + "protocol": None, + "remote_group_id": None, + "remote_ip_prefix": None, + "security_group_id": "aaaa", + "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88" + }, { + "direction": "egress", + "ethertype": "IPv6", + "id": "aaaa-2", + "port_range_max": None, + "port_range_min": None, + "protocol": None, + "remote_group_id": None, + "remote_ip_prefix": None, + "security_group_id": "aaaa", + "tenant_id": "f18ca530cc05425e8bac0a5ff92f7e88" + }], + 'id': 'aaaa' + } + }, + show_created, + { + 'security_group': { + 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', + 'name': 'sc1', + 'description': '', + 'security_group_rules': [], + 'id': 'aaaa' + } + }, + show_created, + ] + self.mockclient.delete_security_group_rule.return_value = None 
# update script - neutronclient.Client.update_security_group( - 'aaaa', - {'security_group': { - 'description': 'SSH access for private network', - 'name': 'myrules'}} - ).AndReturn({ + self.mockclient.update_security_group.return_value = { 'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', 'name': 'myrules', @@ -439,102 +402,11 @@ 'security_group_rules': [], 'id': 'aaaa' } - }) - - neutronclient.Client.show_security_group('aaaa').AndReturn( - show_created) - neutronclient.Client.delete_security_group_rule('bbbb').AndReturn(None) - neutronclient.Client.delete_security_group_rule('cccc').AndReturn(None) - neutronclient.Client.delete_security_group_rule('dddd').AndReturn(None) - neutronclient.Client.delete_security_group_rule('eeee').AndReturn(None) - neutronclient.Client.delete_security_group_rule('ffff').AndReturn(None) - neutronclient.Client.delete_security_group_rule('gggg').AndReturn(None) - - neutronclient.Client.show_security_group('aaaa').AndReturn({ - 'security_group': { - 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'name': 'sc1', - 'description': '', - 'security_group_rules': [], - 'id': 'aaaa' - } - }) - - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'egress', - 'ethertype': 'IPv4', - 'security_group_id': 'aaaa', - } - }).AndReturn({ - 'security_group_rule': { - 'direction': 'egress', - 'remote_group_id': None, - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv4', - 'port_range_max': None, - 'protocol': None, - 'security_group_id': 'aaaa', - 'id': 'hhhh' - } - }) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'egress', - 'ethertype': 'IPv6', - 'security_group_id': 'aaaa', - } - }).AndReturn({ - 'security_group_rule': { - 'direction': 'egress', - 'remote_group_id': None, - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv6', - 'port_range_max': None, - 'protocol': None, - 'security_group_id': 
'aaaa', - 'id': 'iiii' - } - }) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': None, - 'remote_ip_prefix': '10.0.0.10/24', - 'port_range_min': '22', - 'ethertype': 'IPv4', - 'port_range_max': '22', - 'protocol': 'tcp', - 'security_group_id': 'aaaa' - } - }).AndReturn({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': None, - 'remote_ip_prefix': '10.0.0.10/24', - 'port_range_min': '22', - 'ethertype': 'IPv4', - 'port_range_max': '22', - 'protocol': 'tcp', - 'security_group_id': 'aaaa', - 'id': 'jjjj' - } - }) + } # delete script - neutronclient.Client.show_security_group('aaaa').AndReturn( - show_created) - neutronclient.Client.delete_security_group_rule('bbbb').AndReturn(None) - neutronclient.Client.delete_security_group_rule('cccc').AndReturn(None) - neutronclient.Client.delete_security_group_rule('dddd').AndReturn(None) - neutronclient.Client.delete_security_group_rule('eeee').AndReturn(None) - neutronclient.Client.delete_security_group_rule('ffff').AndReturn(None) - neutronclient.Client.delete_security_group_rule('gggg').AndReturn(None) - neutronclient.Client.delete_security_group('aaaa').AndReturn(None) + self.mockclient.delete_security_group.return_value = None - self.m.ReplayAll() stack = self.create_stack(self.test_template) sg = stack['the_sg'] @@ -545,219 +417,255 @@ stack.update(updated_stack) stack.delete() - self.m.VerifyAll() - def test_security_group_exception(self): - # create script - sg_name = utils.PhysName('test_stack', 'the_sg') - neutronV20.find_resourceid_by_name_or_id( - mox.IsA(neutronclient.Client), - 'security_group', - 'wwww', - cmd_resource=None, - ).MultipleTimes().AndReturn('wwww') - neutronV20.find_resourceid_by_name_or_id( - mox.IsA(neutronclient.Client), - 'security_group', - 'xxxx', - cmd_resource=None, - ).MultipleTimes().AndReturn('xxxx') - neutronclient.Client.create_security_group({ + 
self.mockclient.create_security_group.assert_called_once_with({ 'security_group': { 'name': sg_name, 'description': 'HTTP and SSH access' } - }).AndReturn({ - 'security_group': { - 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'name': sg_name, - 'description': 'HTTP and SSH access', - 'security_group_rules': [], - 'id': 'aaaa' - } }) - - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': None, - 'remote_ip_prefix': '0.0.0.0/0', - 'port_range_min': '22', - 'ethertype': 'IPv4', - 'port_range_max': '22', - 'protocol': 'tcp', - 'security_group_id': 'aaaa' - } - }).AndRaise( - neutron_exc.Conflict()) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': None, - 'remote_ip_prefix': '0.0.0.0/0', - 'port_range_min': '80', - 'ethertype': 'IPv4', - 'port_range_max': '80', - 'protocol': 'tcp', - 'security_group_id': 'aaaa' - } - }).AndRaise( - neutron_exc.Conflict()) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'ingress', - 'remote_group_id': 'wwww', - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv4', - 'port_range_max': None, - 'protocol': 'tcp', - 'security_group_id': 'aaaa' - } - }).AndRaise( - neutron_exc.Conflict()) - neutronclient.Client.show_security_group('aaaa').AndReturn({ - 'security_group': { - 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'name': sg_name, - 'description': 'HTTP and SSH access', - 'security_group_rules': [], - 'id': 'aaaa' - } - }) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'egress', - 'remote_group_id': None, - 'remote_ip_prefix': '10.0.1.0/24', - 'port_range_min': '22', - 'ethertype': 'IPv4', - 'port_range_max': '22', - 'protocol': 'tcp', - 'security_group_id': 'aaaa' - } - }).AndRaise( - neutron_exc.Conflict()) - neutronclient.Client.create_security_group_rule({ - 
'security_group_rule': { - 'direction': 'egress', - 'remote_group_id': 'xxxx', - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv4', - 'port_range_max': None, - 'protocol': None, - 'security_group_id': 'aaaa' - } - }).AndRaise( - neutron_exc.Conflict()) - neutronclient.Client.create_security_group_rule({ - 'security_group_rule': { - 'direction': 'egress', - 'remote_group_id': 'aaaa', - 'remote_ip_prefix': None, - 'port_range_min': None, - 'ethertype': 'IPv4', - 'port_range_max': None, - 'protocol': None, - 'security_group_id': 'aaaa' - } - }).AndRaise( - neutron_exc.Conflict()) - - # delete script - neutronclient.Client.show_security_group('aaaa').AndReturn({ - 'security_group': { - 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'name': 'sc1', - 'description': '', - 'security_group_rules': [{ + self.mockclient.create_security_group_rule.assert_has_calls([ + mock.call({ + 'security_group_rule': { 'direction': 'ingress', - 'protocol': 'tcp', - 'port_range_max': '22', - 'id': 'bbbb', - 'ethertype': 'IPv4', - 'security_group_id': 'aaaa', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', - 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'port_range_min': '22' - }, { - 'direction': 'ingress', - 'protocol': 'tcp', - 'port_range_max': '80', - 'id': 'cccc', + 'port_range_min': '22', 'ethertype': 'IPv4', - 'security_group_id': 'aaaa', + 'port_range_max': '22', + 'protocol': 'tcp', + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 'security_group_rule': { + 'direction': 'ingress', 'remote_group_id': None, 'remote_ip_prefix': '0.0.0.0/0', - 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'port_range_min': '80' - }, { - 'direction': 'ingress', - 'protocol': 'tcp', - 'port_range_max': None, - 'id': 'dddd', + 'port_range_min': '80', 'ethertype': 'IPv4', - 'security_group_id': 'aaaa', + 'port_range_max': '80', + 'protocol': 'tcp', + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 'security_group_rule': { + 'direction': 
'ingress', 'remote_group_id': 'wwww', 'remote_ip_prefix': None, - 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'port_range_min': None - }, { - 'direction': 'egress', - 'protocol': 'tcp', - 'port_range_max': '22', - 'id': 'eeee', + 'port_range_min': None, 'ethertype': 'IPv4', - 'security_group_id': 'aaaa', + 'port_range_max': None, + 'protocol': 'tcp', + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 'security_group_rule': { + 'direction': 'egress', 'remote_group_id': None, 'remote_ip_prefix': '10.0.1.0/24', - 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'port_range_min': '22' - }, { + 'port_range_min': '22', + 'ethertype': 'IPv4', + 'port_range_max': '22', + 'protocol': 'tcp', + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 'security_group_rule': { 'direction': 'egress', + 'remote_group_id': 'xxxx', + 'remote_ip_prefix': None, + 'port_range_min': None, + 'ethertype': 'IPv4', + 'port_range_max': None, 'protocol': None, + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 'security_group_rule': { + 'direction': 'egress', + 'remote_group_id': 'aaaa', + 'remote_ip_prefix': None, + 'port_range_min': None, + 'ethertype': 'IPv4', 'port_range_max': None, - 'id': 'ffff', + 'protocol': None, + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 'security_group_rule': { + 'direction': 'egress', 'ethertype': 'IPv4', 'security_group_id': 'aaaa', - 'remote_group_id': None, - 'remote_ip_prefix': 'xxxx', - 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'port_range_min': None - }, { + } + }), + mock.call({ + 'security_group_rule': { 'direction': 'egress', - 'protocol': None, - 'port_range_max': None, - 'id': 'gggg', - 'ethertype': 'IPv4', + 'ethertype': 'IPv6', 'security_group_id': 'aaaa', + } + }), + mock.call({ + 'security_group_rule': { + 'direction': 'ingress', 'remote_group_id': None, - 'remote_ip_prefix': 'aaaa', + 'remote_ip_prefix': '10.0.0.10/24', + 'port_range_min': '22', + 'ethertype': 'IPv4', + 'port_range_max': '22', + 'protocol': 
'tcp', + 'security_group_id': 'aaaa' + } + }), + ]) + self.mockclient.show_security_group.assert_called_with('aaaa') + self.mockclient.delete_security_group_rule.assert_has_calls([ + mock.call('aaaa-1'), + mock.call('aaaa-2'), + # update script + mock.call('bbbb'), + mock.call('cccc'), + mock.call('dddd'), + mock.call('eeee'), + mock.call('ffff'), + mock.call('gggg'), + # delete script + mock.call('bbbb'), + mock.call('cccc'), + mock.call('dddd'), + mock.call('eeee'), + mock.call('ffff'), + mock.call('gggg'), + ]) + self.mockclient.update_security_group.assert_called_once_with( + 'aaaa', + {'security_group': { + 'description': 'SSH access for private network', + 'name': 'myrules'}} + ) + self.mockclient.delete_security_group.assert_called_once_with('aaaa') + + def test_security_group_exception(self): + # create script + sg_name = utils.PhysName('test_stack', 'the_sg') + self.mockclient.create_security_group.return_value = { + 'security_group': { + 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', + 'name': sg_name, + 'description': 'HTTP and SSH access', + 'security_group_rules': [], + 'id': 'aaaa' + } + } + self.mockclient.create_security_group_rule.side_effect = [ + neutron_exc.Conflict, + neutron_exc.Conflict, + neutron_exc.Conflict, + neutron_exc.Conflict, + neutron_exc.Conflict, + neutron_exc.Conflict, + ] + + self.mockclient.show_security_group.side_effect = [ + { + 'security_group': { 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', - 'port_range_min': None - }], - 'id': 'aaaa'}}) - neutronclient.Client.delete_security_group_rule('bbbb').AndRaise( - neutron_exc.NeutronClientException(status_code=404)) - neutronclient.Client.delete_security_group_rule('cccc').AndRaise( - neutron_exc.NeutronClientException(status_code=404)) - neutronclient.Client.delete_security_group_rule('dddd').AndRaise( - neutron_exc.NeutronClientException(status_code=404)) - neutronclient.Client.delete_security_group_rule('eeee').AndRaise( - 
neutron_exc.NeutronClientException(status_code=404)) - neutronclient.Client.delete_security_group_rule('ffff').AndRaise( - neutron_exc.NeutronClientException(status_code=404)) - neutronclient.Client.delete_security_group_rule('gggg').AndRaise( - neutron_exc.NeutronClientException(status_code=404)) - neutronclient.Client.delete_security_group('aaaa').AndRaise( - neutron_exc.NeutronClientException(status_code=404)) + 'name': sg_name, + 'description': 'HTTP and SSH access', + 'security_group_rules': [], + 'id': 'aaaa' + } + }, + # delete script + { + 'security_group': { + 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', + 'name': 'sc1', + 'description': '', + 'security_group_rules': [{ + 'direction': 'ingress', + 'protocol': 'tcp', + 'port_range_max': '22', + 'id': 'bbbb', + 'ethertype': 'IPv4', + 'security_group_id': 'aaaa', + 'remote_group_id': None, + 'remote_ip_prefix': '0.0.0.0/0', + 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', + 'port_range_min': '22' + }, { + 'direction': 'ingress', + 'protocol': 'tcp', + 'port_range_max': '80', + 'id': 'cccc', + 'ethertype': 'IPv4', + 'security_group_id': 'aaaa', + 'remote_group_id': None, + 'remote_ip_prefix': '0.0.0.0/0', + 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', + 'port_range_min': '80' + }, { + 'direction': 'ingress', + 'protocol': 'tcp', + 'port_range_max': None, + 'id': 'dddd', + 'ethertype': 'IPv4', + 'security_group_id': 'aaaa', + 'remote_group_id': 'wwww', + 'remote_ip_prefix': None, + 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', + 'port_range_min': None + }, { + 'direction': 'egress', + 'protocol': 'tcp', + 'port_range_max': '22', + 'id': 'eeee', + 'ethertype': 'IPv4', + 'security_group_id': 'aaaa', + 'remote_group_id': None, + 'remote_ip_prefix': '10.0.1.0/24', + 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', + 'port_range_min': '22' + }, { + 'direction': 'egress', + 'protocol': None, + 'port_range_max': None, + 'id': 'ffff', + 'ethertype': 'IPv4', + 'security_group_id': 'aaaa', + 
'remote_group_id': None, + 'remote_ip_prefix': 'xxxx', + 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', + 'port_range_min': None + }, { + 'direction': 'egress', + 'protocol': None, + 'port_range_max': None, + 'id': 'gggg', + 'ethertype': 'IPv4', + 'security_group_id': 'aaaa', + 'remote_group_id': None, + 'remote_ip_prefix': 'aaaa', + 'tenant_id': 'f18ca530cc05425e8bac0a5ff92f7e88', + 'port_range_min': None + }], + 'id': 'aaaa'} + }, + neutron_exc.NeutronClientException(status_code=404), + ] - neutronclient.Client.show_security_group('aaaa').AndRaise( + # delete script + self.mockclient.delete_security_group_rule.side_effect = ( + neutron_exc.NeutronClientException(status_code=404)) + self.mockclient.delete_security_group.side_effect = ( neutron_exc.NeutronClientException(status_code=404)) - self.m.ReplayAll() stack = self.create_stack(self.test_template) sg = stack['the_sg'] @@ -770,7 +678,96 @@ sg.resource_id = 'aaaa' stack.delete() - self.m.VerifyAll() + self.mockclient.create_security_group.assert_called_once_with({ + 'security_group': { + 'name': sg_name, + 'description': 'HTTP and SSH access' + } + }) + self.mockclient.create_security_group_rule.assert_has_calls([ + mock.call({ + 'security_group_rule': { + 'direction': 'ingress', + 'remote_group_id': None, + 'remote_ip_prefix': '0.0.0.0/0', + 'port_range_min': '22', + 'ethertype': 'IPv4', + 'port_range_max': '22', + 'protocol': 'tcp', + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 'security_group_rule': { + 'direction': 'ingress', + 'remote_group_id': None, + 'remote_ip_prefix': '0.0.0.0/0', + 'port_range_min': '80', + 'ethertype': 'IPv4', + 'port_range_max': '80', + 'protocol': 'tcp', + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 'security_group_rule': { + 'direction': 'ingress', + 'remote_group_id': 'wwww', + 'remote_ip_prefix': None, + 'port_range_min': None, + 'ethertype': 'IPv4', + 'port_range_max': None, + 'protocol': 'tcp', + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 
'security_group_rule': { + 'direction': 'egress', + 'remote_group_id': None, + 'remote_ip_prefix': '10.0.1.0/24', + 'port_range_min': '22', + 'ethertype': 'IPv4', + 'port_range_max': '22', + 'protocol': 'tcp', + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 'security_group_rule': { + 'direction': 'egress', + 'remote_group_id': 'xxxx', + 'remote_ip_prefix': None, + 'port_range_min': None, + 'ethertype': 'IPv4', + 'port_range_max': None, + 'protocol': None, + 'security_group_id': 'aaaa' + } + }), + mock.call({ + 'security_group_rule': { + 'direction': 'egress', + 'remote_group_id': 'aaaa', + 'remote_ip_prefix': None, + 'port_range_min': None, + 'ethertype': 'IPv4', + 'port_range_max': None, + 'protocol': None, + 'security_group_id': 'aaaa' + } + }), + ]) + self.mockclient.show_security_group.assert_called_with('aaaa') + self.mockclient.delete_security_group_rule.assert_has_calls([ + mock.call('bbbb'), + mock.call('cccc'), + mock.call('dddd'), + mock.call('eeee'), + mock.call('ffff'), + mock.call('gggg'), + ]) + self.mockclient.delete_security_group.assert_called_with('aaaa') def test_security_group_validate(self): stack = self.create_stack(self.test_template_validate) diff -Nru heat-11.0.0~b1/heat/tests/openstack/nova/fakes.py heat-11.0.0~b2/heat/tests/openstack/nova/fakes.py --- heat-11.0.0~b1/heat/tests/openstack/nova/fakes.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/openstack/nova/fakes.py 2018-06-07 22:12:28.000000000 +0000 @@ -87,6 +87,9 @@ # Servers # def get_servers_detail(self, **kw): + if kw.get('marker') == '56789': + return (200, {"servers": []}) + return ( 200, {"servers": [{"id": "1234", diff -Nru heat-11.0.0~b1/heat/tests/openstack/nova/test_server.py heat-11.0.0~b2/heat/tests/openstack/nova/test_server.py --- heat-11.0.0~b1/heat/tests/openstack/nova/test_server.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/openstack/nova/test_server.py 2018-06-07 22:12:28.000000000 +0000 @@ -19,6 +19,7 @@ from 
keystoneauth1 import exceptions as ks_exceptions from neutronclient.v2_0 import client as neutronclient from novaclient import exceptions as nova_exceptions +from novaclient.v2 import client as novaclient from oslo_serialization import jsonutils from oslo_utils import uuidutils import requests @@ -4426,6 +4427,10 @@ self.port_show = self.patchobject(neutronclient.Client, 'show_port') + self.server_get = self.patchobject(novaclient.servers.ServerManager, + 'get') + self.server_get.return_value = self.fc.servers.list()[1] + def neutron_side_effect(*args): if args[0] == 'subnet': return '1234' @@ -4905,10 +4910,14 @@ t, stack, server = self._return_template_stack_and_rsrc_defn( 'test', tmpl_server_with_network_id) server.resource_id = 'test_server' - port_ids = [{'id': 1122}, {'id': 3344}] - external_port_ids = [{'id': 5566}] + port_ids = [{'id': '1122'}, {'id': '3344'}] + external_port_ids = [{'id': '5566'}] server._data = {"internal_ports": jsonutils.dumps(port_ids), "external_ports": jsonutils.dumps(external_port_ids)} + nova_server = self.fc.servers.list()[1] + server.client = mock.Mock() + server.client().servers.get.return_value = nova_server + self.patchobject(nova.NovaClientPlugin, 'interface_detach', return_value=True) self.patchobject(nova.NovaClientPlugin, 'check_interface_detach', @@ -4918,25 +4927,61 @@ # check, that the ports were detached from server nova.NovaClientPlugin.interface_detach.assert_has_calls([ - mock.call('test_server', 1122), - mock.call('test_server', 3344), - mock.call('test_server', 5566)]) + mock.call('test_server', '1122'), + mock.call('test_server', '3344'), + mock.call('test_server', '5566')]) def test_prepare_ports_for_replace_not_found(self): t, stack, server = self._return_template_stack_and_rsrc_defn( 'test', tmpl_server_with_network_id) server.resource_id = 'test_server' - port_ids = [{'id': 1122}, {'id': 3344}] - external_port_ids = [{'id': 5566}] + port_ids = [{'id': '1122'}, {'id': '3344'}] + external_port_ids = [{'id': 
'5566'}] server._data = {"internal_ports": jsonutils.dumps(port_ids), "external_ports": jsonutils.dumps(external_port_ids)} self.patchobject(nova.NovaClientPlugin, 'fetch_server', side_effect=nova_exceptions.NotFound(404)) check_detach = self.patchobject(nova.NovaClientPlugin, 'check_interface_detach') + nova_server = self.fc.servers.list()[1] + nova_server.status = 'DELETED' + self.server_get.return_value = nova_server server.prepare_for_replace() check_detach.assert_not_called() + self.assertEqual(0, self.port_delete.call_count) + + def test_prepare_ports_for_replace_error_state(self): + t, stack, server = self._return_template_stack_and_rsrc_defn( + 'test', tmpl_server_with_network_id) + server.resource_id = 'test_server' + port_ids = [{'id': '1122'}, {'id': '3344'}] + external_port_ids = [{'id': '5566'}] + server._data = {"internal_ports": jsonutils.dumps(port_ids), + "external_ports": jsonutils.dumps(external_port_ids)} + + nova_server = self.fc.servers.list()[1] + nova_server.status = 'ERROR' + self.server_get.return_value = nova_server + + self.patchobject(nova.NovaClientPlugin, 'interface_detach', + return_value=True) + self.patchobject(nova.NovaClientPlugin, 'check_interface_detach', + return_value=True) + data_set = self.patchobject(server, 'data_set') + data_delete = self.patchobject(server, 'data_delete') + + server.prepare_for_replace() + + # check, that the internal ports were deleted + self.assertEqual(2, self.port_delete.call_count) + self.assertEqual(('1122',), self.port_delete.call_args_list[0][0]) + self.assertEqual(('3344',), self.port_delete.call_args_list[1][0]) + data_set.assert_has_calls(( + mock.call('internal_ports', + '[{"id": "3344"}]'), + mock.call('internal_ports', '[{"id": "1122"}]'))) + data_delete.assert_called_once_with('internal_ports') def test_prepare_ports_for_replace_not_created(self): t, stack, server = self._return_template_stack_and_rsrc_defn( diff -Nru heat-11.0.0~b1/heat/tests/test_convg_stack.py 
heat-11.0.0~b2/heat/tests/test_convg_stack.py --- heat-11.0.0~b1/heat/tests/test_convg_stack.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/test_convg_stack.py 2018-06-07 22:12:28.000000000 +0000 @@ -800,7 +800,7 @@ res = mock.MagicMock() res.id = 6 res.name = 'E' - res.requires = [3] + res.requires = {3} res.replaces = 1 res.current_template_id = 2 db_resources[6] = res diff -Nru heat-11.0.0~b1/heat/tests/test_engine_service.py heat-11.0.0~b2/heat/tests/test_engine_service.py --- heat-11.0.0~b1/heat/tests/test_engine_service.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/test_engine_service.py 2018-06-07 22:12:28.000000000 +0000 @@ -14,7 +14,6 @@ import uuid import mock -import mox from oslo_config import cfg from oslo_messaging.rpc import dispatcher from oslo_serialization import jsonutils as json @@ -117,14 +116,14 @@ class StackCreateTest(common.HeatTestCase): def test_wordpress_single_instance_stack_create(self): stack = tools.get_stack('test_stack', utils.dummy_context()) - tools.setup_mocks(self.m, stack) - self.m.ReplayAll() + fc = tools.setup_mocks_with_mock(self, stack) stack.store() stack.create() self.assertIsNotNone(stack['WebServer']) self.assertGreater(int(stack['WebServer'].resource_id), 0) self.assertNotEqual(stack['WebServer'].ipaddress, '0.0.0.0') + tools.validate_setup_mocks_with_mock(stack, fc) def test_wordpress_single_instance_stack_adopt(self): t = template_format.parse(tools.wp_template) @@ -142,14 +141,16 @@ template, adopt_stack_data=adopt_data) - tools.setup_mocks(self.m, stack) - self.m.ReplayAll() + fc = tools.setup_mocks_with_mock(self, stack, + mock_image_constraint=False) stack.store() stack.adopt() self.assertIsNotNone(stack['WebServer']) self.assertEqual('test-res-id', stack['WebServer'].resource_id) self.assertEqual((stack.ADOPT, stack.COMPLETE), stack.state) + tools.validate_setup_mocks_with_mock( + stack, fc, mock_image_constraint=False, validate_create=False) def 
test_wordpress_single_instance_stack_adopt_fail(self): t = template_format.parse(tools.wp_template) @@ -167,8 +168,8 @@ template, adopt_stack_data=adopt_data) - tools.setup_mocks(self.m, stack) - self.m.ReplayAll() + fc = tools.setup_mocks_with_mock(self, stack, + mock_image_constraint=False) stack.store() stack.adopt() self.assertIsNotNone(stack['WebServer']) @@ -176,12 +177,13 @@ 'Resource ID was not provided.') self.assertEqual(expected, stack.status_reason) self.assertEqual((stack.ADOPT, stack.FAILED), stack.state) + tools.validate_setup_mocks_with_mock( + stack, fc, mock_image_constraint=False, validate_create=False) def test_wordpress_single_instance_stack_delete(self): ctx = utils.dummy_context() stack = tools.get_stack('test_stack', ctx) - fc = tools.setup_mocks(self.m, stack, mock_keystone=False) - self.m.ReplayAll() + fc = tools.setup_mocks_with_mock(self, stack, mock_keystone=False) stack_id = stack.store() stack.create() @@ -202,7 +204,8 @@ db_s.refresh() self.assertEqual('DELETE', db_s.action) - self.assertEqual('COMPLETE', db_s.status, ) + self.assertEqual('COMPLETE', db_s.status) + tools.validate_setup_mocks_with_mock(stack, fc) class StackConvergenceServiceCreateUpdateTest(common.HeatTestCase): @@ -214,14 +217,12 @@ self.man = service.EngineService('a-host', 'a-topic') self.man.thread_group_mgr = tools.DummyThreadGroupManager() - def _stub_update_mocks(self, stack_to_load, stack_to_return): - self.m.StubOutWithMock(parser, 'Stack') - self.m.StubOutWithMock(parser.Stack, 'load') - parser.Stack.load(self.ctx, stack=stack_to_load - ).AndReturn(stack_to_return) + def _stub_update_mocks(self, stack_to_return): + self.patchobject(parser, 'Stack') + parser.Stack.load.return_value = stack_to_return - self.m.StubOutWithMock(templatem, 'Template') - self.m.StubOutWithMock(environment, 'Environment') + self.patchobject(templatem, 'Template') + self.patchobject(environment, 'Environment') def _test_stack_create_convergence(self, stack_name): params = {'foo': 
'bar'} @@ -232,32 +233,28 @@ convergence=True) stack.converge = None - self.m.StubOutWithMock(templatem, 'Template') - self.m.StubOutWithMock(environment, 'Environment') - self.m.StubOutWithMock(parser, 'Stack') - - templatem.Template(template, files=None).AndReturn(stack.t) - environment.Environment(params).AndReturn(stack.env) - parser.Stack(self.ctx, stack.name, - stack.t, owner_id=None, - parent_resource=None, - nested_depth=0, user_creds_id=None, - stack_user_project_id=None, - timeout_mins=60, - disable_rollback=False, - convergence=True).AndReturn(stack) + self.patchobject(templatem, 'Template', return_value=stack.t) + self.patchobject(environment, 'Environment', return_value=stack.env) + self.patchobject(parser, 'Stack', return_value=stack) + self.patchobject(stack, 'validate', return_value=None) - self.m.StubOutWithMock(stack, 'validate') - stack.validate().AndReturn(None) - - self.m.ReplayAll() api_args = {'timeout_mins': 60, 'disable_rollback': False} result = self.man.create_stack(self.ctx, 'service_create_test_stack', template, params, None, api_args) db_stack = stack_object.Stack.get_by_id(self.ctx, result['stack_id']) self.assertTrue(db_stack.convergence) self.assertEqual(result['stack_id'], db_stack.id) - self.m.VerifyAll() + templatem.Template.assert_called_once_with(template, files=None) + environment.Environment.assert_called_once_with(params) + parser.Stack.assert_called_once_with( + self.ctx, stack.name, + stack.t, owner_id=None, + parent_resource=None, + nested_depth=0, user_creds_id=None, + stack_user_project_id=None, + timeout_mins=60, + disable_rollback=False, + convergence=True) def test_stack_create_enabled_convergence_engine(self): stack_name = 'service_create_test_stack' @@ -271,39 +268,18 @@ template=tools.string_template_five, convergence=True) old_stack.timeout_mins = 1 - sid = old_stack.store() - s = stack_object.Stack.get_by_id(self.ctx, sid) - + old_stack.store() stack = tools.get_stack(stack_name, self.ctx, 
template=tools.string_template_five_update, convergence=True) - self._stub_update_mocks(s, old_stack) - - templatem.Template(template, files=None).AndReturn(stack.t) - environment.Environment(params).AndReturn(stack.env) - parser.Stack(self.ctx, stack.name, - stack.t, - owner_id=old_stack.owner_id, - nested_depth=old_stack.nested_depth, - user_creds_id=old_stack.user_creds_id, - stack_user_project_id=old_stack.stack_user_project_id, - timeout_mins=60, - disable_rollback=False, - parent_resource=None, - strict_validate=True, - tenant_id=old_stack.tenant_id, - username=old_stack.username, - convergence=old_stack.convergence, - current_traversal=old_stack.current_traversal, - prev_raw_template_id=old_stack.prev_raw_template_id, - current_deps=old_stack.current_deps, - converge=False).AndReturn(stack) + self._stub_update_mocks(old_stack) - self.m.StubOutWithMock(stack, 'validate') - stack.validate().AndReturn(None) + templatem.Template.return_value = stack.t + environment.Environment.return_value = stack.env + parser.Stack.return_value = stack - self.m.ReplayAll() + self.patchobject(stack, 'validate', return_value=None) api_args = {'timeout_mins': 60, 'disable_rollback': False, rpc_api.PARAM_CONVERGE: False} @@ -313,7 +289,10 @@ self.assertEqual(old_stack.identifier(), result) self.assertIsInstance(result, dict) self.assertTrue(result['stack_id']) - self.m.VerifyAll() + parser.Stack.load.assert_called_once_with( + self.ctx, stack=mock.ANY) + templatem.Template.assert_called_once_with(template, files=None) + environment.Environment.assert_called_once_with(params) class StackServiceAuthorizeTest(common.HeatTestCase): @@ -333,25 +312,19 @@ @tools.stack_context('service_authorize_user_attribute_error_test_stack') def test_stack_authorize_stack_user_attribute_error(self): - self.m.StubOutWithMock(json, 'loads') - json.loads(None).AndRaise(AttributeError) - self.m.ReplayAll() + self.patchobject(json, 'loads', side_effect=AttributeError) 
self.assertFalse(self.eng._authorize_stack_user(self.ctx, self.stack, 'foo')) - self.m.VerifyAll() + json.loads.assert_called_once_with(None) @tools.stack_context('service_authorize_stack_user_type_error_test_stack') def test_stack_authorize_stack_user_type_error(self): - self.m.StubOutWithMock(json, 'loads') - json.loads(mox.IgnoreArg()).AndRaise(TypeError) - self.m.ReplayAll() - + self.patchobject(json, 'loads', side_effect=TypeError) self.assertFalse(self.eng._authorize_stack_user(self.ctx, self.stack, 'foo')) - - self.m.VerifyAll() + json.loads.assert_called_once_with(None) def test_stack_authorize_stack_user(self): self.ctx = utils.dummy_context() @@ -359,11 +332,10 @@ stack_name = 'stack_authorize_stack_user' stack = tools.get_stack(stack_name, self.ctx, user_policy_template) self.stack = stack - fc = tools.setup_mocks(self.m, stack) + fc = tools.setup_mocks_with_mock(self, stack) self.patchobject(fc.servers, 'delete', side_effect=fakes_nova.fake_exception()) - self.m.ReplayAll() stack.store() stack.create() @@ -375,8 +347,7 @@ self.assertFalse(self.eng._authorize_stack_user( self.ctx, self.stack, 'NoSuchResource')) - - self.m.VerifyAll() + tools.validate_setup_mocks_with_mock(stack, fc) def test_stack_authorize_stack_user_user_id(self): self.ctx = utils.dummy_context(user_id=str(uuid.uuid4())) @@ -715,9 +686,7 @@ @tools.stack_context('service_export_stack') def test_export_stack(self): cfg.CONF.set_override('enable_stack_abandon', True) - self.m.StubOutWithMock(parser.Stack, 'load') - parser.Stack.load(self.ctx, - stack=mox.IgnoreArg()).AndReturn(self.stack) + self.patchobject(parser.Stack, 'load', return_value=self.stack) expected_res = { u'WebServer': { 'action': 'CREATE', @@ -728,7 +697,6 @@ 'status': 'COMPLETE', 'type': u'AWS::EC2::Instance'}} self.stack.tags = ['tag1', 'tag2'] - self.m.ReplayAll() ret = self.eng.export_stack(self.ctx, self.stack.identifier()) self.assertEqual(11, len(ret)) self.assertEqual('CREATE', ret['action']) @@ -743,22 +711,17 @@ 
self.assertIn('environment', ret) self.assertIn('files', ret) self.assertEqual(['tag1', 'tag2'], ret['tags']) - self.m.VerifyAll() @tools.stack_context('service_abandon_stack') def test_abandon_stack(self): cfg.CONF.set_override('enable_stack_abandon', True) - self.m.StubOutWithMock(parser.Stack, 'load') - parser.Stack.load(self.ctx, - stack=mox.IgnoreArg()).AndReturn(self.stack) - self.m.ReplayAll() + self.patchobject(parser.Stack, 'load', return_value=self.stack) self.eng.abandon_stack(self.ctx, self.stack.identifier()) ex = self.assertRaises(dispatcher.ExpectedException, self.eng.show_stack, self.ctx, self.stack.identifier(), resolve_outputs=True) self.assertEqual(exception.EntityNotFound, ex.exc_info[0]) - self.m.VerifyAll() def test_stack_describe_nonexistent(self): non_exist_identifier = identifier.HeatIdentifier( @@ -767,18 +730,17 @@ stack_not_found_exc = exception.EntityNotFound( entity='Stack', name='test') - self.m.StubOutWithMock(service.EngineService, '_get_stack') - service.EngineService._get_stack( - self.ctx, non_exist_identifier, - show_deleted=True).AndRaise(stack_not_found_exc) - self.m.ReplayAll() + self.patchobject(service.EngineService, '_get_stack', + side_effect=stack_not_found_exc) ex = self.assertRaises(dispatcher.ExpectedException, self.eng.show_stack, self.ctx, non_exist_identifier, resolve_outputs=True) self.assertEqual(exception.EntityNotFound, ex.exc_info[0]) - self.m.VerifyAll() + service.EngineService._get_stack.assert_called_once_with( + self.ctx, non_exist_identifier, + show_deleted=True) def test_stack_describe_bad_tenant(self): non_exist_identifier = identifier.HeatIdentifier( @@ -787,28 +749,22 @@ invalid_tenant_exc = exception.InvalidTenant(target='test', actual='test') - self.m.StubOutWithMock(service.EngineService, '_get_stack') - service.EngineService._get_stack( - self.ctx, non_exist_identifier, - show_deleted=True).AndRaise(invalid_tenant_exc) - self.m.ReplayAll() + self.patchobject(service.EngineService, '_get_stack', + 
side_effect=invalid_tenant_exc) ex = self.assertRaises(dispatcher.ExpectedException, self.eng.show_stack, self.ctx, non_exist_identifier, resolve_outputs=True) self.assertEqual(exception.InvalidTenant, ex.exc_info[0]) - - self.m.VerifyAll() + service.EngineService._get_stack.assert_called_once_with( + self.ctx, non_exist_identifier, + show_deleted=True) @tools.stack_context('service_describe_test_stack', False) def test_stack_describe(self): - self.m.StubOutWithMock(service.EngineService, '_get_stack') s = stack_object.Stack.get_by_id(self.ctx, self.stack.id) - service.EngineService._get_stack(self.ctx, - self.stack.identifier(), - show_deleted=True).AndReturn(s) - self.m.ReplayAll() + self.patchobject(service.EngineService, '_get_stack', return_value=s) sl = self.eng.show_stack(self.ctx, self.stack.identifier(), resolve_outputs=True) @@ -829,8 +785,10 @@ self.assertIn('description', s) self.assertIn('WordPress', s['description']) self.assertIn('parameters', s) - - self.m.VerifyAll() + service.EngineService._get_stack.assert_called_once_with( + self.ctx, + self.stack.identifier(), + show_deleted=True) @tools.stack_context('service_describe_all_test_stack', False) def test_stack_describe_all(self): diff -Nru heat-11.0.0~b1/heat/tests/test_resource.py heat-11.0.0~b2/heat/tests/test_resource.py --- heat-11.0.0~b1/heat/tests/test_resource.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/test_resource.py 2018-06-07 22:12:28.000000000 +0000 @@ -570,10 +570,8 @@ {'Foo': 'abc'}) res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) res.update_allowed_properties = ('Foo',) - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create') - generic_rsrc.ResourceWithProps.handle_create().AndRaise( - exception.ResourceFailure) - self.m.ReplayAll() + m_hc = mock.Mock(side_effect=exception.ResourceFailure) + res.handle_create = m_hc self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.create)) @@ -586,8 +584,7 @@ # 
UpdateReplace flow self.assertRaises( resource.UpdateReplace, scheduler.TaskRunner(res.update, utmpl)) - - self.m.VerifyAll() + m_hc.assert_called_once_with() def test_updated_time_changes_only_when_it_changed(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', @@ -715,13 +712,15 @@ res.store() new_tmpl_id = 2 self.assertIsNotNone(res.id) - new_id = res.make_replacement(new_tmpl_id) + new_requires = {1, 2, 4} + new_id = res.make_replacement(new_tmpl_id, new_requires) new_res = resource_objects.Resource.get_obj(res.context, new_id) self.assertEqual(new_id, res.replaced_by) self.assertEqual(res.id, new_res.replaces) self.assertIsNone(new_res.physical_resource_id) self.assertEqual(new_tmpl_id, new_res.current_template_id) + self.assertEqual([4, 2, 1], new_res.requires) def test_metadata_default(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') @@ -1084,28 +1083,24 @@ tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo', {'Foo': 'abc'}) res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) - self.m.StubOutWithMock(timeutils, 'retry_backoff_delay') - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create') - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_delete') - - # first attempt to create fails - generic_rsrc.ResourceWithProps.handle_create().AndRaise( - exception.ResourceInError(resource_name='test_resource', - resource_status='ERROR', - resource_type='GenericResourceType', - resource_action='CREATE', - status_reason='just because')) - # delete error resource from first attempt - generic_rsrc.ResourceWithProps.handle_delete().AndReturn(None) - - # second attempt to create succeeds - timeutils.retry_backoff_delay(1, jitter_max=2.0).AndReturn(0.01) - generic_rsrc.ResourceWithProps.handle_create().AndReturn(None) - self.m.ReplayAll() + exc = exception.ResourceInError(resource_name='test_resource', + resource_status='ERROR', + resource_type='GenericResourceType', + resource_action='CREATE', 
+ status_reason='just because') + self.patchobject(timeutils, 'retry_backoff_delay', return_value=0.01) + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', + side_effect=[exc, None]) + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete') scheduler.TaskRunner(res.create)() self.assertEqual((res.CREATE, res.COMPLETE), res.state) - self.m.VerifyAll() + self.assertEqual( + 2, generic_rsrc.ResourceWithProps.handle_create.call_count) + self.assertEqual( + 1, generic_rsrc.ResourceWithProps.handle_delete.call_count) + timeutils.retry_backoff_delay.assert_called_once_with( + 1, jitter_max=2.0) def test_create_fail_retry_disabled(self): cfg.CONF.set_override('action_retry_limit', 0) @@ -1113,18 +1108,15 @@ {'Foo': 'abc'}) res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) - self.m.StubOutWithMock(timeutils, 'retry_backoff_delay') - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create') - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_delete') - - # attempt to create fails - generic_rsrc.ResourceWithProps.handle_create().AndRaise( - exception.ResourceInError(resource_name='test_resource', - resource_status='ERROR', - resource_type='GenericResourceType', - resource_action='CREATE', - status_reason='just because')) - self.m.ReplayAll() + exc = exception.ResourceInError(resource_name='test_resource', + resource_status='ERROR', + resource_type='GenericResourceType', + resource_action='CREATE', + status_reason='just because') + self.patchobject(timeutils, 'retry_backoff_delay') + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', + side_effect=exc) + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete') estr = ('ResourceInError: resources.test_resource: ' 'Went to status ERROR due to "just because"') @@ -1132,138 +1124,110 @@ err = self.assertRaises(exception.ResourceFailure, create) self.assertEqual(estr, six.text_type(err)) self.assertEqual((res.CREATE, res.FAILED), 
res.state) - - self.m.VerifyAll() + self.assertEqual( + 1, generic_rsrc.ResourceWithProps.handle_create.call_count) def test_create_deletes_fail_retry(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo', {'Foo': 'abc'}) res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) - self.m.StubOutWithMock(timeutils, 'retry_backoff_delay') - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create') - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_delete') - - # first attempt to create fails - generic_rsrc.ResourceWithProps.handle_create().AndRaise( - exception.ResourceInError(resource_name='test_resource', - resource_status='ERROR', - resource_type='GenericResourceType', - resource_action='CREATE', - status_reason='just because')) - # first attempt to delete fails - generic_rsrc.ResourceWithProps.handle_delete().AndRaise( - exception.ResourceInError(resource_name='test_resource', - resource_status='ERROR', - resource_type='GenericResourceType', - resource_action='DELETE', - status_reason='delete failed')) - # second attempt to delete fails - timeutils.retry_backoff_delay(1, jitter_max=2.0).AndReturn(0.01) - generic_rsrc.ResourceWithProps.handle_delete().AndRaise( - exception.ResourceInError(resource_name='test_resource', - resource_status='ERROR', - resource_type='GenericResourceType', - resource_action='DELETE', - status_reason='delete failed again')) - - # third attempt to delete succeeds - timeutils.retry_backoff_delay(2, jitter_max=2.0).AndReturn(0.01) - generic_rsrc.ResourceWithProps.handle_delete().AndReturn(None) - - # second attempt to create succeeds - timeutils.retry_backoff_delay(1, jitter_max=2.0).AndReturn(0.01) - generic_rsrc.ResourceWithProps.handle_create().AndReturn(None) - self.m.ReplayAll() + create_exc = exception.ResourceInError( + resource_name='test_resource', + resource_status='ERROR', + resource_type='GenericResourceType', + resource_action='CREATE', + status_reason='just because') + 
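The recurring conversion in these retry hunks, mox's record/replay (`StubOutWithMock` … `AndRaise`/`AndReturn` … `ReplayAll` … `VerifyAll`) collapsing into one `patchobject` call with a `side_effect` list, can be sketched standalone. This is an illustrative reduction using hypothetical names (`Widget`, `make`), not code from the Heat tree:

```python
from unittest import mock


class Widget(object):
    """Stand-in for a Heat resource with a retryable create step."""

    def make(self):
        raise NotImplementedError

    def create_with_retry(self, attempts=2):
        for i in range(attempts):
            try:
                return self.make()
            except RuntimeError:
                if i == attempts - 1:
                    raise


# mox would record make() raising, then make() returning, bracketed by
# ReplayAll()/VerifyAll().  With mock, one patch carries the whole script:
# side_effect consumes one item per call (raise first, return second).
with mock.patch.object(Widget, 'make',
                       side_effect=[RuntimeError('boom'), 'ok']) as m:
    result = Widget().create_with_retry()

assert result == 'ok'
# There is no VerifyAll(); missed or extra calls are caught by explicit counts.
assert m.call_count == 2
```

This mirrors why the converted tests above add `call_count` assertions: mock does not verify the interaction script implicitly the way mox's `VerifyAll()` did.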
delete_exc = exception.ResourceInError( + resource_name='test_resource', + resource_status='ERROR', + resource_type='GenericResourceType', + resource_action='DELETE', + status_reason='delete failed') + self.patchobject(timeutils, 'retry_backoff_delay', return_value=0.01) + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_create', + side_effect=[create_exc, None]) + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete', + side_effect=[delete_exc, delete_exc, None]) scheduler.TaskRunner(res.create)() + self.assertEqual((res.CREATE, res.COMPLETE), res.state) - self.m.VerifyAll() + self.assertEqual(3, timeutils.retry_backoff_delay.call_count) + self.assertEqual( + 2, generic_rsrc.ResourceWithProps.handle_create.call_count) + self.assertEqual( + 3, generic_rsrc.ResourceWithProps.handle_delete.call_count) def test_creates_fail_retry(self): + create_exc = exception.ResourceInError( + resource_name='test_resource', + resource_status='ERROR', + resource_type='GenericResourceType', + resource_action='CREATE', + status_reason='just because') tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo', {'Foo': 'abc'}) res = generic_rsrc.ResourceWithProps('test_resource', tmpl, self.stack) - self.m.StubOutWithMock(timeutils, 'retry_backoff_delay') - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_create') - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_delete') - - # first attempt to create fails - generic_rsrc.ResourceWithProps.handle_create().AndRaise( - exception.ResourceInError(resource_name='test_resource', - resource_status='ERROR', - resource_type='GenericResourceType', - resource_action='CREATE', - status_reason='just because')) - # delete error resource from first attempt - generic_rsrc.ResourceWithProps.handle_delete().AndReturn(None) - - # second attempt to create fails - timeutils.retry_backoff_delay(1, jitter_max=2.0).AndReturn(0.01) - generic_rsrc.ResourceWithProps.handle_create().AndRaise( - 
exception.ResourceInError(resource_name='test_resource', - resource_status='ERROR', - resource_type='GenericResourceType', - resource_action='CREATE', - status_reason='just because')) - # delete error resource from second attempt - generic_rsrc.ResourceWithProps.handle_delete().AndReturn(None) - - # third attempt to create succeeds - timeutils.retry_backoff_delay(2, jitter_max=2.0).AndReturn(0.01) - generic_rsrc.ResourceWithProps.handle_create().AndReturn(None) - self.m.ReplayAll() + self.patchobject(timeutils, 'retry_backoff_delay', return_value=0.01) + self.patchobject(generic_rsrc.ResourceWithProps, + 'handle_create', side_effect=[create_exc, create_exc, + None]) + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_delete', + return_value=None) scheduler.TaskRunner(res.create)() self.assertEqual((res.CREATE, res.COMPLETE), res.state) - self.m.VerifyAll() + self.assertEqual( + 3, generic_rsrc.ResourceWithProps.handle_create.call_count) + self.assertEqual( + 2, generic_rsrc.ResourceWithProps.handle_delete.call_count) def test_create_cancel(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.CancellableResource('test_resource', tmpl, self.stack) - self.m.StubOutWithMock(res, 'handle_create') - self.m.StubOutWithMock(res, 'check_create_complete') - self.m.StubOutWithMock(res, 'handle_create_cancel') - cookie = object() - res.handle_create().AndReturn(cookie) - res.check_create_complete(cookie).AndReturn(False) - res.handle_create_cancel(cookie).AndReturn(None) - - self.m.ReplayAll() + m_hc = mock.Mock(return_value=cookie) + res.handle_create = m_hc + m_ccc = mock.Mock(return_value=False) + res.check_create_complete = m_ccc + m_hcc = mock.Mock() + res.handle_create_cancel = m_hcc runner = scheduler.TaskRunner(res.create) runner.start() runner.step() runner.cancel() - - self.m.VerifyAll() + m_hc.assert_called_once_with() + m_ccc.assert_called_once_with(cookie) + m_hcc.assert_called_once_with(cookie) def 
test_create_cancel_exception(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.CancellableResource('test_resource', tmpl, self.stack) - - self.m.StubOutWithMock(res, 'handle_create') - self.m.StubOutWithMock(res, 'check_create_complete') - self.m.StubOutWithMock(res, 'handle_create_cancel') - cookie = object() - res.handle_create().AndReturn(cookie) - res.check_create_complete(cookie).AndReturn(False) - res.handle_create_cancel(cookie).AndRaise(Exception) - self.m.ReplayAll() + class FakeExc(Exception): + pass + + m_hc = mock.Mock(return_value=cookie) + res.handle_create = m_hc + m_ccc = mock.Mock(return_value=False) + res.check_create_complete = m_ccc + m_hcc = mock.Mock(side_effect=FakeExc) + res.handle_create_cancel = m_hcc runner = scheduler.TaskRunner(res.create) runner.start() runner.step() runner.cancel() - - self.m.VerifyAll() + m_hc.assert_called_once_with() + m_ccc.assert_called_once_with(cookie) + m_hcc.assert_called_once_with(cookie) def test_preview(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', @@ -1284,17 +1248,14 @@ 'GenericResourceType', {'Foo': 'xyz'}) prop_diff = {'Foo': 'xyz'} - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_update') - generic_rsrc.ResourceWithProps.handle_update( - utmpl, mock.ANY, prop_diff).AndReturn(None) - self.m.ReplayAll() + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_update') scheduler.TaskRunner(res.update, utmpl)() self.assertEqual((res.UPDATE, res.COMPLETE), res.state) self.assertEqual({'Foo': 'xyz'}, res._stored_properties_data) - - self.m.VerifyAll() + generic_rsrc.ResourceWithProps.handle_update.assert_called_once_with( + utmpl, mock.ANY, prop_diff) def test_update_replace_with_resource_name(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', @@ -1308,18 +1269,17 @@ utmpl = rsrc_defn.ResourceDefinition('test_resource', 'GenericResourceType', {'Foo': 'xyz'}) - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_update') 
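The cancel-path hunks above swap mox stubs for plain `mock.Mock` objects assigned directly onto the instance, with an `object()` sentinel threaded through as the cookie. A minimal sketch of that pattern, using a hypothetical `Task` class rather than Heat's resource machinery:

```python
from unittest import mock


class Task(object):
    def start(self):
        # handle() returns an opaque cookie that later calls must receive.
        cookie = self.handle()
        if not self.check_complete(cookie):
            self.cancel(cookie)


task = Task()
cookie = object()
# Assigning Mocks on the instance needs no teardown: the instance is discarded.
task.handle = mock.Mock(return_value=cookie)
task.check_complete = mock.Mock(return_value=False)
task.cancel = mock.Mock()

task.start()
task.handle.assert_called_once_with()
# The same sentinel object must flow through both downstream calls.
task.check_complete.assert_called_once_with(cookie)
task.cancel.assert_called_once_with(cookie)
```

Because `object()` sentinels compare only by identity, these assertions prove the cookie was passed through unchanged, which is exactly what the converted `test_create_cancel` checks.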
+ self.patchobject(generic_rsrc.ResourceWithProps, 'handle_update', + side_effect=resource.UpdateReplace(res.name)) prop_diff = {'Foo': 'xyz'} - generic_rsrc.ResourceWithProps.handle_update( - utmpl, mock.ANY, prop_diff).AndRaise(resource.UpdateReplace( - res.name)) - self.m.ReplayAll() + # should be re-raised so parser.Stack can handle replacement updater = scheduler.TaskRunner(res.update, utmpl) ex = self.assertRaises(resource.UpdateReplace, updater) self.assertEqual('The Resource test_resource requires replacement.', six.text_type(ex)) - self.m.VerifyAll() + generic_rsrc.ResourceWithProps.handle_update.assert_called_once_with( + utmpl, mock.ANY, prop_diff) def test_update_replace_without_resource_name(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', @@ -1333,17 +1293,16 @@ utmpl = rsrc_defn.ResourceDefinition('test_resource', 'GenericResourceType', {'Foo': 'xyz'}) - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_update') + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_update', + side_effect=resource.UpdateReplace) prop_diff = {'Foo': 'xyz'} - generic_rsrc.ResourceWithProps.handle_update( - utmpl, mock.ANY, prop_diff).AndRaise(resource.UpdateReplace()) - self.m.ReplayAll() # should be re-raised so parser.Stack can handle replacement updater = scheduler.TaskRunner(res.update, utmpl) ex = self.assertRaises(resource.UpdateReplace, updater) self.assertEqual('The Resource Unknown requires replacement.', six.text_type(ex)) - self.m.VerifyAll() + generic_rsrc.ResourceWithProps.handle_update.assert_called_once_with( + utmpl, mock.ANY, prop_diff) def test_need_update_in_init_complete_state_for_resource(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', @@ -1429,45 +1388,43 @@ utmpl = rsrc_defn.ResourceDefinition('test_resource', 'GenericResourceType', {'Foo': 'xyz'}) - tmpl_diff = {'Properties': {'Foo': 'xyz'}} prop_diff = {'Foo': 'xyz'} - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_update') - 
generic_rsrc.ResourceWithProps.handle_update( - utmpl, tmpl_diff, prop_diff).AndRaise(NotImplementedError) - self.m.ReplayAll() + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_update', + side_effect=NotImplementedError) updater = scheduler.TaskRunner(res.update, utmpl) self.assertRaises(exception.ResourceFailure, updater) self.assertEqual((res.UPDATE, res.FAILED), res.state) - self.m.VerifyAll() + generic_rsrc.ResourceWithProps.handle_update.assert_called_once_with( + utmpl, mock.ANY, prop_diff) def test_update_cancel(self): tmpl = rsrc_defn.ResourceDefinition('test_resource', 'Foo') res = generic_rsrc.CancellableResource('test_resource', tmpl, self.stack) - self.m.StubOutWithMock(res, '_needs_update') - self.m.StubOutWithMock(res, 'handle_update') - self.m.StubOutWithMock(res, 'check_update_complete') - self.m.StubOutWithMock(res, 'handle_update_cancel') - - res._needs_update(mock.ANY, mock.ANY, - mock.ANY, mock.ANY, - None).AndReturn(True) cookie = object() - res.handle_update(mock.ANY, mock.ANY, mock.ANY).AndReturn(cookie) - res.check_update_complete(cookie).AndReturn(False) - res.handle_update_cancel(cookie).AndReturn(None) - self.m.ReplayAll() + m_nu = mock.Mock(return_value=True) + res._needs_update = m_nu + m_hu = mock.Mock(return_value=cookie) + res.handle_update = m_hu + m_cuc = mock.Mock(return_value=False) + res.check_update_complete = m_cuc + m_huc = mock.Mock() + res.handle_update_cancel = m_huc scheduler.TaskRunner(res.create)() - runner = scheduler.TaskRunner(res.update, tmpl) runner.start() runner.step() runner.cancel() - - self.m.VerifyAll() + m_nu.assert_called_once_with( + mock.ANY, mock.ANY, + mock.ANY, mock.ANY, + None) + m_hu.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY) + m_cuc.assert_called_once_with(cookie) + m_huc.assert_called_once_with(cookie) def _mock_check_res(self, mock_check=True): tmpl = rsrc_defn.ResourceDefinition('test_res', 'GenericResourceType') @@ -1590,11 +1547,10 @@ scheduler.TaskRunner(res.create)() 
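Several converted assertions in these hunks use `mock.ANY` where mox used `IgnoreArg()`. `mock.ANY` compares equal to any value, so it wildcards a single argument slot while the rest of the call is still checked exactly. A small illustration with made-up arguments:

```python
from unittest import mock

handler = mock.Mock()
handler('template-A', {'diff': 1}, {'Foo': 'xyz'})

# mock.ANY matches the middle argument regardless of its value,
# while the first and last arguments are still compared exactly.
handler.assert_called_once_with('template-A', mock.ANY, {'Foo': 'xyz'})
```

This is why `handle_update.assert_called_once_with(utmpl, mock.ANY, prop_diff)` can ignore the template-diff argument without loosening the checks on the template and property diff.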
self.assertEqual((res.CREATE, res.COMPLETE), res.state) - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, - 'handle_suspend') - generic_rsrc.ResourceWithProps.handle_suspend().AndRaise(Exception()) - self.m.ReplayAll() - + class FakeExc(Exception): + pass + self.patchobject(generic_rsrc.ResourceWithProps, + 'handle_suspend', side_effect=FakeExc) suspend = scheduler.TaskRunner(res.suspend) self.assertRaises(exception.ResourceFailure, suspend) self.assertEqual((res.SUSPEND, res.FAILED), res.state) @@ -1607,15 +1563,18 @@ scheduler.TaskRunner(res.create)() self.assertEqual((res.CREATE, res.COMPLETE), res.state) - self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, 'handle_resume') - generic_rsrc.ResourceWithProps.handle_resume().AndRaise(Exception()) - self.m.ReplayAll() + class FakeExc(Exception): + pass + self.patchobject(generic_rsrc.ResourceWithProps, 'handle_resume', + side_effect=FakeExc) res.state_set(res.SUSPEND, res.COMPLETE) resume = scheduler.TaskRunner(res.resume) self.assertRaises(exception.ResourceFailure, resume) self.assertEqual((res.RESUME, res.FAILED), res.state) + self.assertEqual( + 1, generic_rsrc.ResourceWithProps.handle_resume.call_count) def test_resource_class_to_cfn_template(self): @@ -2089,13 +2048,10 @@ res.action = res.CREATE res.store() self._assert_resource_lock(res.id, None, None) - res_data = {(1, True): {u'id': 1, u'name': 'A', 'attrs': {}}, - (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}} - res_data = node_data.load_resources_data(res_data) pcb = mock.Mock() with mock.patch.object(resource.Resource, 'create') as mock_create: - res.create_convergence(self.stack.t.id, res_data, 'engine-007', + res.create_convergence(self.stack.t.id, {1, 3}, 'engine-007', -1, pcb) self.assertTrue(mock_create.called) @@ -2107,13 +2063,10 @@ res = generic_rsrc.GenericResource('test_res', tmpl, self.stack) res.action = res.CREATE res.store() - res_data = {(1, True): {u'id': 1, u'name': 'A', 'attrs': {}}, - (2, True): {u'id': 3, u'name': 'B', 
'attrs': {}}} - res_data = node_data.load_resources_data(res_data) pcb = mock.Mock() self.assertRaises(scheduler.Timeout, res.create_convergence, - self.stack.t.id, res_data, 'engine-007', -1, pcb) + self.stack.t.id, {1, 3}, 'engine-007', -1, pcb) def test_create_convergence_sets_requires_for_failure(self): """Ensure that requires are computed correctly. @@ -2127,11 +2080,8 @@ dummy_ex = exception.ResourceNotAvailable(resource_name=res.name) res.create = mock.Mock(side_effect=dummy_ex) self._assert_resource_lock(res.id, None, None) - res_data = {(1, True): {u'id': 5, u'name': 'A', 'attrs': {}}, - (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}} - res_data = node_data.load_resources_data(res_data) self.assertRaises(exception.ResourceNotAvailable, - res.create_convergence, self.stack.t.id, res_data, + res.create_convergence, self.stack.t.id, {5, 3}, 'engine-007', self.dummy_timeout, self.dummy_event) self.assertItemsEqual([5, 3], res.requires) # The locking happens in create which we mocked out @@ -2146,11 +2096,8 @@ self.stack.adopt_stack_data = {'resources': {'test_res': { 'resource_id': 'fluffy'}}} self._assert_resource_lock(res.id, None, None) - res_data = {(1, True): {u'id': 5, u'name': 'A', 'attrs': {}}, - (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}} - res_data = node_data.load_resources_data(res_data) tr = scheduler.TaskRunner(res.create_convergence, self.stack.t.id, - res_data, 'engine-007', self.dummy_timeout, + {5, 3}, 'engine-007', self.dummy_timeout, self.dummy_event) tr() mock_adopt.assert_called_once_with( @@ -2165,11 +2112,8 @@ res.store() self.stack.adopt_stack_data = {'resources': {}} self._assert_resource_lock(res.id, None, None) - res_data = {(1, True): {u'id': 5, u'name': 'A', 'attrs': {}}, - (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}} - res_data = node_data.load_resources_data(res_data) tr = scheduler.TaskRunner(res.create_convergence, self.stack.t.id, - res_data, 'engine-007', self.dummy_timeout, + {5, 3}, 'engine-007', 
self.dummy_timeout, self.dummy_event) exc = self.assertRaises(exception.ResourceFailure, tr) self.assertIn('Resource ID was not provided', six.text_type(exc)) @@ -2187,7 +2131,7 @@ stack.thread_group_mgr = tools.DummyThreadGroupManager() stack.converge_stack(stack.t, action=stack.CREATE) res = stack.resources['test_res'] - res.requires = [2] + res.requires = {2} res.action = res.CREATE res.store() self._assert_resource_lock(res.id, None, None) @@ -2203,11 +2147,8 @@ new_temp, stack_id=self.stack.id) res.stack.convergence = True - res_data = {(1, True): {u'id': 4, u'name': 'A', 'attrs': {}}, - (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}} - res_data = node_data.load_resources_data(res_data) tr = scheduler.TaskRunner(res.update_convergence, new_temp.id, - res_data, 'engine-007', 120, new_stack) + {4, 3}, 'engine-007', 120, new_stack) tr() self.assertItemsEqual([3, 4], res.requires) @@ -2238,9 +2179,8 @@ new_stack = parser.Stack(utils.dummy_context(), 'test_stack', new_temp, stack_id=self.stack.id) - res_data = {} tr = scheduler.TaskRunner(res.update_convergence, new_temp.id, - res_data, 'engine-007', -1, new_stack, + set(), 'engine-007', -1, new_stack, self.dummy_event) self.assertRaises(scheduler.Timeout, tr) @@ -2260,9 +2200,8 @@ new_stack = parser.Stack(utils.dummy_context(), 'test_stack', new_temp, stack_id=self.stack.id) - res_data = {} self.assertRaises(resource.UpdateReplace, res.update_convergence, - new_temp.id, res_data, 'engine-007', + new_temp.id, set(), 'engine-007', -1, new_stack) def test_update_convergence_checks_resource_class(self): @@ -2282,9 +2221,8 @@ new_stack = parser.Stack(ctx, 'test_stack', new_temp, stack_id=self.stack.id) - res_data = {} tr = scheduler.TaskRunner(res.update_convergence, new_temp.id, - res_data, 'engine-007', -1, new_stack, + set(), 'engine-007', -1, new_stack, self.dummy_event) self.assertRaises(resource.UpdateReplace, tr) @@ -2293,7 +2231,7 @@ def test_update_in_progress_convergence(self, mock_cfcr, mock_nu): tmpl = 
rsrc_defn.ResourceDefinition('test_res', 'Foo') res = generic_rsrc.GenericResource('test_res', tmpl, self.stack) - res.requires = [1, 2] + res.requires = {1, 2} res.store() rs = resource_objects.Resource.get_obj(self.stack.context, res.id) rs.update_and_save({'engine_id': 'not-this'}) @@ -2301,9 +2239,6 @@ res.stack.convergence = True - res_data = {(1, True): {u'id': 4, u'name': 'A', 'attrs': {}}, - (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}} - res_data = node_data.load_resources_data(res_data) tmpl = template.Template({ 'HeatTemplateFormatVersion': '2012-12-12', 'Resources': { @@ -2312,7 +2247,7 @@ new_stack = parser.Stack(utils.dummy_context(), 'test_stack', tmpl, stack_id=self.stack.id) tr = scheduler.TaskRunner(res.update_convergence, 'template_key', - res_data, 'engine-007', self.dummy_timeout, + {4, 3}, 'engine-007', self.dummy_timeout, new_stack) ex = self.assertRaises(exception.UpdateInProgress, tr) msg = ("The resource %s is already being updated." % @@ -2320,7 +2255,7 @@ self.assertEqual(msg, six.text_type(ex)) # ensure requirements are not updated for failed resource rs = resource_objects.Resource.get_obj(self.stack.context, res.id) - self.assertEqual([1, 2], rs.requires) + self.assertEqual([2, 1], rs.requires) @mock.patch.object(resource.Resource, 'update_template_diff_properties') @mock.patch.object(resource.Resource, '_needs_update') @@ -2337,7 +2272,7 @@ stack.thread_group_mgr = tools.DummyThreadGroupManager() stack.converge_stack(stack.t, action=stack.CREATE) res = stack.resources['test_res'] - res.requires = [2] + res.requires = {2} res.store() self._assert_resource_lock(res.id, None, None) @@ -2349,9 +2284,6 @@ }}, env=self.env) new_temp.store(stack.context) - res_data = {(1, True): {u'id': 4, u'name': 'A', 'attrs': {}}, - (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}} - res_data = node_data.load_resources_data(res_data) new_stack = parser.Stack(utils.dummy_context(), 'test_stack', new_temp, stack_id=self.stack.id) @@ -2359,16 +2291,16 @@ 
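The `requires` changes in these hunks (`[1, 2]` becoming `{1, 2}`) mean any list later derived from the set has unspecified order, which is why exact-order assertions give way to `assertItemsEqual` or to expected lists like `[2, 1]` that match the observed storage order. A sketch of the consequence, with hypothetical values:

```python
# requires is now tracked as a set of resource IDs; persisting it
# flattens to a list whose order follows set iteration, not insertion.
requires = {2}
requires |= {4, 3}          # merging new requirements is a set union
stored = list(requires)     # the order here is an implementation detail

# An order-insensitive comparison is the safe assertion:
assert sorted(stored) == [2, 3, 4]
# An exact-order check such as `stored == [2, 3, 4]` may pass or fail
# depending on the interpreter's set iteration order.
```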
res._calling_engine_id = 'engine-9' tr = scheduler.TaskRunner(res.update_convergence, new_temp.id, - res_data, 'engine-007', 120, new_stack, + {4, 3}, 'engine-007', 120, new_stack, self.dummy_event) self.assertRaises(exception.ResourceFailure, tr) self.assertEqual(new_temp.id, res.current_template_id) # check if requires was updated - self.assertItemsEqual([3, 4], res.requires) + self.assertItemsEqual([2, 3, 4], res.requires) self.assertEqual(res.action, resource.Resource.UPDATE) self.assertEqual(res.status, resource.Resource.FAILED) - self._assert_resource_lock(res.id, None, 3) + self._assert_resource_lock(res.id, None, 2) def test_update_resource_convergence_update_replace(self): tmpl = template.Template({ @@ -2381,7 +2313,7 @@ stack.thread_group_mgr = tools.DummyThreadGroupManager() stack.converge_stack(stack.t, action=stack.CREATE) res = stack.resources['test_res'] - res.requires = [2] + res.requires = {2} res.store() self._assert_resource_lock(res.id, None, None) @@ -2395,13 +2327,10 @@ res.stack.convergence = True - res_data = {(1, True): {u'id': 4, u'name': 'A', 'attrs': {}}, - (2, True): {u'id': 3, u'name': 'B', 'attrs': {}}} - res_data = node_data.load_resources_data(res_data) new_stack = parser.Stack(utils.dummy_context(), 'test_stack', new_temp, stack_id=self.stack.id) tr = scheduler.TaskRunner(res.update_convergence, new_temp.id, - res_data, 'engine-007', 120, new_stack, + {4, 3}, 'engine-007', 120, new_stack, self.dummy_event) self.assertRaises(resource.UpdateReplace, tr) @@ -2429,7 +2358,7 @@ self.stack.state_set(self.stack.ROLLBACK, self.stack.IN_PROGRESS, 'Simulate rollback') res.restore_prev_rsrc = mock.Mock() - tr = scheduler.TaskRunner(res.update_convergence, 'new_tmpl_id', {}, + tr = scheduler.TaskRunner(res.update_convergence, 'new_tmpl_id', set(), 'engine-007', self.dummy_timeout, new_stack, self.dummy_event) self.assertRaises(resource.UpdateReplace, tr) @@ -2453,7 +2382,7 @@ self.stack.state_set(self.stack.ROLLBACK, self.stack.IN_PROGRESS, 
'Simulate rollback') res.restore_prev_rsrc = mock.Mock(side_effect=Exception) - tr = scheduler.TaskRunner(res.update_convergence, 'new_tmpl_id', {}, + tr = scheduler.TaskRunner(res.update_convergence, 'new_tmpl_id', set(), 'engine-007', self.dummy_timeout, new_stack, self.dummy_event) self.assertRaises(exception.ResourceFailure, tr) @@ -2472,7 +2401,7 @@ self._assert_resource_lock(res.id, None, None) pcb = mock.Mock() with mock.patch.object(resource.Resource, 'delete') as mock_delete: - tr = scheduler.TaskRunner(res.delete_convergence, 2, {}, + tr = scheduler.TaskRunner(res.delete_convergence, 2, 'engine-007', 20, pcb) tr() self.assertTrue(mock_delete.called) @@ -2485,7 +2414,7 @@ res.current_template_id = 'same-template' res.store() res.delete = mock.Mock() - tr = scheduler.TaskRunner(res.delete_convergence, 'same-template', {}, + tr = scheduler.TaskRunner(res.delete_convergence, 'same-template', 'engine-007', self.dummy_timeout, self.dummy_event) tr() @@ -2502,7 +2431,7 @@ res.handle_delete = mock.Mock(side_effect=ValueError('test')) self._assert_resource_lock(res.id, None, None) res.stack.convergence = True - tr = scheduler.TaskRunner(res.delete_convergence, 2, {}, 'engine-007', + tr = scheduler.TaskRunner(res.delete_convergence, 2, 'engine-007', self.dummy_timeout, self.dummy_event) self.assertRaises(exception.ResourceFailure, tr) self.assertTrue(res.handle_delete.called) @@ -2540,12 +2469,11 @@ res.action = res.CREATE res.store() res.destroy = mock.Mock() - input_data = {(1, False): 4, (2, False): 5} # needed_by resource ids self._assert_resource_lock(res.id, None, None) - scheduler.TaskRunner(res.delete_convergence, 1, input_data, + scheduler.TaskRunner(res.delete_convergence, 1, 'engine-007', self.dummy_timeout, self.dummy_event)() - self.assertItemsEqual([4, 5], res.needed_by) + self.assertItemsEqual([], res.needed_by) @mock.patch.object(resource_objects.Resource, 'get_obj') def test_update_replacement_data(self, mock_get_obj): @@ -2691,7 +2619,7 @@ 
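The delete-retry hunks below replace a mox `for` loop of per-iteration expectations with a `side_effect` list built by multiplication: N failures followed by a terminal success. A reduced sketch with hypothetical names (`RuntimeError` standing in for `PhysicalResourceExists`):

```python
from unittest import mock

num_retries = 2
# N failures, then success on the final attempt.
effects = [RuntimeError('still exists')] * num_retries + [True]
check = mock.Mock(side_effect=effects)

results = []
for _ in range(num_retries + 1):
    try:
        # Exception instances in side_effect are raised; other items returned.
        results.append(check(None))
    except RuntimeError:
        results.append(False)

assert results == [False, False, True]
assert check.call_count == num_retries + 1
```

Building the list up front makes the retry count a single variable, which is what lets the converted tests assert `self.num_retries + 1` calls instead of re-recording each expectation.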
res.store() with mock.patch.object(resource_objects.Resource, 'delete') as resource_del: - tr = scheduler.TaskRunner(res.delete_convergence, 1, {}, + tr = scheduler.TaskRunner(res.delete_convergence, 1, 'engine-007', 1, self.dummy_event) tr() resource_del.assert_called_once_with(res.context, res.id) @@ -2702,7 +2630,7 @@ res.action = res.CREATE res.store() timeout = -1 # to emulate timeout - tr = scheduler.TaskRunner(res.delete_convergence, 1, {}, 'engine-007', + tr = scheduler.TaskRunner(res.delete_convergence, 1, 'engine-007', timeout, self.dummy_event) self.assertRaises(scheduler.Timeout, tr) @@ -2762,25 +2690,23 @@ res.state_set(res.CREATE, res.COMPLETE, 'wobble') res.default_client_name = 'neutron' - self.m.StubOutWithMock(timeutils, 'retry_backoff_delay') - self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_delete') + self.patchobject(timeutils, 'retry_backoff_delay', return_value=0.01) # could be any exception that is_conflict(), using the neutron # client one - generic_rsrc.GenericResource.handle_delete().AndRaise( - neutron_exp.Conflict(message='foo', request_ids=[1])) + h_d_side_effects = [ + neutron_exp.Conflict(message='foo', request_ids=[1])] * ( + self.num_retries + 1) + self.patchobject( + generic_rsrc.GenericResource, 'handle_delete', + side_effect=h_d_side_effects) - for i in range(self.num_retries): - timeutils.retry_backoff_delay(i+1, jitter_max=2.0).AndReturn( - 0.01) - generic_rsrc.GenericResource.handle_delete().AndRaise( - neutron_exp.Conflict(message='foo', request_ids=[1])) - - self.m.ReplayAll() exc = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.delete)) exc_text = six.text_type(exc) self.assertIn('Conflict', exc_text) - self.m.VerifyAll() + self.assertEqual( + self.num_retries + 1, + generic_rsrc.GenericResource.handle_delete.call_count) def test_delete_retry_phys_resource_exists(self): tmpl = rsrc_defn.ResourceDefinition( @@ -2791,31 +2717,19 @@ cfg.CONF.set_override('action_retry_limit', 
self.num_retries) - self.m.StubOutWithMock(timeutils, 'retry_backoff_delay') - self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_delete') - self.m.StubOutWithMock(generic_rsrc.ResourceWithPropsRefPropOnDelete, - 'check_delete_complete') - - generic_rsrc.GenericResource.handle_delete().AndReturn(None) - generic_rsrc.ResourceWithPropsRefPropOnDelete.check_delete_complete( - None).AndRaise( - exception.PhysicalResourceExists(name="foo")) - - for i in range(self.num_retries): - timeutils.retry_backoff_delay(i+1, jitter_max=2.0).AndReturn( - 0.01) - generic_rsrc.GenericResource.handle_delete().AndReturn(None) - if i < self.num_retries-1: - generic_rsrc.ResourceWithPropsRefPropOnDelete.\ - check_delete_complete(None).AndRaise( - exception.PhysicalResourceExists(name="foo")) - else: - generic_rsrc.ResourceWithPropsRefPropOnDelete.\ - check_delete_complete(None).AndReturn(True) + cdc_side_effects = [exception.PhysicalResourceExists(name="foo") + ] * self.num_retries + cdc_side_effects.append(True) + self.patchobject(timeutils, 'retry_backoff_delay', return_value=0.01) + self.patchobject(generic_rsrc.GenericResource, 'handle_delete') + self.patchobject(generic_rsrc.ResourceWithPropsRefPropOnDelete, + 'check_delete_complete', + side_effect=cdc_side_effects) - self.m.ReplayAll() scheduler.TaskRunner(res.delete)() - self.m.VerifyAll() + self.assertEqual( + self.num_retries + 1, + generic_rsrc.GenericResource.handle_delete.call_count) class ResourceAdoptTest(common.HeatTestCase): @@ -4331,11 +4245,10 @@ self.stack = parser.Stack(utils.dummy_context(), 'test_stack', template.Template(self.tmpl, env=self.env), stack_id=str(uuid.uuid4())) - res_data = {} res = self.stack['bar'] pcb = mock.Mock() self.patchobject(res, 'lock') - res.create_convergence(self.stack.t.id, res_data, 'engine-007', + res.create_convergence(self.stack.t.id, set(), 'engine-007', self.dummy_timeout, pcb) return res @@ -4366,7 +4279,7 @@ ev.assert_called_with(res.UPDATE, res.FAILED, 'update is 
restricted for resource.') - def test_replace_rstricted(self): + def test_replace_restricted(self): self.env_snippet = {u'resource_registry': { u'resources': { 'bar': {'restricted_actions': 'replace'} @@ -4378,6 +4291,7 @@ res = self.create_resource() ev = self.patchobject(res, '_add_event') props = self.tmpl['resources']['bar']['properties'] + props['value'] = '4567' props['update_replace'] = True snippet = rsrc_defn.ResourceDefinition('bar', 'TestResourceType', @@ -4392,7 +4306,7 @@ ev.assert_called_with(res.UPDATE, res.FAILED, 'replace is restricted for resource.') - def test_update_with_replace_rstricted(self): + def test_update_with_replace_restricted(self): self.env_snippet = {u'resource_registry': { u'resources': { 'bar': {'restricted_actions': 'replace'} @@ -4413,7 +4327,7 @@ ev.assert_called_with(res.UPDATE, res.COMPLETE, 'state changed') - def test_replace_with_update_rstricted(self): + def test_replace_with_update_restricted(self): self.env_snippet = {u'resource_registry': { u'resources': { 'bar': {'restricted_actions': 'update'} @@ -4426,6 +4340,7 @@ ev = self.patchobject(res, '_add_event') prep_replace = self.patchobject(res, 'prepare_for_replace') props = self.tmpl['resources']['bar']['properties'] + props['value'] = '4567' props['update_replace'] = True snippet = rsrc_defn.ResourceDefinition('bar', 'TestResourceType', @@ -4455,7 +4370,7 @@ error = self.assertRaises(exception.ResourceFailure, scheduler.TaskRunner(res.update_convergence, self.stack.t.id, - {}, + set(), 'engine-007', self.dummy_timeout, self.new_stack, @@ -4487,7 +4402,7 @@ error = self.assertRaises(resource.UpdateReplace, scheduler.TaskRunner(res.update_convergence, self.stack.t.id, - {}, + set(), 'engine-007', self.dummy_timeout, self.new_stack, diff -Nru heat-11.0.0~b1/heat/tests/test_stack.py heat-11.0.0~b2/heat/tests/test_stack.py --- heat-11.0.0~b1/heat/tests/test_stack.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/test_stack.py 2018-06-07 22:12:28.000000000 
+0000 @@ -21,7 +21,6 @@ import eventlet import fixtures import mock -import mox from oslo_config import cfg import six @@ -456,36 +455,33 @@ stk = stack_object.Stack.get_by_id(self.ctx, self.stack.id) t = template.Template.load(self.ctx, stk.raw_template_id) - self.m.StubOutWithMock(template.Template, 'load') - template.Template.load( - self.ctx, stk.raw_template_id, stk.raw_template - ).AndReturn(t) - - self.m.StubOutWithMock(stack.Stack, '__init__') - stack.Stack.__init__(self.ctx, stk.name, t, stack_id=stk.id, - action=stk.action, status=stk.status, - status_reason=stk.status_reason, - timeout_mins=stk.timeout, - disable_rollback=stk.disable_rollback, - parent_resource='parent', owner_id=None, - stack_user_project_id=None, - created_time=mox.IgnoreArg(), - updated_time=None, - user_creds_id=stk.user_creds_id, - tenant_id='test_tenant_id', - use_stored_context=False, - username=mox.IgnoreArg(), - convergence=False, - current_traversal=self.stack.current_traversal, - prev_raw_template_id=None, - current_deps=None, cache_data=None, - nested_depth=0, - deleted_time=None) + self.patchobject(template.Template, 'load', return_value=t) - self.m.ReplayAll() - stack.Stack.load(self.ctx, stack_id=self.stack.id) + self.patchobject(stack.Stack, '__init__', return_value=None) - self.m.VerifyAll() + stack.Stack.load(self.ctx, stack_id=self.stack.id) + stack.Stack.__init__.assert_called_once_with( + self.ctx, stk.name, t, stack_id=stk.id, + action=stk.action, status=stk.status, + status_reason=stk.status_reason, + timeout_mins=stk.timeout, + disable_rollback=stk.disable_rollback, + parent_resource='parent', owner_id=None, + stack_user_project_id=None, + created_time=mock.ANY, + updated_time=None, + user_creds_id=stk.user_creds_id, + tenant_id='test_tenant_id', + use_stored_context=False, + username=mock.ANY, + convergence=False, + current_traversal=self.stack.current_traversal, + prev_raw_template_id=None, + current_deps=None, cache_data=None, + nested_depth=0, + 
deleted_time=None) + template.Template.load.assert_called_once_with( + self.ctx, stk.raw_template_id, stk.raw_template) def test_identifier(self): self.stack = stack.Stack(self.ctx, 'identifier_test', self.tmpl) @@ -539,7 +535,6 @@ self.stack.parameters['AWS::StackId']) self.assertEqual(self.stack.parameters['AWS::StackId'], identifier.arn()) - self.m.VerifyAll() def test_set_param_id_update(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', @@ -758,7 +753,6 @@ self.stack.state) def test_suspend_resume(self): - self.m.ReplayAll() tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} self.stack = stack.Stack(self.ctx, 'suspend_test', @@ -782,8 +776,6 @@ self.stack.state) self.assertNotEqual(stack_suspend_time, self.stack.updated_time) - self.m.VerifyAll() - def test_suspend_stack_suspended_ok(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} @@ -799,13 +791,12 @@ self.stack.state) # unexpected to call Resource.suspend - self.m.StubOutWithMock(generic_rsrc.GenericResource, 'suspend') - self.m.ReplayAll() + self.patchobject(generic_rsrc.GenericResource, 'suspend') self.stack.suspend() self.assertEqual((self.stack.SUSPEND, self.stack.COMPLETE), self.stack.state) - self.m.VerifyAll() + generic_rsrc.GenericResource.suspend.assert_not_called() def test_resume_stack_resumeed_ok(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', @@ -826,21 +817,19 @@ self.stack.state) # unexpected to call Resource.resume - self.m.StubOutWithMock(generic_rsrc.GenericResource, 'resume') - self.m.ReplayAll() + self.patchobject(generic_rsrc.GenericResource, 'resume') self.stack.resume() self.assertEqual((self.stack.RESUME, self.stack.COMPLETE), self.stack.state) - self.m.VerifyAll() + generic_rsrc.GenericResource.resume.assert_not_called() def test_suspend_fail(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 
'GenericResourceType'}}} - self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_suspend') exc = Exception('foo') - generic_rsrc.GenericResource.handle_suspend().AndRaise(exc) - self.m.ReplayAll() + self.patchobject(generic_rsrc.GenericResource, 'handle_suspend', + side_effect=exc) self.stack = stack.Stack(self.ctx, 'suspend_test_fail', template.Template(tmpl)) @@ -857,14 +846,13 @@ self.assertEqual('Resource SUSPEND failed: Exception: ' 'resources.AResource: foo', self.stack.status_reason) - self.m.VerifyAll() + generic_rsrc.GenericResource.handle_suspend.assert_called_once_with() def test_resume_fail(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} - self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_resume') - generic_rsrc.GenericResource.handle_resume().AndRaise(Exception('foo')) - self.m.ReplayAll() + self.patchobject(generic_rsrc.GenericResource, 'handle_resume', + side_effect=Exception('foo')) self.stack = stack.Stack(self.ctx, 'resume_test_fail', template.Template(tmpl)) @@ -886,15 +874,13 @@ self.assertEqual('Resource RESUME failed: Exception: ' 'resources.AResource: foo', self.stack.status_reason) - self.m.VerifyAll() def test_suspend_timeout(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} - self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_suspend') exc = scheduler.Timeout('foo', 0) - generic_rsrc.GenericResource.handle_suspend().AndRaise(exc) - self.m.ReplayAll() + self.patchobject(generic_rsrc.GenericResource, 'handle_suspend', + side_effect=exc) self.stack = stack.Stack(self.ctx, 'suspend_test_fail_timeout', template.Template(tmpl)) @@ -909,15 +895,14 @@ self.assertEqual((self.stack.SUSPEND, self.stack.FAILED), self.stack.state) self.assertEqual('Suspend timed out', self.stack.status_reason) - self.m.VerifyAll() + generic_rsrc.GenericResource.handle_suspend.assert_called_once_with() 
def test_resume_timeout(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', 'Resources': {'AResource': {'Type': 'GenericResourceType'}}} - self.m.StubOutWithMock(generic_rsrc.GenericResource, 'handle_resume') exc = scheduler.Timeout('foo', 0) - generic_rsrc.GenericResource.handle_resume().AndRaise(exc) - self.m.ReplayAll() + self.patchobject(generic_rsrc.GenericResource, 'handle_resume', + side_effect=exc) self.stack = stack.Stack(self.ctx, 'resume_test_fail_timeout', template.Template(tmpl)) @@ -938,7 +923,7 @@ self.stack.state) self.assertEqual('Resume timed out', self.stack.status_reason) - self.m.VerifyAll() + generic_rsrc.GenericResource.handle_resume.assert_called_once_with() def _get_stack_to_check(self, name): tpl = {"HeatTemplateFormatVersion": "2012-12-12", @@ -1192,20 +1177,15 @@ template.Template(tmpl), disable_rollback=True) - self.m.StubOutWithMock(generic_rsrc.ResourceWithFnGetRefIdType, - 'handle_create') - self.m.StubOutWithMock(generic_rsrc.ResourceWithFnGetRefIdType, - 'handle_delete') - - # create - generic_rsrc.ResourceWithFnGetRefIdType.handle_create().AndRaise( - Exception) - - # update - generic_rsrc.ResourceWithFnGetRefIdType.handle_delete() - generic_rsrc.ResourceWithFnGetRefIdType.handle_create() + class FakeException(Exception): + # to avoid pep8 check + pass - self.m.ReplayAll() + mock_create = self.patchobject(generic_rsrc.ResourceWithFnGetRefIdType, + 'handle_create', + side_effect=[FakeException, None]) + mock_delete = self.patchobject(generic_rsrc.ResourceWithFnGetRefIdType, + 'handle_delete', return_value=None) self.stack.store() self.stack.create() @@ -1226,8 +1206,8 @@ self.assertEqual( 'ID-AResource', self.stack['BResource']._stored_properties_data['Foo']) - - self.m.VerifyAll() + mock_delete.assert_called_once_with() + self.assertEqual(2, mock_create.call_count) def test_create_bad_attribute(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', @@ -1241,14 +1221,10 @@ template.Template(tmpl), disable_rollback=True) - 
self.m.StubOutWithMock(generic_rsrc.ResourceWithProps, - '_update_stored_properties') - - generic_rsrc.ResourceWithProps._update_stored_properties().AndRaise( - exception.InvalidTemplateAttribute(resource='a', key='foo')) - - self.m.ReplayAll() - + self.patchobject(generic_rsrc.ResourceWithProps, + '_update_stored_properties', + side_effect=exception.InvalidTemplateAttribute( + resource='a', key='foo')) self.stack.store() self.stack.create() @@ -1256,32 +1232,26 @@ self.stack.state) self.assertEqual('Resource CREATE failed: The Referenced Attribute ' '(a foo) is incorrect.', self.stack.status_reason) - self.m.VerifyAll() def test_stack_create_timeout(self): - self.m.StubOutWithMock(scheduler.DependencyTaskGroup, '__call__') - self.m.StubOutWithMock(timeutils, 'wallclock') - - stk = stack.Stack(self.ctx, 's', self.tmpl) - def dummy_task(): while True: yield - start_time = time.time() - timeutils.wallclock().AndReturn(start_time) - timeutils.wallclock().AndReturn(start_time + 1) - scheduler.DependencyTaskGroup.__call__().AndReturn(dummy_task()) - timeutils.wallclock().AndReturn(start_time + stk.timeout_secs() + 1) + self.patchobject(scheduler.DependencyTaskGroup, '__call__', + return_value=dummy_task()) - self.m.ReplayAll() + stk = stack.Stack(self.ctx, 's', self.tmpl) + start_time = time.time() + self.patchobject(timeutils, 'wallclock', + side_effect=[start_time, start_time + 1, + start_time + stk.timeout_secs() + 1]) stk.create() self.assertEqual((stack.Stack.CREATE, stack.Stack.FAILED), stk.state) self.assertEqual('Create timed out', stk.status_reason) - - self.m.VerifyAll() + self.assertEqual(3, timeutils.wallclock.call_count) def test_stack_name_valid(self): stk = stack.Stack(self.ctx, 's', self.tmpl) @@ -1509,12 +1479,9 @@ """A user_creds entry is created on first stack store.""" cfg.CONF.set_override('deferred_auth_method', 'trusts') - self.m.StubOutWithMock(keystone.KeystoneClientPlugin, '_create') - keystone.KeystoneClientPlugin._create().AndReturn( - 
fake_ks.FakeKeystoneClient(user_id='auser123')) - keystone.KeystoneClientPlugin._create().AndReturn( - fake_ks.FakeKeystoneClient(user_id='auser123')) - self.m.ReplayAll() + self.patchobject(keystone.KeystoneClientPlugin, '_create', + return_value=fake_ks.FakeKeystoneClient( + user_id='auser123')) self.stack = stack.Stack(self.ctx, 'creds_stack', self.tmpl) self.stack.store() @@ -1548,6 +1515,7 @@ # Store again, ID should not change self.stack.store() self.assertEqual(user_creds_id, db_stack.user_creds_id) + keystone.KeystoneClientPlugin._create.assert_called_with() def test_backup_copies_user_creds_id(self): ctx_init = utils.dummy_context(user='my_user', @@ -1685,7 +1653,6 @@ def test_stack_user_project_id_constructor(self): self.stub_keystoneclient() - self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'user_project_init', self.tmpl, @@ -1698,11 +1665,9 @@ self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) - self.m.VerifyAll() def test_stack_user_project_id_setter(self): self.stub_keystoneclient() - self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'user_project_init', self.tmpl) self.stack.store() @@ -1715,11 +1680,9 @@ self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) - self.m.VerifyAll() def test_stack_user_project_id_create(self): self.stub_keystoneclient() - self.m.ReplayAll() self.stack = stack.Stack(self.ctx, 'user_project_init', self.tmpl) self.stack.store() @@ -1733,7 +1696,6 @@ self.stack.delete() self.assertEqual((stack.Stack.DELETE, stack.Stack.COMPLETE), self.stack.state) - self.m.VerifyAll() def test_preview_resources_returns_list_of_resource_previews(self): tmpl = {'HeatTemplateFormatVersion': '2012-12-12', @@ -1827,16 +1789,13 @@ # Mock objects so the query for flavors in server.FlavorConstraint # works for stack creation fc = fakes.FakeClient() - self.m.StubOutWithMock(nova.NovaClientPlugin, '_create') - 
nova.NovaClientPlugin._create().AndReturn(fc) + self.patchobject(nova.NovaClientPlugin, '_create', return_value=fc) - fc.flavors = self.m.CreateMockAnything() + fc.flavors = mock.Mock() flavor = collections.namedtuple("Flavor", ["id", "name"]) flavor.id = "1234" flavor.name = "dummy" - fc.flavors.get('1234').AndReturn(flavor) - - self.m.ReplayAll() + fc.flavors.get.return_value = flavor test_env = environment.Environment({'flavor': '1234'}) self.stack = stack.Stack(self.ctx, 'stack_with_custom_constraint', @@ -1847,18 +1806,12 @@ self.stack.create() stack_id = self.stack.id - self.m.VerifyAll() - self.assertEqual((stack.Stack.CREATE, stack.Stack.COMPLETE), self.stack.state) loaded_stack = stack.Stack.load(self.ctx, stack_id=self.stack.id) self.assertEqual(stack_id, loaded_stack.parameters['OS::stack_id']) - - # verify that fc.flavors.list() has not been called, i.e. verify that - # parameter value validation did not happen and FlavorConstraint was - # not invoked - self.m.VerifyAll() + fc.flavors.get.assert_called_once_with('1234') def test_snapshot_delete(self): snapshots = [] @@ -2844,12 +2797,9 @@ tmpl, disable_rollback=disable_rollback) self.stack.store() - self.m.ReplayAll() rb = self.stack._update_exception_handler(exc=exc, action=action) - self.m.VerifyAll() - return rb def test_update_exception_handler_resource_failure_no_rollback(self): diff -Nru heat-11.0.0~b1/heat/tests/test_template.py heat-11.0.0~b2/heat/tests/test_template.py --- heat-11.0.0~b1/heat/tests/test_template.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/test_template.py 2018-06-07 22:12:28.000000000 +0000 @@ -734,8 +734,9 @@ valid_versions = ['2013-05-23', '2014-10-16', '2015-04-30', '2015-10-15', '2016-04-08', '2016-10-14', '2017-02-24', '2017-09-01', - '2018-03-02', - 'newton', 'ocata', 'pike', 'queens'] + '2018-03-02', '2018-08-31', + 'newton', 'ocata', 'pike', + 'queens', 'rocky'] ex_error_msg = ('The template version is invalid: ' '"heat_template_version: 
2012-12-12". ' '"heat_template_version" should be one of: %s' diff -Nru heat-11.0.0~b1/heat/tests/test_vpc.py heat-11.0.0~b2/heat/tests/test_vpc.py --- heat-11.0.0~b1/heat/tests/test_vpc.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat/tests/test_vpc.py 2018-06-07 22:12:34.000000000 +0000 @@ -34,22 +34,38 @@ def setUp(self): super(VPCTestBase, self).setUp() - self.m.StubOutWithMock(neutronclient.Client, 'add_interface_router') - self.m.StubOutWithMock(neutronclient.Client, 'add_gateway_router') - self.m.StubOutWithMock(neutronclient.Client, 'create_network') + self.m_add_interface_router = self.patchobject( + neutronclient.Client, 'add_interface_router', + return_value=None) + self.m_add_gateway_router = self.patchobject( + neutronclient.Client, 'add_gateway_router') + self.m_create_network = self.patchobject(neutronclient.Client, + 'create_network') self.m.StubOutWithMock(neutronclient.Client, 'create_port') - self.m.StubOutWithMock(neutronclient.Client, 'create_router') - self.m.StubOutWithMock(neutronclient.Client, 'create_subnet') - self.m.StubOutWithMock(neutronclient.Client, 'delete_network') + self.m_create_router = self.patchobject(neutronclient.Client, + 'create_router') + self.m_create_subnet = self.patchobject(neutronclient.Client, + 'create_subnet') + self.m_delete_network = self.patchobject(neutronclient.Client, + 'delete_network', + return_value=None) self.m.StubOutWithMock(neutronclient.Client, 'delete_port') - self.m.StubOutWithMock(neutronclient.Client, 'delete_router') - self.m.StubOutWithMock(neutronclient.Client, 'delete_subnet') + self.m_delete_router = self.patchobject(neutronclient.Client, + 'delete_router', + return_value=None) + self.m_delete_subnet = self.patchobject(neutronclient.Client, + 'delete_subnet', + return_value=None) self.m.StubOutWithMock(neutronclient.Client, 'list_networks') - self.m.StubOutWithMock(neutronclient.Client, 'list_routers') + self.m_list_routers = self.patchobject(neutronclient.Client, + 
'list_routers') self.m.StubOutWithMock(neutronclient.Client, 'remove_gateway_router') - self.m.StubOutWithMock(neutronclient.Client, 'remove_interface_router') + self.m_remove_interface_router = self.patchobject( + neutronclient.Client, 'remove_interface_router', + return_value=None) self.m.StubOutWithMock(neutronclient.Client, 'show_subnet') - self.m.StubOutWithMock(neutronclient.Client, 'show_network') + self.m_show_network = self.patchobject(neutronclient.Client, + 'show_network') self.m.StubOutWithMock(neutronclient.Client, 'show_port') self.m.StubOutWithMock(neutronclient.Client, 'show_router') self.m.StubOutWithMock(neutronclient.Client, 'create_security_group') @@ -75,12 +91,16 @@ stack.store() return stack + def validate_mock_create_network(self): + self.m_show_network.assert_called_with('aaaa') + self.m_create_network.assert_called_once_with({ + 'network': {'name': self.vpc_name}}) + self.m_create_router.assert_called_once() + def mock_create_network(self): self.vpc_name = utils.PhysName('test_stack', 'the_vpc') - neutronclient.Client.create_network( - { - 'network': {'name': self.vpc_name} - }).AndReturn({'network': { + self.m_create_network.return_value = { + 'network': { 'status': 'BUILD', 'subnets': [], 'name': 'name', @@ -88,70 +108,51 @@ 'shared': False, 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', 'id': 'aaaa' - }}) - neutronclient.Client.show_network( - 'aaaa' - ).AndReturn({"network": { - "status": "BUILD", - "subnets": [], - "name": self.vpc_name, - "admin_state_up": False, - "shared": False, - "tenant_id": "c1210485b2424d48804aad5d39c61b8f", - "id": "aaaa" - }}) - - neutronclient.Client.show_network( - 'aaaa' - ).MultipleTimes().AndReturn({"network": { - "status": "ACTIVE", - "subnets": [], - "name": self.vpc_name, - "admin_state_up": False, - "shared": False, - "tenant_id": "c1210485b2424d48804aad5d39c61b8f", - "id": "aaaa" - }}) - neutronclient.Client.create_router( - {'router': {'name': self.vpc_name}}).AndReturn({ - 'router': { - 
'status': 'BUILD', - 'name': self.vpc_name, - 'admin_state_up': True, - 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', - 'id': 'bbbb' - }}) - neutronclient.Client.list_routers(name=self.vpc_name).AndReturn({ - "routers": [{ + }} + show_network_returns = [ + {"network": { "status": "BUILD", - "external_gateway_info": None, + "subnets": [], "name": self.vpc_name, - "admin_state_up": True, - "tenant_id": "3e21026f2dc94372b105808c0e721661", - "routes": [], - "id": "bbbb" - }] - }) + "admin_state_up": False, + "shared": False, + "tenant_id": "c1210485b2424d48804aad5d39c61b8f", + "id": "aaaa" + }}] + for i in range(3): + show_network_returns.append( + {"network": { + "status": "ACTIVE", + "subnets": [], + "name": self.vpc_name, + "admin_state_up": False, + "shared": False, + "tenant_id": "c1210485b2424d48804aad5d39c61b8f", + "id": "aaaa" + }}) + self.m_show_network.side_effect = show_network_returns + + self.m_create_router.return_value = { + 'router': { + 'status': 'BUILD', + 'name': self.vpc_name, + 'admin_state_up': True, + 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', + 'id': 'bbbb' + }} self.mock_router_for_vpc() def mock_create_subnet(self): self.subnet_name = utils.PhysName('test_stack', 'the_subnet') - neutronclient.Client.create_subnet( - {'subnet': { - 'network_id': u'aaaa', - 'cidr': u'10.0.0.0/24', - 'ip_version': 4, - 'name': self.subnet_name}}).AndReturn({ - 'subnet': { - 'status': 'ACTIVE', - 'name': self.subnet_name, - 'admin_state_up': True, - 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', - 'id': 'cccc'}}) + self.m_create_subnet.return_value = { + 'subnet': { + 'status': 'ACTIVE', + 'name': self.subnet_name, + 'admin_state_up': True, + 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', + 'id': 'cccc'}} + self.mock_router_for_vpc() - neutronclient.Client.add_interface_router( - u'bbbb', - {'subnet_id': 'cccc'}).AndReturn(None) def mock_show_subnet(self): neutronclient.Client.show_subnet('cccc').AndReturn({ @@ -249,7 +250,7 @@ 
'0389f747-7785-4757-b7bb-2ab07e4b09c3').AndReturn(None) def mock_router_for_vpc(self): - neutronclient.Client.list_routers(name=self.vpc_name).AndReturn({ + self.m_list_routers.return_value = { "routers": [{ "status": "ACTIVE", "external_gateway_info": { @@ -261,32 +262,23 @@ "routes": [], "id": "bbbb" }] - }) - - def mock_delete_network(self): - self.mock_router_for_vpc() - neutronclient.Client.delete_router('bbbb').AndReturn(None) - neutronclient.Client.delete_network('aaaa').AndReturn(None) + } def mock_delete_subnet(self): + # TODO(ricolin) remove this func once we all move to mock self.mock_router_for_vpc() - neutronclient.Client.remove_interface_router( - u'bbbb', - {'subnet_id': 'cccc'}).AndReturn(None) - neutronclient.Client.delete_subnet('cccc').AndReturn(None) def mock_create_route_table(self): self.rt_name = utils.PhysName('test_stack', 'the_route_table') - neutronclient.Client.create_router({ - 'router': {'name': self.rt_name}}).AndReturn({ - 'router': { - 'status': 'BUILD', - 'name': self.rt_name, - 'admin_state_up': True, - 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', - 'id': 'ffff' - } - }) + self.m_create_router.return_value = { + 'router': { + 'status': 'BUILD', + 'name': self.rt_name, + 'admin_state_up': True, + 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', + 'id': 'ffff' + } + } neutronclient.Client.show_router('ffff').AndReturn({ 'router': { 'status': 'BUILD', @@ -306,31 +298,18 @@ } }) self.mock_router_for_vpc() - neutronclient.Client.add_gateway_router( - 'ffff', {'network_id': 'zzzz'}).AndReturn(None) def mock_create_association(self): + # TODO(ricolin) merge mock_create_association and + # mock_delete_association func once we all move to mock self.mock_show_subnet() self.mock_router_for_vpc() - neutronclient.Client.remove_interface_router( - 'bbbb', - {'subnet_id': u'cccc'}).AndReturn(None) - neutronclient.Client.add_interface_router( - u'ffff', - {'subnet_id': 'cccc'}).AndReturn(None) def mock_delete_association(self): 
self.mock_show_subnet() self.mock_router_for_vpc() - neutronclient.Client.remove_interface_router( - 'ffff', - {'subnet_id': u'cccc'}).AndReturn(None) - neutronclient.Client.add_interface_router( - u'bbbb', - {'subnet_id': 'cccc'}).AndReturn(None) def mock_delete_route_table(self): - neutronclient.Client.delete_router('ffff').AndReturn(None) neutronclient.Client.remove_gateway_router('ffff').AndReturn(None) def assertResourceState(self, resource, ref_id): @@ -351,34 +330,37 @@ def mock_create_network_failed(self): self.vpc_name = utils.PhysName('test_stack', 'the_vpc') - neutronclient.Client.create_network( - { - 'network': {'name': self.vpc_name} - }).AndRaise(neutron_exc.NeutronClientException()) + self.m_create_network.side_effect = neutron_exc.NeutronClientException def test_vpc(self): self.mock_create_network() - self.mock_delete_network() - self.m.ReplayAll() stack = self.create_stack(self.test_template) vpc = stack['the_vpc'] self.assertResourceState(vpc, 'aaaa') + self.validate_mock_create_network() + self.assertEqual(3, self.m_show_network.call_count) scheduler.TaskRunner(vpc.delete)() - self.m.VerifyAll() + + self.m_show_network.assert_called_with('aaaa') + self.assertEqual(4, self.m_show_network.call_count) + self.assertEqual(2, self.m_list_routers.call_count) + self.m_list_routers.assert_called_with(name=self.vpc_name) + self.m_delete_router.assert_called_once_with('bbbb') + self.m_delete_network.assert_called_once_with('aaaa') def test_vpc_delete_successful_if_created_failed(self): self.mock_create_network_failed() - self.m.ReplayAll() t = template_format.parse(self.test_template) stack = self.parse_stack(t) scheduler.TaskRunner(stack.create)() self.assertEqual((stack.CREATE, stack.FAILED), stack.state) + self.m_create_network.assert_called_once_with( + {'network': {'name': self.vpc_name}}) scheduler.TaskRunner(stack.delete)() - - self.m.VerifyAll() + self.m_delete_network.assert_not_called() class SubnetTest(VPCTestBase): @@ -401,59 +383,65 @@ 
self.mock_create_network() self.mock_create_subnet() self.mock_delete_subnet() - self.mock_delete_network() # mock delete subnet which is already deleted self.mock_router_for_vpc() - neutronclient.Client.remove_interface_router( - u'bbbb', - {'subnet_id': 'cccc'}).AndRaise( - neutron_exc.NeutronClientException(status_code=404)) - neutronclient.Client.delete_subnet('cccc').AndRaise( - neutron_exc.NeutronClientException(status_code=404)) + exc = neutron_exc.NeutronClientException(status_code=404) + self.m_remove_interface_router.side_effect = exc + self.m_delete_subnet.side_effect = neutron_exc.NeutronClientException( + status_code=404) - self.m.ReplayAll() stack = self.create_stack(self.test_template) subnet = stack['the_subnet'] + self.assertResourceState(subnet, 'cccc') + self.m_list_routers.assert_called_with(name=self.vpc_name) + + self.validate_mock_create_network() + self.m_add_interface_router.assert_called_once_with( + u'bbbb', {'subnet_id': 'cccc'}) + self.m_create_subnet.assert_called_once_with( + {'subnet': { + 'network_id': u'aaaa', + 'cidr': u'10.0.0.0/24', + 'ip_version': 4, + 'name': self.subnet_name}}) + self.assertEqual(4, self.m_show_network.call_count) self.assertRaises( exception.InvalidTemplateAttribute, subnet.FnGetAtt, 'Foo') - self.assertEqual('moon', subnet.FnGetAtt('AvailabilityZone')) scheduler.TaskRunner(subnet.delete)() subnet.state_set(subnet.CREATE, subnet.COMPLETE, 'to delete again') scheduler.TaskRunner(subnet.delete)() scheduler.TaskRunner(stack['the_vpc'].delete)() - self.m.VerifyAll() + + self.m_show_network.assert_called_with('aaaa') + self.m_list_routers.assert_called_with(name=self.vpc_name) + self.assertEqual(2, self.m_list_routers.call_count) + + self.assertEqual(7, self.m_show_network.call_count) def _mock_create_subnet_failed(self, stack_name): self.subnet_name = utils.PhysName(stack_name, 'the_subnet') - neutronclient.Client.create_subnet( - {'subnet': { - 'network_id': u'aaaa', - 'cidr': u'10.0.0.0/24', - 'ip_version': 4, 
- 'name': self.subnet_name}}).AndReturn({ - 'subnet': { - 'status': 'ACTIVE', - 'name': self.subnet_name, - 'admin_state_up': True, - 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', - 'id': 'cccc'}}) + self.m_create_subnet.return_value = { + 'subnet': { + 'status': 'ACTIVE', + 'name': self.subnet_name, + 'admin_state_up': True, + 'tenant_id': 'c1210485b2424d48804aad5d39c61b8f', + 'id': 'cccc'}} - neutronclient.Client.show_network('aaaa').MultipleTimes().AndRaise( - neutron_exc.NeutronClientException(status_code=404)) + self.m_show_network.side_effect = neutron_exc.NeutronClientException( + status_code=404) def test_create_failed_delete_success(self): stack_name = 'test_subnet_' self._mock_create_subnet_failed(stack_name) - neutronclient.Client.delete_subnet('cccc').AndReturn(None) - self.m.ReplayAll() t = template_format.parse(self.test_template) tmpl = template.Template(t) @@ -470,9 +458,20 @@ self.assertEqual((rsrc.CREATE, rsrc.FAILED), rsrc.state) ref_id = rsrc.FnGetRefId() self.assertEqual(u'cccc', ref_id) + + self.m_create_subnet.assert_called_once_with( + {'subnet': { + 'network_id': u'aaaa', + 'cidr': u'10.0.0.0/24', + 'ip_version': 4, + 'name': self.subnet_name}}) + self.assertEqual(1, self.m_show_network.call_count) + self.assertIsNone(scheduler.TaskRunner(rsrc.delete)()) self.assertEqual((rsrc.DELETE, rsrc.COMPLETE), rsrc.state) - self.m.VerifyAll() + + self.assertEqual(2, self.m_show_network.call_count) + self.m_delete_subnet.assert_called_once_with('cccc') class NetworkInterfaceTest(VPCTestBase): @@ -651,7 +650,6 @@ self.mock_show_network_interface() self.mock_delete_network_interface() self.mock_delete_subnet() - self.mock_delete_network() self.mock_delete_security_group() self.m.ReplayAll() @@ -678,7 +676,6 @@ self.mock_create_network_interface() self.mock_delete_network_interface() self.mock_delete_subnet() - self.mock_delete_network() self.mock_delete_security_group() self.m.ReplayAll() @@ -701,7 +698,6 @@ 
self.mock_create_network_interface(security_groups=None) self.mock_delete_network_interface() self.mock_delete_subnet() - self.mock_delete_network() self.m.ReplayAll() @@ -763,11 +759,6 @@ 'id': '0389f747-7785-4757-b7bb-2ab07e4b09c3' }]}) - def mock_create_gateway_attachment(self): - neutronclient.Client.add_gateway_router( - 'ffff', {'network_id': '0389f747-7785-4757-b7bb-2ab07e4b09c3'} - ).AndReturn(None) - def mock_delete_gateway_attachment(self): neutronclient.Client.remove_gateway_router('ffff').AndReturn(None) @@ -778,19 +769,23 @@ self.mock_create_route_table() self.stub_SubnetConstraint_validate() self.mock_create_association() - self.mock_create_gateway_attachment() self.mock_delete_gateway_attachment() self.mock_delete_association() self.mock_delete_route_table() self.mock_delete_subnet() - self.mock_delete_network() self.m.ReplayAll() stack = self.create_stack(self.test_template) + self.m_create_router.assert_called_with( + {'router': {'name': self.rt_name}}) + self.m_add_interface_router.assert_called_once_with( + u'bbbb', {'subnet_id': 'cccc'}) gateway = stack['the_gateway'] self.assertResourceState(gateway, gateway.physical_resource_name()) + self.m_add_gateway_router.assert_called_once_with( + 'ffff', {'network_id': '0389f747-7785-4757-b7bb-2ab07e4b09c3'}) attachment = stack['the_attachment'] self.assertResourceState(attachment, 'the_attachment') @@ -799,6 +794,10 @@ self.assertEqual([route_table], list(attachment._vpc_route_tables())) stack.delete() + self.m_remove_interface_router.assert_called_with( + 'ffff', + {'subnet_id': u'cccc'}) + self.m_delete_router.assert_called_once_with('ffff') self.m.VerifyAll() @@ -837,11 +836,14 @@ self.mock_delete_association() self.mock_delete_route_table() self.mock_delete_subnet() - self.mock_delete_network() self.m.ReplayAll() stack = self.create_stack(self.test_template) + self.m_create_router.assert_called_with( + {'router': {'name': self.rt_name}}) + self.m_add_interface_router.assert_called_once_with( + 
u'bbbb', {'subnet_id': 'cccc'}) route_table = stack['the_route_table'] self.assertResourceState(route_table, 'ffff') @@ -853,4 +855,8 @@ scheduler.TaskRunner(route_table.delete)() stack.delete() + self.m_remove_interface_router.assert_called_with( + 'ffff', + {'subnet_id': u'cccc'}) + self.m_delete_router.assert_called_once_with('ffff') self.m.VerifyAll() diff -Nru heat-11.0.0~b1/heat.egg-info/entry_points.txt heat-11.0.0~b2/heat.egg-info/entry_points.txt --- heat-11.0.0~b1/heat.egg-info/entry_points.txt 2018-04-19 19:39:43.000000000 +0000 +++ heat-11.0.0~b2/heat.egg-info/entry_points.txt 2018-06-07 22:15:39.000000000 +0000 @@ -124,10 +124,12 @@ heat_template_version.2017-02-24 = heat.engine.hot.template:HOTemplate20170224 heat_template_version.2017-09-01 = heat.engine.hot.template:HOTemplate20170901 heat_template_version.2018-03-02 = heat.engine.hot.template:HOTemplate20180302 +heat_template_version.2018-08-31 = heat.engine.hot.template:HOTemplate20180831 heat_template_version.newton = heat.engine.hot.template:HOTemplate20161014 heat_template_version.ocata = heat.engine.hot.template:HOTemplate20170224 heat_template_version.pike = heat.engine.hot.template:HOTemplate20170901 heat_template_version.queens = heat.engine.hot.template:HOTemplate20180302 +heat_template_version.rocky = heat.engine.hot.template:HOTemplate20180831 [oslo.config.opts] heat.api.aws.ec2token = heat.api.aws.ec2token:list_opts diff -Nru heat-11.0.0~b1/heat.egg-info/pbr.json heat-11.0.0~b2/heat.egg-info/pbr.json --- heat-11.0.0~b1/heat.egg-info/pbr.json 2018-04-19 19:39:43.000000000 +0000 +++ heat-11.0.0~b2/heat.egg-info/pbr.json 2018-06-07 22:15:39.000000000 +0000 @@ -1 +1 @@ -{"git_version": "dd93b23", "is_release": true} \ No newline at end of file +{"git_version": "8f14f69", "is_release": true} \ No newline at end of file diff -Nru heat-11.0.0~b1/heat.egg-info/PKG-INFO heat-11.0.0~b2/heat.egg-info/PKG-INFO --- heat-11.0.0~b1/heat.egg-info/PKG-INFO 2018-04-19 19:39:43.000000000 +0000 +++ 
heat-11.0.0~b2/heat.egg-info/PKG-INFO 2018-06-07 22:15:39.000000000 +0000 @@ -1,8 +1,8 @@ Metadata-Version: 1.1 Name: heat -Version: 11.0.0.0b1 +Version: 11.0.0.0b2 Summary: OpenStack Orchestration -Home-page: http://docs.openstack.org/developer/heat/ +Home-page: https://docs.openstack.org/heat/latest/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN @@ -10,8 +10,8 @@ Team and repository tags ======================== - .. image:: http://governance.openstack.org/badges/heat.svg - :target: http://governance.openstack.org/reference/tags/index.html + .. image:: https://governance.openstack.org/tc/badges/heat.svg + :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on @@ -33,31 +33,32 @@ git clone https://git.openstack.org/openstack/heat - * Wiki: http://wiki.openstack.org/Heat - * Developer docs: http://docs.openstack.org/heat/latest + * Documentation: https://docs.openstack.org/heat/latest * Template samples: https://git.openstack.org/cgit/openstack/heat-templates * Agents: https://git.openstack.org/cgit/openstack/heat-agents Python client ------------- - https://git.openstack.org/cgit/openstack/python-heatclient + + * Documentation: https://docs.openstack.org/python-heatclient/latest + * Source: https://git.openstack.org/cgit/openstack/python-heatclient References ---------- - * http://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html - * http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/create-stack.html - * http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html - * http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca + * https://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html + * https://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/create-stack.html + * 
https://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html + * https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca We have integration with ------------------------ * https://git.openstack.org/cgit/openstack/python-novaclient (instance) * https://git.openstack.org/cgit/openstack/python-keystoneclient (auth) - * https://git.openstack.org/cgit/openstack/python-swiftclient (s3) + * https://git.openstack.org/cgit/openstack/python-swiftclient (object storage) * https://git.openstack.org/cgit/openstack/python-neutronclient (networking) * https://git.openstack.org/cgit/openstack/python-ceilometerclient (metering) * https://git.openstack.org/cgit/openstack/python-aodhclient (alarming service) - * https://git.openstack.org/cgit/openstack/python-cinderclient (storage service) + * https://git.openstack.org/cgit/openstack/python-cinderclient (block storage) * https://git.openstack.org/cgit/openstack/python-glanceclient (image service) * https://git.openstack.org/cgit/openstack/python-troveclient (database as a Service) * https://git.openstack.org/cgit/openstack/python-saharaclient (hadoop cluster) @@ -68,6 +69,7 @@ * https://git.openstack.org/cgit/openstack/python-mistralclient (workflow service) * https://git.openstack.org/cgit/openstack/python-zaqarclient (messaging service) * https://git.openstack.org/cgit/openstack/python-monascaclient (monitoring service) + * https://git.openstack.org/cgit/openstack/python-zunclient (container management service) Platform: UNKNOWN diff -Nru heat-11.0.0~b1/heat.egg-info/requires.txt heat-11.0.0~b2/heat.egg-info/requires.txt --- heat-11.0.0~b1/heat.egg-info/requires.txt 2018-04-19 19:39:43.000000000 +0000 +++ heat-11.0.0~b2/heat.egg-info/requires.txt 2018-06-07 22:15:39.000000000 +0000 @@ -2,7 +2,7 @@ Babel!=2.4.0,>=2.3.4 croniter>=0.3.4 cryptography>=2.1 -eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 +eventlet!=0.18.3,!=0.20.1,>=0.18.2 keystoneauth1>=3.4.0 keystonemiddleware>=4.17.0 
lxml!=3.7.0,>=3.4.1 diff -Nru heat-11.0.0~b1/heat.egg-info/SOURCES.txt heat-11.0.0~b2/heat.egg-info/SOURCES.txt --- heat-11.0.0~b1/heat.egg-info/SOURCES.txt 2018-04-19 19:39:44.000000000 +0000 +++ heat-11.0.0~b2/heat.egg-info/SOURCES.txt 2018-06-07 22:15:42.000000000 +0000 @@ -859,6 +859,7 @@ heat/tests/convergence/scenarios/update_replace_rollback.py heat/tests/convergence/scenarios/update_user_replace.py heat/tests/convergence/scenarios/update_user_replace_rollback.py +heat/tests/convergence/scenarios/update_user_replace_rollback_update.py heat/tests/db/__init__.py heat/tests/db/test_migrations.py heat/tests/db/test_sqlalchemy_api.py @@ -1046,7 +1047,6 @@ heat_integrationtests/__init__.py heat_integrationtests/cleanup_test_env.sh heat_integrationtests/config-generator.conf -heat_integrationtests/install-requirements heat_integrationtests/post_test_hook.sh heat_integrationtests/pre_test_hook.sh heat_integrationtests/prepare_test_env.sh @@ -1094,6 +1094,7 @@ heat_integrationtests/locale/ko_KR/LC_MESSAGES/heat_integrationtests.po heat_upgradetests/post_test_hook.sh heat_upgradetests/pre_test_hook.sh +playbooks/get_amphora_tarball.yaml playbooks/devstack/functional/post.yaml playbooks/devstack/functional/run.yaml playbooks/devstack/grenade/run.yaml diff -Nru heat-11.0.0~b1/heat_integrationtests/functional/test_resource_group.py heat-11.0.0~b2/heat_integrationtests/functional/test_resource_group.py --- heat-11.0.0~b1/heat_integrationtests/functional/test_resource_group.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat_integrationtests/functional/test_resource_group.py 2018-06-07 22:12:28.000000000 +0000 @@ -612,6 +612,36 @@ created=10, deleted=10) + def test_resource_group_update_replace_template_changed(self): + """Test rolling update(replace)with child template path changed. + + Simple rolling update replace with child template path changed. 
+ """ + + nested_templ = ''' +heat_template_version: "2013-05-23" +resources: + oops: + type: OS::Heat::TestResource +''' + + create_template = yaml.safe_load(copy.deepcopy(self.template)) + grp = create_template['resources']['random_group'] + grp['properties']['resource_def'] = {'type': '/opt/provider.yaml'} + files = {'/opt/provider.yaml': nested_templ} + + policy = grp['update_policy']['rolling_update'] + policy['min_in_service'] = '1' + policy['max_batch_size'] = '3' + stack_identifier = self.stack_create(template=create_template, + files=files) + update_template = create_template.copy() + grp = update_template['resources']['random_group'] + grp['properties']['resource_def'] = {'type': '/opt1/provider.yaml'} + files = {'/opt1/provider.yaml': nested_templ} + + self.update_stack(stack_identifier, update_template, files=files) + def test_resource_group_update_scaledown(self): """Test rolling update with scaledown. diff -Nru heat-11.0.0~b1/heat_integrationtests/functional/test_template_versions.py heat-11.0.0~b2/heat_integrationtests/functional/test_template_versions.py --- heat-11.0.0~b1/heat_integrationtests/functional/test_template_versions.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat_integrationtests/functional/test_template_versions.py 2018-06-07 22:12:28.000000000 +0000 @@ -25,7 +25,8 @@ "2016-04-08", "2016-10-14", "newton", "2017-02-24", "ocata", "2017-09-01", "pike", - "2018-03-02", "queens"] + "2018-03-02", "queens", + "2018-08-31", "rocky"] for template in template_versions: self.assertIn(template.version.split(".")[1], supported_template_versions) diff -Nru heat-11.0.0~b1/heat_integrationtests/functional/test_update_restricted.py heat-11.0.0~b2/heat_integrationtests/functional/test_update_restricted.py --- heat-11.0.0~b1/heat_integrationtests/functional/test_update_restricted.py 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat_integrationtests/functional/test_update_restricted.py 2018-06-07 22:12:28.000000000 +0000 @@ -10,6 
+10,7 @@ # License for the specific language governing permissions and limitations # under the License. +import copy import time from heat_integrationtests.functional import functional_base @@ -57,7 +58,7 @@ def test_update(self): stack_identifier = self.stack_create(template=test_template) - update_template = test_template.copy() + update_template = copy.deepcopy(test_template) props = update_template['resources']['bar']['properties'] props['value'] = '4567' @@ -94,8 +95,9 @@ def test_replace(self): stack_identifier = self.stack_create(template=test_template) - update_template = test_template.copy() + update_template = copy.deepcopy(test_template) props = update_template['resources']['bar']['properties'] + props['value'] = '4567' props['update_replace'] = True # check replace fails - with 'both' restricted @@ -131,7 +133,7 @@ def test_update_type_changed(self): stack_identifier = self.stack_create(template=test_template) - update_template = test_template.copy() + update_template = copy.deepcopy(test_template) rsrc = update_template['resources']['bar'] rsrc['type'] = 'OS::Heat::None' diff -Nru heat-11.0.0~b1/heat_integrationtests/install-requirements heat-11.0.0~b2/heat_integrationtests/install-requirements --- heat-11.0.0~b1/heat_integrationtests/install-requirements 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat_integrationtests/install-requirements 1970-01-01 00:00:00.000000000 +0000 @@ -1,66 +0,0 @@ -#!/usr/bin/env python -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
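The `test_update_restricted.py` hunks above replace `test_template.copy()` with `copy.deepcopy(test_template)`. `dict.copy()` is shallow: the nested `resources`/`properties` dicts are shared with the original, so mutating the "copy" also mutated the module-level template and leaked state between tests. A standalone illustration (the nested dict here is a stand-in, not heat's actual template):

```python
import copy

# A nested dict standing in for a parsed template (illustrative only).
template = {'resources': {'bar': {'properties': {'value': '1234'}}}}

shallow = template.copy()             # top level copied; nested dicts shared
shallow['resources']['bar']['properties']['value'] = '4567'
leaked = template['resources']['bar']['properties']['value']  # mutated too

template = {'resources': {'bar': {'properties': {'value': '1234'}}}}
deep = copy.deepcopy(template)        # fully independent copy
deep['resources']['bar']['properties']['value'] = '4567'
intact = template['resources']['bar']['properties']['value']  # unchanged
```

This is why the diff also has to re-add `import copy` at the top of the test module.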
See the -# License for the specific language governing permissions and limitations -# under the License. - -"""Generate and install requirements from stub file and source files.""" - -import argparse -import subprocess -import sys - -import pip - -parser = argparse.ArgumentParser(description='Sync requirements.') -parser.add_argument('--stub', metavar='STUBFILE', - required=True, - help="File with requirements stubs.") -parser.add_argument('--source', metavar='SOURCE', - required=True, action='append', - help="Source file to sync requirements from. " - "May be supplied several times.") -parser.add_argument('-t', '--target', metavar='TARGET', - required=True, - help="Target file to write synced requirements to.") -parser.add_argument('pipopts', metavar='PIP OPTIONS', - nargs=argparse.REMAINDER, - help='Options to pass to "pip install".') - -args = parser.parse_args() - -sources = {} -for requirements_file in args.source: - rqs = pip.req.req_file.parse_requirements(requirements_file, - session=False) - sources.update({s.name: s for s in rqs}) -stubs = list(pip.req.req_file.parse_requirements(args.stub, - session=False)) -reqs = [] -for r in stubs: - if r.name in sources: - # safe-guard for future additions to stub file - if r.specifier: - sys.exit("ERROR: package '%(pkg)s' in stub file %(stub)s " - "has version specified but is also present " - "in provided sources requirements. " - "Please remove version from the stub file." 
% { - 'pkg': r.name, 'stub': args.stub}) - reqs.append(sources[r.name]) - else: - reqs.append(r) - -with open(args.target, 'w') as target: - target.write('\n'.join([str(r.req) for r in reqs])) - -pip_install = ['pip', 'install', '-r', args.target] -pip_install.extend(args.pipopts) - -sys.exit(subprocess.call(pip_install)) diff -Nru heat-11.0.0~b1/heat_integrationtests/prepare_test_env.sh heat-11.0.0~b2/heat_integrationtests/prepare_test_env.sh --- heat-11.0.0~b1/heat_integrationtests/prepare_test_env.sh 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat_integrationtests/prepare_test_env.sh 2018-06-07 22:12:34.000000000 +0000 @@ -50,6 +50,7 @@ iniset $conf_file heat_plugin image_ref Fedora-Cloud-Base-27-1.6.x86_64 iniset $conf_file heat_plugin minimal_image_ref cirros-0.3.5-x86_64-disk + iniset $conf_file heat_plugin hidden_stack_tag hidden if [ "$DISABLE_CONVERGENCE" == "true" ]; then iniset $conf_file heat_plugin convergence_engine_enabled false @@ -78,8 +79,12 @@ # Skip VolumeBackupRestoreIntegrationTest skipped until failure rate can be reduced ref bug #1382300 # Skip test_server_signal_userdata_format_software_config is skipped untill bug #1651768 is resolved - iniset $conf_file heat_plugin skip_scenario_test_list 'SoftwareConfigIntegrationTest, VolumeBackupRestoreIntegrationTest' - iniset $conf_file heat_plugin skip_functional_test_list '' + # Skip AutoscalingLoadBalancerTest and AutoscalingLoadBalancerv2Test as deprecated neutron-lbaas service is not enabled + iniset $conf_file heat_plugin skip_scenario_test_list 'AutoscalingLoadBalancerTest, AutoscalingLoadBalancerv2Test, \ + SoftwareConfigIntegrationTest, VolumeBackupRestoreIntegrationTest' + + # Skip LoadBalancerv2Test as deprecated neutron-lbaas service is not enabled + iniset $conf_file heat_plugin skip_functional_test_list 'LoadBalancerv2Test' cat $conf_file } diff -Nru heat-11.0.0~b1/heat_integrationtests/pre_test_hook.sh heat-11.0.0~b2/heat_integrationtests/pre_test_hook.sh --- 
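The deleted `install-requirements` script above depended on `pip.req.req_file.parse_requirements()`, a private pip API that disappeared when pip 10 moved its internals under `pip._internal`, which makes its removal unsurprising. A rough stdlib-only sketch of the name-to-line mapping the script built from each source file (simplified: unlike pip's parser it ignores `-r` includes, environment markers, and URL requirements):

```python
import re

def requirement_names(text):
    """Map requirement name -> full line for requirements-style text.

    Simplified stand-in for pip's private parse_requirements(); skips
    comments, blank lines, and option lines starting with '-'.
    """
    reqs = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # strip trailing comments
        if not line or line.startswith('-'):
            continue
        name = re.split(r'[<>=!~;\s\[]', line, maxsplit=1)[0]
        reqs[name] = line
    return reqs

sample = """\
Babel!=2.4.0,>=2.3.4  # BSD
croniter>=0.3.4
# a comment line
eventlet!=0.18.3,!=0.20.1,>=0.18.2
"""
names = requirement_names(sample)
```

The removed script used this kind of mapping to override stub entries with the pinned versions found in the source requirements files.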
heat-11.0.0~b1/heat_integrationtests/pre_test_hook.sh 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/heat_integrationtests/pre_test_hook.sh 2018-06-07 22:12:34.000000000 +0000 @@ -51,9 +51,3 @@ echo "CEILOMETER_PIPELINE_INTERVAL=60" >> $localconf echo "HEAT_ENABLE_ADOPT_ABANDON=True" >> $localconf -# Use the lbaas v2 namespace driver for devstack integration testing since -# octavia uses nested vms. -if [[ $OVERRIDE_ENABLED_SERVICES =~ "q-lbaasv2" ]] -then - echo "NEUTRON_LBAAS_SERVICE_PROVIDERV2=LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default" >> $localconf -fi diff -Nru heat-11.0.0~b1/lower-constraints.txt heat-11.0.0~b2/lower-constraints.txt --- heat-11.0.0~b1/lower-constraints.txt 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/lower-constraints.txt 2018-06-07 22:12:28.000000000 +0000 @@ -1,10 +1,10 @@ alembic==0.9.8 amqp==2.2.2 -aodhclient==1.0.0 +aodhclient==0.9.0 appdirs==1.4.3 asn1crypto==0.24.0 -Babel==2.5.3 -bandit==1.4.0 +Babel==2.3.4 +bandit==1.1.0 bcrypt==3.1.4 cachetools==2.0.1 certifi==2018.1.18 @@ -13,17 +13,17 @@ cliff==2.11.0 cmd2==0.8.1 contextlib2==0.5.5 -coverage==4.5.1 -croniter==0.3.20 -cryptography==2.1.4 +coverage==4.0 +croniter==0.3.4 +cryptography==2.1 debtcollector==1.19.0 decorator==4.2.1 deprecation==2.0 -docker==3.1.1 docker-pycreds==0.2.2 +docker==3.1.1 dogpile.cache==0.6.5 enum-compat==0.0.2 -eventlet==0.20.0 +eventlet==0.18.2 extras==1.0.0 fasteners==0.14.1 fixtures==3.0.0 @@ -42,118 +42,119 @@ jsonpointer==2.0 jsonschema==2.6.0 keystoneauth1==3.4.0 -keystonemiddleware==4.21.0 -kombu==4.1.0 +keystonemiddleware==4.17.0 +kombu==4.0.0 linecache2==1.0.0 -lxml==4.1.1 +lxml==3.4.1 Mako==1.0.7 MarkupSafe==1.0 mccabe==0.2.1 mock==2.0.0 monotonic==1.4 -mox3==0.25.0 +mox3==0.20.0 msgpack==0.5.6 munch==2.2.0 -netaddr==0.7.19 +netaddr==0.7.18 netifaces==0.10.6 -openstacksdk==0.12.0 +openstacksdk==0.11.2 os-client-config==1.29.0 os-service-types==1.2.0 os-testr==1.0.0 
osc-lib==1.10.0 -oslo.cache==1.29.0 +oslo.cache==1.26.0 oslo.concurrency==3.26.0 oslo.config==5.2.0 -oslo.context==2.20.0 -oslo.db==4.35.0 -oslo.i18n==3.20.0 -oslo.log==3.37.0 -oslo.messaging==5.36.0 -oslo.middleware==3.35.0 -oslo.policy==1.34.0 -oslo.reports==1.27.0 -oslo.serialization==2.25.0 -oslo.service==1.30.0 -oslo.utils==3.36.0 -oslo.versionedobjects==1.32.0 -oslotest==3.3.0 -osprofiler==2.0.0 +oslo.context==2.19.2 +oslo.db==4.27.0 +oslo.i18n==3.15.3 +oslo.log==3.36.0 +oslo.messaging==5.29.0 +oslo.middleware==3.31.0 +oslo.policy==1.30.0 +oslo.reports==1.18.0 +oslo.serialization==2.18.0 +oslo.service==1.24.0 +oslo.utils==3.33.0 +oslo.versionedobjects==1.31.2 +oslotest==3.2.0 +osprofiler==1.4.0 packaging==17.1 paramiko==2.4.1 Paste==2.0.3 -PasteDeploy==1.5.2 -pbr==3.1.1 +PasteDeploy==1.5.0 +pbr==2.0.0 pep8==1.5.7 -pika==0.10.0 pika-pool==0.1.3 +pika==0.10.0 ply==3.11 prettytable==0.7.2 psutil==5.4.3 -psycopg2==2.7.4 +psycopg2==2.6.2 pyasn1==0.4.2 pycadf==2.7.0 pycparser==2.18 pyflakes==0.8.1 pyinotify==0.9.6 -PyMySQL==0.8.0 +PyMySQL==0.7.6 PyNaCl==1.2.1 pyOpenSSL==17.5.0 pyparsing==2.2.0 pyperclip==1.6.0 -python-barbicanclient==4.6.0 -python-ceilometerclient==2.9.0 -python-cinderclient==3.5.0 +python-barbicanclient==4.5.2 +python-ceilometerclient==2.5.0 +python-cinderclient==3.3.0 python-dateutil==2.7.0 -python-designateclient==2.9.0 +python-designateclient==2.7.0 python-editor==1.0.3 -python-glanceclient==2.9.1 -python-heatclient==1.14.0 -python-keystoneclient==3.15.0 -python-magnumclient==2.9.0 -python-manilaclient==1.21.0 +python-glanceclient==2.8.0 +python-heatclient==1.10.0 +python-keystoneclient==3.8.0 +python-magnumclient==2.1.0 +python-manilaclient==1.16.0 python-mimeparse==1.6.0 -python-mistralclient==3.3.0 -python-monascaclient==1.10.0 +python-mistralclient==3.1.0 +python-monascaclient==1.7.0 python-neutronclient==6.7.0 -python-novaclient==10.1.0 -python-octaviaclient==1.4.0 -python-openstackclient==3.14.0 -python-saharaclient==1.5.0 
+python-novaclient==9.1.0 +python-octaviaclient==1.3.0 +python-openstackclient==3.12.0 +python-saharaclient==1.4.0 python-subunit==1.2.0 -python-swiftclient==3.5.0 -python-troveclient==2.14.0 -python-zaqarclient==1.9.0 -python-zunclient==1.2.1 -pytz==2018.3 +python-swiftclient==3.2.0 +python-troveclient==2.2.0 +python-zaqarclient==1.0.0 +python-zunclient==1.3.0 +pytz==2013.6 PyYAML==3.12 +qpid-python==0.26;python_version=='2.7' # Apache-2.0 repoze.lru==0.7 -requests==2.18.4 +requests==2.14.2 requestsexceptions==1.4.0 rfc3986==1.1.0 -Routes==2.4.1 +Routes==2.3.1 simplejson==3.13.2 -six==1.11.0 +six==1.10.0 smmap2==2.0.3 -SQLAlchemy==1.2.5 sqlalchemy-migrate==0.11.0 +SQLAlchemy==1.0.10 sqlparse==0.2.4 statsd==3.2.2 stestr==2.0.0 -stevedore==1.28.0 -tempest==18.0.0 +stevedore==1.20.0 +tempest==17.1.0 Tempita==0.5.2 -tenacity==4.9.0 -testrepository==0.0.20 -testresources==2.0.1 -testscenarios==0.5.0 -testtools==2.3.0 +tenacity==4.4.0 +testrepository==0.0.18 +testresources==2.0.0 +testscenarios==0.4 +testtools==2.2.0 traceback2==1.4.0 unittest2==1.1.0 urllib3==1.22 vine==1.1.4 voluptuous==0.11.1 warlock==1.3.0 -WebOb==1.7.4 +WebOb==1.7.1 websocket-client==0.47.0 wrapt==1.10.11 yaql==1.1.3 diff -Nru heat-11.0.0~b1/PKG-INFO heat-11.0.0~b2/PKG-INFO --- heat-11.0.0~b1/PKG-INFO 2018-04-19 19:39:44.000000000 +0000 +++ heat-11.0.0~b2/PKG-INFO 2018-06-07 22:15:42.000000000 +0000 @@ -1,8 +1,8 @@ Metadata-Version: 1.1 Name: heat -Version: 11.0.0.0b1 +Version: 11.0.0.0b2 Summary: OpenStack Orchestration -Home-page: http://docs.openstack.org/developer/heat/ +Home-page: https://docs.openstack.org/heat/latest/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN @@ -10,8 +10,8 @@ Team and repository tags ======================== - .. image:: http://governance.openstack.org/badges/heat.svg - :target: http://governance.openstack.org/reference/tags/index.html + .. 
image:: https://governance.openstack.org/tc/badges/heat.svg + :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on @@ -33,31 +33,32 @@ git clone https://git.openstack.org/openstack/heat - * Wiki: http://wiki.openstack.org/Heat - * Developer docs: http://docs.openstack.org/heat/latest + * Documentation: https://docs.openstack.org/heat/latest * Template samples: https://git.openstack.org/cgit/openstack/heat-templates * Agents: https://git.openstack.org/cgit/openstack/heat-agents Python client ------------- - https://git.openstack.org/cgit/openstack/python-heatclient + + * Documentation: https://docs.openstack.org/python-heatclient/latest + * Source: https://git.openstack.org/cgit/openstack/python-heatclient References ---------- - * http://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html - * http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/create-stack.html - * http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html - * http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca + * https://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html + * https://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/create-stack.html + * https://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html + * https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca We have integration with ------------------------ * https://git.openstack.org/cgit/openstack/python-novaclient (instance) * https://git.openstack.org/cgit/openstack/python-keystoneclient (auth) - * https://git.openstack.org/cgit/openstack/python-swiftclient (s3) + * https://git.openstack.org/cgit/openstack/python-swiftclient (object storage) * https://git.openstack.org/cgit/openstack/python-neutronclient (networking) * 
https://git.openstack.org/cgit/openstack/python-ceilometerclient (metering) * https://git.openstack.org/cgit/openstack/python-aodhclient (alarming service) - * https://git.openstack.org/cgit/openstack/python-cinderclient (storage service) + * https://git.openstack.org/cgit/openstack/python-cinderclient (block storage) * https://git.openstack.org/cgit/openstack/python-glanceclient (image service) * https://git.openstack.org/cgit/openstack/python-troveclient (database as a Service) * https://git.openstack.org/cgit/openstack/python-saharaclient (hadoop cluster) @@ -68,6 +69,7 @@ * https://git.openstack.org/cgit/openstack/python-mistralclient (workflow service) * https://git.openstack.org/cgit/openstack/python-zaqarclient (messaging service) * https://git.openstack.org/cgit/openstack/python-monascaclient (monitoring service) + * https://git.openstack.org/cgit/openstack/python-zunclient (container management service) Platform: UNKNOWN diff -Nru heat-11.0.0~b1/playbooks/devstack/functional/run.yaml heat-11.0.0~b2/playbooks/devstack/functional/run.yaml --- heat-11.0.0~b1/playbooks/devstack/functional/run.yaml 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/playbooks/devstack/functional/run.yaml 2018-06-07 22:12:34.000000000 +0000 @@ -32,7 +32,7 @@ services+=,placement-api,placement-client services+=,g-api,g-reg services+=,c-sch,c-api,c-vol,c-bak - services+=,q-svc,q-dhcp,q-meta,q-agt,q-l3,q-trunk + services+=,neutron-api,neutron-dhcp,neutron-metadata-agent,neutron-agent,neutron-l3,neutron-trunk if [ "{{ use_python3 }}" -eq 1 ] ; then export DEVSTACK_GATE_USE_PYTHON3=True @@ -61,12 +61,13 @@ export DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin heat git://git.openstack.org/openstack/heat" - # Enable LBaaS V2 plugin - export PROJECTS="openstack/neutron-lbaas $PROJECTS" - export DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas" + # Enable octavia plugin and services export DEVSTACK_LOCAL_CONFIG+=$'\n'"enable_plugin 
octavia https://git.openstack.org/openstack/octavia" - # enabling lbaas plugin does not enable the lbaasv2 service, explicitly enable it - services+=,q-lbaasv2,octavia,o-cw,o-hk,o-hm,o-api + export DEVSTACK_LOCAL_CONFIG+=$'\n'"OCTAVIA_AMP_IMAGE_FILE=/tmp/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2" + export DEVSTACK_LOCAL_CONFIG+=$'\n'"OCTAVIA_AMP_IMAGE_SIZE=3" + export DEVSTACK_LOCAL_CONFIG+=$'\n'"OCTAVIA_AMP_IMAGE_NAME=test-only-amphora-x64-haproxy-ubuntu-xenial" + services+=,octavia,o-cw,o-hk,o-hm,o-api + export PROJECTS="openstack/octavia $PROJECTS" export PROJECTS="openstack/barbican $PROJECTS" export PROJECTS="openstack/python-barbicanclient $PROJECTS" export PROJECTS="openstack/barbican-tempest-plugin $PROJECTS" @@ -81,9 +82,6 @@ if [ "{{ branch_override }}" != "default" ] ; then export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE fi - if [ "{{ use_identity_v3_only }}" -eq 1 ] ; then - export DEVSTACK_LOCAL_CONFIG+=$'\n'"ENABLE_IDENTITY_V2=False" - fi if [ "{{ use_apache }}" -eq 0 ] ; then export DEVSTACK_LOCAL_CONFIG+=$'\n'"HEAT_USE_MOD_WSGI=False" fi diff -Nru heat-11.0.0~b1/playbooks/get_amphora_tarball.yaml heat-11.0.0~b2/playbooks/get_amphora_tarball.yaml --- heat-11.0.0~b1/playbooks/get_amphora_tarball.yaml 1970-01-01 00:00:00.000000000 +0000 +++ heat-11.0.0~b2/playbooks/get_amphora_tarball.yaml 2018-06-07 22:12:28.000000000 +0000 @@ -0,0 +1,6 @@ +- hosts: primary + tasks: + - name: Download amphora tarball + get_url: + url: "https://tarballs.openstack.org/octavia/test-images/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2" + dest: /tmp/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2 diff -Nru heat-11.0.0~b1/README.rst heat-11.0.0~b2/README.rst --- heat-11.0.0~b1/README.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/README.rst 2018-06-07 22:12:28.000000000 +0000 @@ -2,8 +2,8 @@ Team and repository tags ======================== -.. 
image:: http://governance.openstack.org/badges/heat.svg - :target: http://governance.openstack.org/reference/tags/index.html +.. image:: https://governance.openstack.org/tc/badges/heat.svg + :target: https://governance.openstack.org/tc/reference/tags/index.html .. Change things from this point on @@ -25,31 +25,32 @@ git clone https://git.openstack.org/openstack/heat -* Wiki: http://wiki.openstack.org/Heat -* Developer docs: http://docs.openstack.org/heat/latest +* Documentation: https://docs.openstack.org/heat/latest * Template samples: https://git.openstack.org/cgit/openstack/heat-templates * Agents: https://git.openstack.org/cgit/openstack/heat-agents Python client ------------- -https://git.openstack.org/cgit/openstack/python-heatclient + +* Documentation: https://docs.openstack.org/python-heatclient/latest +* Source: https://git.openstack.org/cgit/openstack/python-heatclient References ---------- -* http://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html -* http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/create-stack.html -* http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html -* http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca +* https://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html +* https://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/create-stack.html +* https://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html +* https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca We have integration with ------------------------ * https://git.openstack.org/cgit/openstack/python-novaclient (instance) * https://git.openstack.org/cgit/openstack/python-keystoneclient (auth) -* https://git.openstack.org/cgit/openstack/python-swiftclient (s3) +* https://git.openstack.org/cgit/openstack/python-swiftclient (object storage) * 
https://git.openstack.org/cgit/openstack/python-neutronclient (networking) * https://git.openstack.org/cgit/openstack/python-ceilometerclient (metering) * https://git.openstack.org/cgit/openstack/python-aodhclient (alarming service) -* https://git.openstack.org/cgit/openstack/python-cinderclient (storage service) +* https://git.openstack.org/cgit/openstack/python-cinderclient (block storage) * https://git.openstack.org/cgit/openstack/python-glanceclient (image service) * https://git.openstack.org/cgit/openstack/python-troveclient (database as a Service) * https://git.openstack.org/cgit/openstack/python-saharaclient (hadoop cluster) @@ -60,3 +61,4 @@ * https://git.openstack.org/cgit/openstack/python-mistralclient (workflow service) * https://git.openstack.org/cgit/openstack/python-zaqarclient (messaging service) * https://git.openstack.org/cgit/openstack/python-monascaclient (monitoring service) +* https://git.openstack.org/cgit/openstack/python-zunclient (container management service) diff -Nru heat-11.0.0~b1/requirements.txt heat-11.0.0~b2/requirements.txt --- heat-11.0.0~b1/requirements.txt 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/requirements.txt 2018-06-07 22:12:28.000000000 +0000 @@ -6,7 +6,7 @@ Babel!=2.4.0,>=2.3.4 # BSD croniter>=0.3.4 # MIT License cryptography>=2.1 # BSD/Apache-2.0 -eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT +eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT keystoneauth1>=3.4.0 # Apache-2.0 keystonemiddleware>=4.17.0 # Apache-2.0 lxml!=3.7.0,>=3.4.1 # BSD diff -Nru heat-11.0.0~b1/setup.cfg heat-11.0.0~b2/setup.cfg --- heat-11.0.0~b1/setup.cfg 2018-04-19 19:39:44.000000000 +0000 +++ heat-11.0.0~b2/setup.cfg 2018-06-07 22:15:42.000000000 +0000 @@ -5,7 +5,7 @@ README.rst author = OpenStack author-email = openstack-dev@lists.openstack.org -home-page = http://docs.openstack.org/developer/heat/ +home-page = https://docs.openstack.org/heat/latest/ classifier = Environment :: OpenStack Intended Audience :: Information Technology @@ 
-177,6 +177,8 @@ heat_template_version.pike = heat.engine.hot.template:HOTemplate20170901 heat_template_version.2018-03-02 = heat.engine.hot.template:HOTemplate20180302 heat_template_version.queens = heat.engine.hot.template:HOTemplate20180302 + heat_template_version.2018-08-31 = heat.engine.hot.template:HOTemplate20180831 + heat_template_version.rocky = heat.engine.hot.template:HOTemplate20180831 [global] setup-hooks = diff -Nru heat-11.0.0~b1/tools/dashboards/heat.dash heat-11.0.0~b2/tools/dashboards/heat.dash --- heat-11.0.0~b1/tools/dashboards/heat.dash 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/tools/dashboards/heat.dash 2018-06-07 22:12:28.000000000 +0000 @@ -18,7 +18,7 @@ query = message:"Blueprint" [section "Needs Feedback (Changes older than 5 days that have not been reviewed by anyone)"] -query = NOT label:Code-Review<=2 age:5d +query = NOT label:Code-Review>=1 NOT label:Code-Review<=-1 age:5d [section "You are a reviewer, but haven't voted in the current revision"] query = reviewer:self diff -Nru heat-11.0.0~b1/tools/README.rst heat-11.0.0~b2/tools/README.rst --- heat-11.0.0~b1/tools/README.rst 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/tools/README.rst 2018-06-07 22:12:28.000000000 +0000 @@ -1,7 +1,7 @@ Files in this directory are general developer tools or examples of how to do certain activities. 
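The `setup.cfg` hunk above registers two new `heat_template_version` entry points for the Rocky cycle; as with `queens`, the date form and the release-name alias resolve to the same template class. A tiny sketch of that alias behavior, modeled as a plain dict rather than real setuptools entry-point loading:

```python
# Illustrative only: the version-to-class mapping from the setup.cfg hunk,
# as a plain dict instead of setuptools entry points.
TEMPLATE_VERSIONS = {
    '2018-03-02': 'HOTemplate20180302',
    'queens': 'HOTemplate20180302',
    '2018-08-31': 'HOTemplate20180831',  # new in this release
    'rocky': 'HOTemplate20180831',       # alias for the same class
}

# A template may declare either form and get the same implementation:
same = TEMPLATE_VERSIONS['rocky'] == TEMPLATE_VERSIONS['2018-08-31']
```

This matches the new `"2018-08-31", "rocky"` entries added to `supported_template_versions` in `test_template_versions.py` earlier in the diff.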
-If you're running on Fedora, see the instructions at http://docs.openstack.org/developer/heat/getting_started/on_fedora.html +If you're running on Fedora, see the instructions at https://docs.openstack.org/heat/latest/getting_started/on_fedora.html Tools ===== diff -Nru heat-11.0.0~b1/.zuul.yaml heat-11.0.0~b2/.zuul.yaml --- heat-11.0.0~b1/.zuul.yaml 2018-04-19 19:36:19.000000000 +0000 +++ heat-11.0.0~b2/.zuul.yaml 2018-06-07 22:12:34.000000000 +0000 @@ -14,7 +14,7 @@ - openstack/heat - openstack/heat-tempest-plugin - openstack/neutron - - openstack/neutron-lbaas + - openstack/octavia - openstack/oslo.messaging - openstack/python-barbicanclient - openstack/python-heatclient @@ -27,13 +27,13 @@ - ^heat/locale/.*$ - ^heat/tests/.*$ - ^releasenotes/.*$ + pre-run: playbooks/get_amphora_tarball.yaml vars: disable_convergence: 'false' sql: mysql use_amqp1: 0 use_apache: 1 use_python3: 0 - use_identity_v3_only: 0 branch_override: default - job: @@ -57,6 +57,7 @@ - job: name: heat-functional-convg-mysql-lbaasv2-non-apache parent: heat-functional-devstack-base + voting: false vars: use_apache: 0 @@ -67,14 +68,6 @@ use_python3: 1 - job: - name: heat-functional-convg-mysql-lbaasv2-identity-v3-only - parent: heat-functional-devstack-base - voting: false - branches: master - vars: - use_identity_v3_only: 1 - -- job: name: grenade-heat parent: legacy-dsvm-base run: playbooks/devstack/grenade/run.yaml @@ -118,7 +111,6 @@ - heat-functional-convg-mysql-lbaasv2-amqp1 - heat-functional-convg-mysql-lbaasv2-non-apache - heat-functional-convg-mysql-lbaasv2-py35 - - heat-functional-convg-mysql-lbaasv2-identity-v3-only - openstack-tox-lower-constraints gate: jobs: @@ -126,7 +118,6 @@ - grenade-heat-multinode - heat-functional-orig-mysql-lbaasv2 - heat-functional-convg-mysql-lbaasv2 - - heat-functional-convg-mysql-lbaasv2-non-apache - heat-functional-convg-mysql-lbaasv2-py35 - openstack-tox-lower-constraints experimental:
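Both `requirements.txt` and the egg-info `requires.txt` in this diff relax the eventlet requirement by dropping the `<0.21.0` cap while keeping the two excluded releases. A hand-rolled illustration of what the old and new specifier sets accept (real resolution is of course done by pip, not code like this):

```python
# Hypothetical checker illustrating the eventlet specifier change;
# pip/pbr handle this for real, this is just for illustration.

def allowed(version, cap_below_0_21=False):
    """True if `version` satisfies the eventlet requirement.

    Old spec: !=0.18.3,!=0.20.1,<0.21.0,>=0.18.2
    New spec: !=0.18.3,!=0.20.1,>=0.18.2  (upper cap removed)
    """
    if version in ('0.18.3', '0.20.1'):      # explicitly excluded releases
        return False
    parts = tuple(int(p) for p in version.split('.'))
    if parts < (0, 18, 2):                   # lower bound
        return False
    if cap_below_0_21 and parts >= (0, 21, 0):
        return False
    return True

# eventlet 0.21.0 was rejected by the old spec but passes the new one.
```

The matching `lower-constraints.txt` hunk pins eventlet at the spec's floor, `0.18.2`.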