diff -Nru nova-21.1.2/api-guide/source/port_with_resource_request.rst nova-21.2.0/api-guide/source/port_with_resource_request.rst --- nova-21.1.2/api-guide/source/port_with_resource_request.rst 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/api-guide/source/port_with_resource_request.rst 2021-03-15 10:39:50.000000000 +0000 @@ -30,7 +30,8 @@ compute services are upgraded to 20.0.0 (Train) and the ``[upgrade_levels]/compute`` configuration does not prevent the computes from using the latest RPC version. However cross cell resize and cross cell migrate -operations are still not supported with such ports. +operations are still not supported with such ports and Nova will fall back to +same-cell resize if the server has such ports. As of 21.0.0 (Ussuri), nova supports evacuating, live migrating and unshelving servers with neutron ports having resource requests. diff -Nru nova-21.1.2/AUTHORS nova-21.2.0/AUTHORS --- nova-21.1.2/AUTHORS 2021-02-04 11:29:09.000000000 +0000 +++ nova-21.2.0/AUTHORS 2021-03-15 10:40:34.000000000 +0000 @@ -1663,6 +1663,7 @@ rajat29 ramboman ricolin +root rsritesh rtmdk ruichen diff -Nru nova-21.1.2/ChangeLog nova-21.2.0/ChangeLog --- nova-21.1.2/ChangeLog 2021-02-04 11:29:06.000000000 +0000 +++ nova-21.2.0/ChangeLog 2021-03-15 10:40:31.000000000 +0000 @@ -1,15 +1,38 @@ CHANGES ======= +21.2.0 +------ + +* Handle instance = None in \_local\_delete\_cleanup +* Add regression test for bug 1914777 +* Fallback to same-cell resize with qos ports +* Default user\_id when not specified in check\_num\_instances\_quota +* Add regression test for bug 1893284 +* only wait for plugtime events in pre-live-migration +* tools: Allow check-cherry-picks.sh to be disabled by an env var +* Add upgrade check about old computes +* Reproduce bug 1907522 in functional test +* Warn when starting services with older than N-1 computes + 21.1.2 ------ +* Set instance host and drop migration under lock +* Reproduce bug 1896463 in func env +* Disallow CONF.compute.max\_disk\_devices\_to\_attach = 0 +* Use subqueryload() instead of joinedload() for (system\_)metadata +* compute: Lock by instance.uuid lock during swap\_volume * [stable-only] fix lower-constraints and disable qos resize * Omit resource inventories from placement update if zero * Use cell targeted context to query BDMs for metadata * Fix a hacking test * Update pci stat pools based on PCI device changes * Handle disabled CPU features to fix live migration failures +* [doc]: Fix glance image\_metadata link +* Set migrate\_data.vifs only when using multiple port bindings +* libvirt: Only ask tpool.Proxy to autowrap vir\* classes +* add functional regression test for bug #1888395 21.1.1 ------ diff -Nru nova-21.1.2/debian/changelog nova-21.2.0/debian/changelog --- nova-21.1.2/debian/changelog 2021-02-12 23:42:10.000000000 +0000 +++ nova-21.2.0/debian/changelog 2021-04-12 12:14:07.000000000 +0000 @@ -1,3 +1,10 @@ +nova (2:21.2.0-0ubuntu1) focal; urgency=medium + + * New stable point release for OpenStack Ussuri (LP: #1923036). + * d/p/lp1888395-*: Removed after patch landed upstream. 
+ + -- Chris MacNaughton Mon, 12 Apr 2021 12:14:07 +0000 + nova (2:21.1.2-0ubuntu2) focal; urgency=medium * Fix live migration for SDNs that do not support port binding extensions (LP: #1888395) diff -Nru nova-21.1.2/debian/patches/lp1888395-functest.patch nova-21.2.0/debian/patches/lp1888395-functest.patch --- nova-21.1.2/debian/patches/lp1888395-functest.patch 2021-02-12 23:42:10.000000000 +0000 +++ nova-21.2.0/debian/patches/lp1888395-functest.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,175 +0,0 @@ -From bea55a7d45bdc97679cf08c9faec789cfc90de27 Mon Sep 17 00:00:00 2001 -From: Sean Mooney -Date: Fri, 21 Aug 2020 17:17:50 +0000 -Subject: [PATCH 1/2] add functional regression test for bug #1888395 - -This change adds a funcitonal regression test that -assert the broken behavior when trying to live migrate -with a neutron backend that does not support multiple port -bindings. - -Conflicts/Changes: - nova/tests/functional/regressions/test_bug_1888395.py: - - specify api major version to allow block_migration 'auto' - - use TempDir fixture for instances path - nova/tests/unit/virt/libvirt/fake_imagebackend.py: - - include portion of change Ia3d7351c1805d98bcb799ab0375673c7f1cb8848 - which stubs out the is_file_in_instance_path method. That was - included in a feature patch set so just pulling the necessary - bit. - -Change-Id: I470a016d35afe69809321bd67359f466c3feb90a -Partial-Bug: #1888395 -(cherry picked from commit 71bc6fc9b89535679252ffe5a737eddad60e4102) ---- - nova/network/constants.py | 1 + - .../regressions/test_bug_1888395.py | 101 ++++++++++++++++++ - .../unit/virt/libvirt/fake_imagebackend.py | 8 +- - 3 files changed, 109 insertions(+), 1 deletion(-) - create mode 100644 nova/tests/functional/regressions/test_bug_1888395.py - -diff --git a/nova/network/constants.py b/nova/network/constants.py -index cfa5b1b9d2..bb729e08eb 100644 ---- a/nova/network/constants.py -+++ b/nova/network/constants.py -@@ -20,6 +20,7 @@ DNS_INTEGRATION = 'DNS Integration' - MULTI_NET_EXT = 'Multi Provider Network' - FIP_PORT_DETAILS = 'Floating IP Port Details Extension' - SUBSTR_PORT_FILTERING = 'IP address substring filtering' -+PORT_BINDING = 'Port Binding' - PORT_BINDING_EXTENDED = 'Port Bindings Extended' - LIVE_MIGRATION = 'live-migration' - DEFAULT_SECGROUP = 'default' -diff --git a/nova/tests/functional/regressions/test_bug_1888395.py b/nova/tests/functional/regressions/test_bug_1888395.py -new file mode 100644 -index 0000000000..f1626187da ---- /dev/null -+++ b/nova/tests/functional/regressions/test_bug_1888395.py -@@ -0,0 +1,101 @@ -+# Licensed under the Apache License, Version 2.0 (the "License"); you may -+# not use this file except in compliance with the License. You may obtain -+# a copy of the License at -+# -+# http://www.apache.org/licenses/LICENSE-2.0 -+# -+# Unless required by applicable law or agreed to in writing, software -+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -+# License for the specific language governing permissions and limitations -+# under the License. 
-+ -+import fixtures -+ -+from nova import context -+from nova.network import constants as neutron_constants -+from nova.network import neutron -+from nova.tests.functional.libvirt import base as libvirt_base -+from nova.tests.unit.virt.libvirt import fake_os_brick_connector -+from nova.tests.unit.virt.libvirt import fakelibvirt -+ -+ -+class TestLiveMigrationWithoutMultiplePortBindings( -+ libvirt_base.ServersTestBase): -+ """Regression test for bug 1888395. -+ -+ This regression test asserts that Live migration works when -+ neutron does not support the binding-extended api extension -+ and the legacy single port binding workflow is used. -+ """ -+ -+ ADMIN_API = True -+ api_major_version = 'v2.1' -+ microversion = 'latest' -+ -+ def list_extensions(self, *args, **kwargs): -+ return { -+ 'extensions': [ -+ { -+ # Copied from neutron-lib portbindings.py -+ "updated": "2014-02-03T10:00:00-00:00", -+ "name": neutron_constants.PORT_BINDING, -+ "links": [], -+ "alias": "binding", -+ "description": "Expose port bindings of a virtual port to " -+ "external application" -+ } -+ ] -+ } -+ -+ def setUp(self): -+ self.flags(instances_path=self.useFixture(fixtures.TempDir()).path) -+ super().setUp() -+ self.neutron.list_extensions = self.list_extensions -+ self.neutron_api = neutron.API() -+ # TODO(sean-k-mooney): remove after -+ # I275509eb0e0eb9eaf26fe607b7d9a67e1edc71f8 -+ # has merged. -+ self.useFixture(fixtures.MonkeyPatch( -+ 'nova.virt.libvirt.driver.connector', -+ fake_os_brick_connector)) -+ -+ self.start_computes({ -+ 'start_host': fakelibvirt.HostInfo( -+ cpu_nodes=1, cpu_sockets=1, cpu_cores=4, cpu_threads=2, -+ kB_mem=10740000), -+ 'end_host': fakelibvirt.HostInfo( -+ cpu_nodes=1, cpu_sockets=1, cpu_cores=4, cpu_threads=2, -+ kB_mem=10740000)}) -+ -+ self.ctxt = context.get_admin_context() -+ -+ def test_live_migrate(self): -+ server = self._create_server( -+ host='start_host', -+ networks=[{'port': self.neutron.port_1['id']}]) -+ -+ self.assertFalse( -+ self.neutron_api.supports_port_binding_extension(self.ctxt)) -+ # TODO(sean-k-mooney): extend _live_migrate to support passing a host -+ self.api.post_server_action( -+ server['id'], -+ { -+ 'os-migrateLive': { -+ 'host': 'end_host', -+ 'block_migration': 'auto' -+ } -+ } -+ ) -+ -+ # FIXME(sean-k-mooney): this should succeed but because of bug #188395 -+ # it will fail. -+ # self._wait_for_server_parameter( -+ # server, {'OS-EXT-SRV-ATTR:host': 'end_host', 'status': 'ACTIVE'}) -+ # because of the bug the migration will fail in pre_live_migrate so -+ # the vm should still be active on the start_host -+ self._wait_for_server_parameter( -+ server, {'OS-EXT-SRV-ATTR:host': 'start_host', 'status': 'ACTIVE'}) -+ -+ msg = "NotImplementedError: Cannot load 'vif_type' in the base class" -+ self.assertIn(msg, self.stdlog.logger.output) -diff --git a/nova/tests/unit/virt/libvirt/fake_imagebackend.py b/nova/tests/unit/virt/libvirt/fake_imagebackend.py -index 093fbbbcc0..544cc26404 100644 ---- a/nova/tests/unit/virt/libvirt/fake_imagebackend.py -+++ b/nova/tests/unit/virt/libvirt/fake_imagebackend.py -@@ -184,11 +184,17 @@ class ImageBackendFixture(fixtures.Fixture): - # class. 
- image_init.SUPPORTS_CLONE = False - -- # Ditto for the 'is_shared_block_storage' function -+ # Ditto for the 'is_shared_block_storage' function and -+ # 'is_file_in_instance_path' - def is_shared_block_storage(): - return False - -+ def is_file_in_instance_path(): -+ return False -+ - setattr(image_init, 'is_shared_block_storage', is_shared_block_storage) -+ setattr(image_init, 'is_file_in_instance_path', -+ is_file_in_instance_path) - - return image_init - --- -2.25.1 - diff -Nru nova-21.1.2/debian/patches/lp1888395-migration.patch nova-21.2.0/debian/patches/lp1888395-migration.patch --- nova-21.1.2/debian/patches/lp1888395-migration.patch 2021-02-12 23:42:10.000000000 +0000 +++ nova-21.2.0/debian/patches/lp1888395-migration.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,338 +0,0 @@ -From afa843c8a7e128489a8245ed7a1b391c022b3305 Mon Sep 17 00:00:00 2001 -From: root -Date: Sat, 18 Jul 2020 00:32:54 -0400 -Subject: [PATCH 2/2] Set migrate_data.vifs only when using multiple port - bindings - -In the rocky cycle nova was enhanced to support the multiple -port binding live migration workflow when neutron supports -the binding-extended API extension. -When the migration_data object was extended to support -multiple port bindings, populating the vifs field was used -as a sentinel to indicate that the new workflow should -be used. - -In the train release -I734cc01dce13f9e75a16639faf890ddb1661b7eb -(SR-IOV Live migration indirect port support) -broke the semantics of the migrate_data object by -unconditionally populating the vifs field - -This change restores the rocky semantics, which are depended -on by several parts of the code base, by only conditionally -populating vifs if neutron supports multiple port bindings. - -Changes to patch: - - unit/virt/libvirt/fakelibvirt.py: Include partial pick from - change Ia3d7351c1805d98bcb799ab0375673c7f1cb8848 to add the - jobStats, complete_job and fail_job to fakelibvirt. The full - change was not cherry-picked as it was part of the numa aware - live migration feature in Victoria. 
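Editorial note, not part of the package diff: the commit message above describes how ``migrate_data.vifs`` doubles as a sentinel for the multiple-port-binding live migration workflow. A minimal sketch of that sentinel check, using only the field semantics described here (the standalone helper below is illustrative, not an actual nova function):

```python
def uses_multiple_port_bindings(migrate_data):
    # Illustrative only: 'vifs' is populated during the destination
    # pre-checks solely when neutron reports the binding-extended API
    # extension, so a present, non-empty field is what later steps (for
    # example, deciding whether to wait for network-vif-plugged events)
    # key off.
    return 'vifs' in migrate_data and bool(migrate_data.vifs)
```

The same test (``'vifs' in migrate_data and migrate_data.vifs``) appears in the ``nova/compute/manager.py`` hunks later in this diff; the helper here just names the intent.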
- -Co-Authored-By: Sean Mooney -Change-Id: Ia00277ac8a68a635db85f9e0ce2c6d8df396e0d8 -Closes-Bug: #1888395 -(cherry picked from commit b8f3be6b3c5af91d215b4a0cecb9be098e8d8799) ---- - nova/compute/manager.py | 27 +++++++---- - .../regressions/test_bug_1888395.py | 48 +++++++++++++++---- - nova/tests/unit/compute/test_compute.py | 9 +++- - nova/tests/unit/compute/test_compute_mgr.py | 35 ++++++++++++-- - nova/tests/unit/virt/libvirt/fakelibvirt.py | 13 ++++- - nova/tests/unit/virt/test_virt_drivers.py | 5 ++ - ...ortbinding-semantics-48e9b1fa969cc5e9.yaml | 14 ++++++ - 7 files changed, 127 insertions(+), 24 deletions(-) - create mode 100644 releasenotes/notes/restore-rocky-portbinding-semantics-48e9b1fa969cc5e9.yaml - -diff --git a/nova/compute/manager.py b/nova/compute/manager.py -index c5e98fd67e..63c3523dc3 100644 ---- a/nova/compute/manager.py -+++ b/nova/compute/manager.py -@@ -7694,15 +7694,18 @@ class ComputeManager(manager.Manager): - LOG.info('Destination was ready for NUMA live migration, ' - 'but source is either too old, or is set to an ' - 'older upgrade level.', instance=instance) -- # Create migrate_data vifs -- migrate_data.vifs = \ -- migrate_data_obj.VIFMigrateData.create_skeleton_migrate_vifs( -- instance.get_network_info()) -- # Claim PCI devices for VIFs on destination (if needed) -- port_id_to_pci = self._claim_pci_for_instance_vifs(ctxt, instance) -- # Update migrate VIFs with the newly claimed PCI devices -- self._update_migrate_vifs_profile_with_pci(migrate_data.vifs, -- port_id_to_pci) -+ if self.network_api.supports_port_binding_extension(ctxt): -+ # Create migrate_data vifs -+ migrate_data.vifs = \ -+ migrate_data_obj.\ -+ VIFMigrateData.create_skeleton_migrate_vifs( -+ instance.get_network_info()) -+ # Claim PCI devices for VIFs on destination (if needed) -+ port_id_to_pci = self._claim_pci_for_instance_vifs( -+ ctxt, instance) -+ # Update migrate VIFs with the newly claimed PCI devices -+ self._update_migrate_vifs_profile_with_pci( -+ migrate_data.vifs, port_id_to_pci) - finally: - self.driver.cleanup_live_migration_destination_check(ctxt, - dest_check_data) -@@ -7870,8 +7873,12 @@ class ComputeManager(manager.Manager): - # determine if it should wait for a 'network-vif-plugged' event - # from neutron before starting the actual guest transfer in the - # hypervisor -+ using_multiple_port_bindings = ( -+ 'vifs' in migrate_data and migrate_data.vifs) - migrate_data.wait_for_vif_plugged = ( -- CONF.compute.live_migration_wait_for_vif_plug) -+ CONF.compute.live_migration_wait_for_vif_plug and -+ using_multiple_port_bindings -+ ) - - # NOTE(tr3buchet): setup networks on destination host - self.network_api.setup_networks_on_host(context, instance, -diff --git a/nova/tests/functional/regressions/test_bug_1888395.py b/nova/tests/functional/regressions/test_bug_1888395.py -index f1626187da..214b82d3d1 100644 ---- a/nova/tests/functional/regressions/test_bug_1888395.py -+++ b/nova/tests/functional/regressions/test_bug_1888395.py -@@ -12,6 +12,9 @@ - - import fixtures - -+from lxml import etree -+from urllib import parse as urlparse -+ - from nova import context - from nova.network import constants as neutron_constants - from nova.network import neutron -@@ -69,6 +72,40 @@ class TestLiveMigrationWithoutMultiplePortBindings( - kB_mem=10740000)}) - - self.ctxt = context.get_admin_context() -+ # TODO(sean-k-mooney): remove this when it is part of ServersTestBase -+ self.useFixture(fixtures.MonkeyPatch( -+ 'nova.tests.unit.virt.libvirt.fakelibvirt.Domain.migrateToURI3', -+ 
self._migrate_stub)) -+ -+ def _migrate_stub(self, domain, destination, params, flags): -+ """Stub out migrateToURI3.""" -+ -+ src_hostname = domain._connection.hostname -+ dst_hostname = urlparse.urlparse(destination).netloc -+ -+ # In a real live migration, libvirt and QEMU on the source and -+ # destination talk it out, resulting in the instance starting to exist -+ # on the destination. Fakelibvirt cannot do that, so we have to -+ # manually create the "incoming" instance on the destination -+ # fakelibvirt. -+ dst = self.computes[dst_hostname] -+ dst.driver._host.get_connection().createXML( -+ params['destination_xml'], -+ 'fake-createXML-doesnt-care-about-flags') -+ -+ src = self.computes[src_hostname] -+ conn = src.driver._host.get_connection() -+ -+ # because migrateToURI3 is spawned in a background thread, this method -+ # does not block the upper nova layers. Because we don't want nova to -+ # think the live migration has finished until this method is done, the -+ # last thing we do is make fakelibvirt's Domain.jobStats() return -+ # VIR_DOMAIN_JOB_COMPLETED. -+ server = etree.fromstring( -+ params['destination_xml'] -+ ).find('./uuid').text -+ dom = conn.lookupByUUIDString(server) -+ dom.complete_job() - - def test_live_migrate(self): - server = self._create_server( -@@ -88,14 +125,7 @@ class TestLiveMigrationWithoutMultiplePortBindings( - } - ) - -- # FIXME(sean-k-mooney): this should succeed but because of bug #188395 -- # it will fail. -- # self._wait_for_server_parameter( -- # server, {'OS-EXT-SRV-ATTR:host': 'end_host', 'status': 'ACTIVE'}) -- # because of the bug the migration will fail in pre_live_migrate so -- # the vm should still be active on the start_host - self._wait_for_server_parameter( -- server, {'OS-EXT-SRV-ATTR:host': 'start_host', 'status': 'ACTIVE'}) -- -+ server, {'OS-EXT-SRV-ATTR:host': 'end_host', 'status': 'ACTIVE'}) - msg = "NotImplementedError: Cannot load 'vif_type' in the base class" -- self.assertIn(msg, self.stdlog.logger.output) -+ self.assertNotIn(msg, self.stdlog.logger.output) -diff --git a/nova/tests/unit/compute/test_compute.py b/nova/tests/unit/compute/test_compute.py -index 7e4e8dd868..1dcaa501c3 100644 ---- a/nova/tests/unit/compute/test_compute.py -+++ b/nova/tests/unit/compute/test_compute.py -@@ -18,6 +18,7 @@ - """Tests for compute service.""" - - import datetime -+import fixtures as std_fixtures - from itertools import chain - import operator - import sys -@@ -6178,13 +6179,19 @@ class ComputeTestCase(BaseTestCase, - return fake_network.fake_get_instance_nw_info(self) - - self.stub_out('nova.network.neutron.API.get_instance_nw_info', stupid) -- -+ self.useFixture( -+ std_fixtures.MonkeyPatch( -+ 'nova.network.neutron.API.supports_port_binding_extension', -+ lambda *args: True)) - # creating instance testdata - instance = self._create_fake_instance_obj({'host': 'dummy'}) - c = context.get_admin_context() - fake_notifier.NOTIFICATIONS = [] - migrate_data = objects.LibvirtLiveMigrateData( - is_shared_instance_path=False) -+ vifs = migrate_data_obj.VIFMigrateData.create_skeleton_migrate_vifs( -+ stupid()) -+ migrate_data.vifs = vifs - mock_pre.return_value = migrate_data - - with mock.patch.object(self.compute.network_api, -diff --git a/nova/tests/unit/compute/test_compute_mgr.py b/nova/tests/unit/compute/test_compute_mgr.py -index 946ae4f5d8..125575d7e9 100644 ---- a/nova/tests/unit/compute/test_compute_mgr.py -+++ b/nova/tests/unit/compute/test_compute_mgr.py -@@ -15,6 +15,7 @@ - import contextlib - import copy - import datetime 
-+import fixtures as std_fixtures - import time - - from cinderclient import exceptions as cinder_exception -@@ -3246,20 +3247,48 @@ class ComputeManagerUnitTestCase(test.NoDBTestCase, - mock_event.assert_called_once_with( - self.context, 'compute_check_can_live_migrate_destination', - CONF.host, instance.uuid, graceful_exit=False) -+ return result - - def test_check_can_live_migrate_destination_success(self): -+ self.useFixture(std_fixtures.MonkeyPatch( -+ 'nova.network.neutron.API.supports_port_binding_extension', -+ lambda *args: True)) - self._test_check_can_live_migrate_destination() - - def test_check_can_live_migrate_destination_fail(self): -+ self.useFixture(std_fixtures.MonkeyPatch( -+ 'nova.network.neutron.API.supports_port_binding_extension', -+ lambda *args: True)) - self.assertRaises( -- test.TestingException, -- self._test_check_can_live_migrate_destination, -- do_raise=True) -+ test.TestingException, -+ self._test_check_can_live_migrate_destination, -+ do_raise=True) -+ -+ def test_check_can_live_migrate_destination_contins_vifs(self): -+ self.useFixture(std_fixtures.MonkeyPatch( -+ 'nova.network.neutron.API.supports_port_binding_extension', -+ lambda *args: True)) -+ migrate_data = self._test_check_can_live_migrate_destination() -+ self.assertIn('vifs', migrate_data) -+ self.assertIsNotNone(migrate_data.vifs) -+ -+ def test_check_can_live_migrate_destination_no_binding_extended(self): -+ self.useFixture(std_fixtures.MonkeyPatch( -+ 'nova.network.neutron.API.supports_port_binding_extension', -+ lambda *args: False)) -+ migrate_data = self._test_check_can_live_migrate_destination() -+ self.assertNotIn('vifs', migrate_data) - - def test_check_can_live_migrate_destination_src_numa_lm_false(self): -+ self.useFixture(std_fixtures.MonkeyPatch( -+ 'nova.network.neutron.API.supports_port_binding_extension', -+ lambda *args: True)) - self._test_check_can_live_migrate_destination(src_numa_lm=False) - - def test_check_can_live_migrate_destination_src_numa_lm_true(self): -+ self.useFixture(std_fixtures.MonkeyPatch( -+ 'nova.network.neutron.API.supports_port_binding_extension', -+ lambda *args: True)) - self._test_check_can_live_migrate_destination(src_numa_lm=True) - - def test_dest_can_numa_live_migrate(self): -diff --git a/nova/tests/unit/virt/libvirt/fakelibvirt.py b/nova/tests/unit/virt/libvirt/fakelibvirt.py -index 46d3361128..2a4f0775ea 100644 ---- a/nova/tests/unit/virt/libvirt/fakelibvirt.py -+++ b/nova/tests/unit/virt/libvirt/fakelibvirt.py -@@ -823,6 +823,7 @@ class Domain(object): - self._has_saved_state = False - self._snapshots = {} - self._id = self._connection._id_counter -+ self._job_type = VIR_DOMAIN_JOB_UNBOUNDED - - def _parse_definition(self, xml): - try: -@@ -1252,7 +1253,17 @@ class Domain(object): - return [0] * 12 - - def jobStats(self, flags=0): -- return {} -+ # NOTE(artom) By returning VIR_DOMAIN_JOB_UNBOUNDED, we're pretending a -+ # job is constantly running. Tests are expected to call the -+ # complete_job or fail_job methods when they're ready for jobs (read: -+ # live migrations) to "complete". 
-+ return {'type': self._job_type} -+ -+ def complete_job(self): -+ self._job_type = VIR_DOMAIN_JOB_COMPLETED -+ -+ def fail_job(self): -+ self._job_type = VIR_DOMAIN_JOB_FAILED - - def injectNMI(self, flags=0): - return 0 -diff --git a/nova/tests/unit/virt/test_virt_drivers.py b/nova/tests/unit/virt/test_virt_drivers.py -index 52a0a310fc..84ce1e827a 100644 ---- a/nova/tests/unit/virt/test_virt_drivers.py -+++ b/nova/tests/unit/virt/test_virt_drivers.py -@@ -39,6 +39,7 @@ from nova.tests import fixtures as nova_fixtures - from nova.tests.unit import fake_block_device - from nova.tests.unit.image import fake as fake_image - from nova.tests.unit import utils as test_utils -+from nova.tests.unit.virt.libvirt import fakelibvirt - from nova.virt import block_device as driver_block_device - from nova.virt import event as virtevent - from nova.virt import fake -@@ -593,6 +594,10 @@ class _VirtDriverTestCase(_FakeDriverBackendTestCase): - self.assertIn('username', console_pool) - self.assertIn('password', console_pool) - -+ @mock.patch( -+ 'nova.tests.unit.virt.libvirt.fakelibvirt.Domain.jobStats', -+ new=mock.Mock(return_value={ -+ 'type': fakelibvirt.VIR_DOMAIN_JOB_COMPLETED})) - def test_live_migration(self): - instance_ref, network_info = self._get_running_instance() - fake_context = context.RequestContext('fake', 'fake') -diff --git a/releasenotes/notes/restore-rocky-portbinding-semantics-48e9b1fa969cc5e9.yaml b/releasenotes/notes/restore-rocky-portbinding-semantics-48e9b1fa969cc5e9.yaml -new file mode 100644 -index 0000000000..dc33e3c61d ---- /dev/null -+++ b/releasenotes/notes/restore-rocky-portbinding-semantics-48e9b1fa969cc5e9.yaml -@@ -0,0 +1,14 @@ -+--- -+fixes: -+ - | -+ In the Rocky (18.0.0) release support was added to nova to use neutron's -+ multiple port binding feature when the binding-extended API extension -+ is available. In the Train (20.0.0) release the SR-IOV live migration -+ feature broke the semantics of the vifs field in the ``migration_data`` -+ object that signals if the new multiple port binding workflow should -+ be used by always populating it even when the ``binding-extended`` API -+ extension is not present. This broke live migration for any deployment -+ that did not support the optional ``binding-extended`` API extension. -+ The Rocky behavior has now been restored enabling live migration -+ using the single port binding workflow when multiple port bindings -+ are not available. --- -2.25.1 - diff -Nru nova-21.1.2/debian/patches/series nova-21.2.0/debian/patches/series --- nova-21.1.2/debian/patches/series 2021-02-12 23:42:10.000000000 +0000 +++ nova-21.2.0/debian/patches/series 2021-04-12 12:14:07.000000000 +0000 @@ -3,5 +3,3 @@ drop-sphinx-feature-classification.patch arm-console-patch.patch add-mysql8-compatibility.patch -lp1888395-functest.patch -lp1888395-migration.patch diff -Nru nova-21.1.2/doc/source/admin/configuration/cross-cell-resize.rst nova-21.2.0/doc/source/admin/configuration/cross-cell-resize.rst --- nova-21.1.2/doc/source/admin/configuration/cross-cell-resize.rst 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/doc/source/admin/configuration/cross-cell-resize.rst 2021-03-15 10:39:49.000000000 +0000 @@ -237,7 +237,8 @@ * Instances with ports attached that have :doc:`bandwidth-aware ` resource - provider allocations. + provider allocations. Nova falls back to same-cell resize if the server has + such ports. 
* Rescheduling to alternative hosts within the same target cell in case the primary selected host fails the ``prep_snapshot_based_resize_at_dest`` call. diff -Nru nova-21.1.2/doc/source/admin/configuration/hypervisor-xen-libvirt.rst nova-21.2.0/doc/source/admin/configuration/hypervisor-xen-libvirt.rst --- nova-21.1.2/doc/source/admin/configuration/hypervisor-xen-libvirt.rst 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/doc/source/admin/configuration/hypervisor-xen-libvirt.rst 2021-03-15 10:39:50.000000000 +0000 @@ -161,7 +161,7 @@ $ openstack image set --property hypervisor_type=xen vm_mode=hvm IMAGE For more information on image metadata, refer to the `OpenStack Virtual - Image Guide `__. + Image Guide `__. #. **Libguestfs file injection**: OpenStack compute nodes can use `libguestfs `_ to inject files into an instance's image prior to diff -Nru nova-21.1.2/doc/source/admin/cpu-topologies.rst nova-21.2.0/doc/source/admin/cpu-topologies.rst --- nova-21.1.2/doc/source/admin/cpu-topologies.rst 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/doc/source/admin/cpu-topologies.rst 2021-03-15 10:39:50.000000000 +0000 @@ -639,6 +639,6 @@ instances with a NUMA topology. .. Links -.. _`Image metadata`: https://docs.openstack.org/image-guide/image-metadata.html +.. _`Image metadata`: https://docs.openstack.org/image-guide/introduction.html#image-metadata .. _`discussion`: http://lists.openstack.org/pipermail/openstack-dev/2016-March/090367.html .. _`MTTCG project`: http://wiki.qemu.org/Features/tcg-multithread diff -Nru nova-21.1.2/doc/source/admin/huge-pages.rst nova-21.2.0/doc/source/admin/huge-pages.rst --- nova-21.1.2/doc/source/admin/huge-pages.rst 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/doc/source/admin/huge-pages.rst 2021-03-15 10:39:49.000000000 +0000 @@ -239,4 +239,4 @@ .. Links .. _`Linux THP guide`: https://www.kernel.org/doc/Documentation/vm/transhuge.txt .. _`Linux hugetlbfs guide`: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt -.. _`Image metadata`: https://docs.openstack.org/image-guide/image-metadata.html +.. _`Image metadata`: https://docs.openstack.org/image-guide/introduction.html#image-metadata diff -Nru nova-21.1.2/doc/source/cli/nova-status.rst nova-21.2.0/doc/source/cli/nova-status.rst --- nova-21.1.2/doc/source/cli/nova-status.rst 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/doc/source/cli/nova-status.rst 2021-03-15 10:39:50.000000000 +0000 @@ -147,6 +147,8 @@ * Checks for the policy files are not automatically overwritten with new defaults. + * Checks for computes older than the previous major release. This check was + backported from 23.0.0 (Wallaby). 
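Editorial note, not part of the package diff: the "older than the previous major release" check documented above is implemented via ``nova.utils.raise_if_old_compute()``, whose body is not included in this diff. The sketch below approximates the comparison it performs, built only from the ``TooOldComputeService`` exception and the service-version constants that this diff does add (see the ``nova/exception.py`` and ``nova/objects/service.py`` hunks); the function name and the way the minimum service level is obtained are assumptions, not nova's actual code:

```python
from nova import exception
from nova.objects import service as service_obj


def check_oldest_compute(min_compute_service_level, scope='deployment'):
    """Hypothetical helper: raise if computes are older than N-1.

    min_compute_service_level would come from querying the services table
    for the lowest version reported by any nova-compute service.
    """
    oldest_alias = service_obj.OLDEST_SUPPORTED_SERVICE_VERSION  # 'Ussuri'
    oldest_supported = service_obj.SERVICE_VERSION_ALIASES[oldest_alias]  # 41
    if min_compute_service_level < oldest_supported:
        raise exception.TooOldComputeService(
            oldest_supported_version=oldest_alias,
            scope=scope,
            min_service_level=min_compute_service_level,
            oldest_supported_service=oldest_supported)
```

In this backport both ``nova-status upgrade check`` and service startup call the real helper and only warn rather than refuse to start, as the ``cmd/status.py``, ``api/openstack/wsgi_app.py`` and ``nova/service.py`` hunks below show.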
See Also ======== diff -Nru nova-21.1.2/doc/source/contributor/ptl-guide.rst nova-21.2.0/doc/source/contributor/ptl-guide.rst --- nova-21.1.2/doc/source/contributor/ptl-guide.rst 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/doc/source/contributor/ptl-guide.rst 2021-03-15 10:39:50.000000000 +0000 @@ -255,6 +255,9 @@ * Example: https://review.opendev.org/543580 + * Bump the oldest supported compute service version + * https://review.opendev.org/#/c/738482/ + * Create the launchpad series for the next cycle * Set the development focus of the project to the new cycle series diff -Nru nova-21.1.2/doc/source/install/verify.rst nova-21.2.0/doc/source/install/verify.rst --- nova-21.1.2/doc/source/install/verify.rst 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/doc/source/install/verify.rst 2021-03-15 10:39:50.000000000 +0000 @@ -127,3 +127,7 @@ | Result: Success | | Details: None | +--------------------------------------------------------------------+ + | Check: Older than N-1 computes | + | Result: Success | + | Details: None | + +--------------------------------------------------------------------+ diff -Nru nova-21.1.2/nova/api/openstack/wsgi_app.py nova-21.2.0/nova/api/openstack/wsgi_app.py --- nova-21.1.2/nova/api/openstack/wsgi_app.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/api/openstack/wsgi_app.py 2021-03-15 10:39:50.000000000 +0000 @@ -17,12 +17,14 @@ from oslo_log import log as logging from oslo_service import _options as service_opts from paste import deploy +import six from nova import config from nova import context from nova import exception from nova import objects from nova import service +from nova import utils CONF = cfg.CONF @@ -40,6 +42,11 @@ def _setup_service(host, name): + try: + utils.raise_if_old_compute() + except exception.TooOldComputeService as e: + logging.getLogger(__name__).warning(six.text_type(e)) + binary = name if name.startswith('nova-') else "nova-%s" % name ctxt = context.get_admin_context() diff -Nru nova-21.1.2/nova/cmd/status.py nova-21.2.0/nova/cmd/status.py --- nova-21.1.2/nova/cmd/status.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/cmd/status.py 2021-03-15 10:39:50.000000000 +0000 @@ -411,6 +411,17 @@ policy.reset() return status + def _check_old_computes(self): + # warn if there are computes in the system older than the previous + # major release + try: + utils.raise_if_old_compute() + except exception.TooOldComputeService as e: + return upgradecheck.Result( + upgradecheck.Code.WARNING, six.text_type(e)) + + return upgradecheck.Result(upgradecheck.Code.SUCCESS) + # The format of the check functions is to return an upgradecheck.Result # object with the appropriate upgradecheck.Code and details set. 
If the # check hits warnings or failures then those should be stored in the @@ -429,6 +440,8 @@ (_('Cinder API'), _check_cinder), # Added in Ussuri (_('Policy Scope-based Defaults'), _check_policy), + # Backported from Wallaby + (_('Older than N-1 computes'), _check_old_computes) ) diff -Nru nova-21.1.2/nova/compute/api.py nova-21.2.0/nova/compute/api.py --- nova-21.1.2/nova/compute/api.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/compute/api.py 2021-03-15 10:39:50.000000000 +0000 @@ -2116,22 +2116,22 @@ return True return False - def _local_delete_cleanup(self, context, instance): + def _local_delete_cleanup(self, context, instance_uuid): # NOTE(aarents) Ensure instance allocation is cleared and instance # mapping queued as deleted before _delete() return try: self.placementclient.delete_allocation_for_instance( - context, instance.uuid) + context, instance_uuid) except exception.AllocationDeleteFailed: LOG.info("Allocation delete failed during local delete cleanup.", - instance=instance) + instance_uuid=instance_uuid) try: - self._update_queued_for_deletion(context, instance, True) + self._update_queued_for_deletion(context, instance_uuid, True) except exception.InstanceMappingNotFound: LOG.info("Instance Mapping does not exist while attempting " "local delete cleanup.", - instance=instance) + instance_uuid=instance_uuid) def _attempt_delete_of_buildrequest(self, context, instance): # If there is a BuildRequest then the instance may not have been @@ -2168,7 +2168,7 @@ if not instance.host and not may_have_ports_or_volumes: try: if self._delete_while_booting(context, instance): - self._local_delete_cleanup(context, instance) + self._local_delete_cleanup(context, instance.uuid) return # If instance.host was not set it's possible that the Instance # object here was pulled from a BuildRequest object and is not @@ -2177,6 +2177,11 @@ # properly. A lookup is attempted which will either return a # full Instance or None if not found. If not found then it's # acceptable to skip the rest of the delete processing. + + # Save a copy of the instance UUID early, in case + # _lookup_instance returns instance = None, to pass to + # _local_delete_cleanup if needed. + instance_uuid = instance.uuid cell, instance = self._lookup_instance(context, instance.uuid) if cell and instance: try: @@ -2187,11 +2192,11 @@ except exception.InstanceNotFound: pass # The instance was deleted or is already gone. - self._local_delete_cleanup(context, instance) + self._local_delete_cleanup(context, instance.uuid) return if not instance: # Instance is already deleted. 
- self._local_delete_cleanup(context, instance) + self._local_delete_cleanup(context, instance_uuid) return except exception.ObjectActionError: # NOTE(melwitt): This means the instance.host changed @@ -2204,7 +2209,7 @@ cell, instance = self._lookup_instance(context, instance.uuid) if not instance: # Instance is already deleted - self._local_delete_cleanup(context, instance) + self._local_delete_cleanup(context, instance_uuid) return bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( @@ -2248,7 +2253,7 @@ 'field, its vm_state is %(state)s.', {'state': instance.vm_state}, instance=instance) - self._local_delete_cleanup(context, instance) + self._local_delete_cleanup(context, instance.uuid) return except exception.ObjectActionError as ex: # The instance's host likely changed under us as @@ -2433,7 +2438,7 @@ instance.destroy() @staticmethod - def _update_queued_for_deletion(context, instance, qfd): + def _update_queued_for_deletion(context, instance_uuid, qfd): # NOTE(tssurya): We query the instance_mapping record of this instance # and update the queued_for_delete flag to True (or False according to # the state of the instance). This just means that the instance is @@ -2442,7 +2447,7 @@ # value could be stale which is fine, considering its use is only # during down cell (desperate) situation. im = objects.InstanceMapping.get_by_instance_uuid(context, - instance.uuid) + instance_uuid) im.queued_for_delete = qfd im.save() @@ -2454,7 +2459,7 @@ instance.save() else: self.compute_rpcapi.terminate_instance(context, instance, bdms) - self._update_queued_for_deletion(context, instance, True) + self._update_queued_for_deletion(context, instance.uuid, True) def _do_soft_delete(self, context, instance, bdms, local=False): if local: @@ -2464,7 +2469,7 @@ instance.save() else: self.compute_rpcapi.soft_delete_instance(context, instance) - self._update_queued_for_deletion(context, instance, True) + self._update_queued_for_deletion(context, instance.uuid, True) # NOTE(maoy): we allow delete to be called no matter what vm_state says. @check_instance_lock @@ -2517,7 +2522,7 @@ instance.task_state = None instance.deleted_at = None instance.save(expected_task_state=[None]) - self._update_queued_for_deletion(context, instance, False) + self._update_queued_for_deletion(context, instance.uuid, False) @check_instance_lock @check_instance_state(task_state=None, @@ -3804,8 +3809,7 @@ migration, migration.source_compute) - @staticmethod - def _allow_cross_cell_resize(context, instance): + def _allow_cross_cell_resize(self, context, instance): """Determine if the request can perform a cross-cell resize on this instance. 
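Editorial note, not part of the package diff: the next hunk is what produces the same-cell fallback described in ``port_with_resource_request.rst`` at the top of this diff: ``_allow_cross_cell_resize()`` now also returns ``False`` when any of the server's ports carry a resource request (bandwidth-aware/QoS ports). A condensed, illustrative sketch of the resulting decision order (not nova's actual method; the parameters stand in for values the real code looks up itself):

```python
def allow_cross_cell_resize(network_api, context, instance,
                            allowed_by_policy, min_compute_version,
                            min_version_required):
    # 1. Operator policy must allow cross-cell resize for this request.
    if not allowed_by_policy:
        return False
    # 2. Every compute service must be new enough for cross-cell resize.
    if min_compute_version < min_version_required:
        return False
    # 3. New in 21.2.0: ports with resource requests are not supported
    #    across cells, so nova falls back to a same-cell resize instead.
    if network_api.get_requested_resource_for_instance(context,
                                                       instance.uuid):
        return False
    return True
```

A ``False`` result here means the resize proceeds through the traditional same-cell path, which matches the updated wording in the API guide and in ``cross-cell-resize.rst`` earlier in this diff.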
@@ -3835,7 +3839,17 @@ 'version in the deployment %s is less than %s so ' 'cross-cell resize is not allowed at this time.', min_compute_version, MIN_COMPUTE_CROSS_CELL_RESIZE) - allowed = False + return False + + if self.network_api.get_requested_resource_for_instance( + context, instance.uuid): + LOG.info( + 'Request is allowed by policy to perform cross-cell ' + 'resize but the instance has ports with resource request ' + 'and cross-cell resize is not supported with such ports.', + instance=instance) + return False + return allowed @staticmethod diff -Nru nova-21.1.2/nova/compute/manager.py nova-21.2.0/nova/compute/manager.py --- nova-21.1.2/nova/compute/manager.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/compute/manager.py 2021-03-15 10:39:50.000000000 +0000 @@ -1394,6 +1394,13 @@ eventlet.semaphore.BoundedSemaphore( CONF.compute.max_concurrent_disk_ops) + if CONF.compute.max_disk_devices_to_attach == 0: + msg = _('[compute]max_disk_devices_to_attach has been set to 0, ' + 'which will prevent instances from being able to boot. ' + 'Set -1 for unlimited or set >= 1 to limit the maximum ' + 'number of disk devices.') + raise exception.InvalidConfiguration(msg) + self.driver.init_host(host=self.host) context = nova.context.get_admin_context() instances = objects.InstanceList.get_by_host( @@ -3441,19 +3448,11 @@ self._notify_instance_rebuild_error(context, instance, e, bdms) raise else: - instance.apply_migration_context() - # NOTE (ndipanov): This save will now update the host and node - # attributes making sure that next RT pass is consistent since - # it will be based on the instance and not the migration DB - # entry. - instance.host = self.host - instance.node = scheduled_node - instance.save() - instance.drop_migration_context() - - # NOTE (ndipanov): Mark the migration as done only after we - # mark the instance as belonging to this host. - self._set_migration_status(migration, 'done') + # NOTE(gibi): Let the resource tracker set the instance + # host and drop the migration context as we need to hold the + # COMPUTE_RESOURCE_SEMAPHORE to avoid the race with + # _update_available_resources. See bug 1896463. 
+ self.rt.finish_evacuation(instance, scheduled_node, migration) def _do_rebuild_instance_with_claim( self, context, instance, orig_image_ref, image_meta, @@ -7321,9 +7320,33 @@ @wrap_instance_fault def swap_volume(self, context, old_volume_id, new_volume_id, instance, new_attachment_id): - """Swap volume for an instance.""" - context = context.elevated() + """Replace the old volume with the new volume within the active server + :param context: User request context + :param old_volume_id: Original volume id + :param new_volume_id: New volume id being swapped to + :param instance: Instance with original_volume_id attached + :param new_attachment_id: ID of the new attachment for new_volume_id + """ + @utils.synchronized(instance.uuid) + def _do_locked_swap_volume(context, old_volume_id, new_volume_id, + instance, new_attachment_id): + self._do_swap_volume(context, old_volume_id, new_volume_id, + instance, new_attachment_id) + _do_locked_swap_volume(context, old_volume_id, new_volume_id, instance, + new_attachment_id) + + def _do_swap_volume(self, context, old_volume_id, new_volume_id, + instance, new_attachment_id): + """Replace the old volume with the new volume within the active server + + :param context: User request context + :param old_volume_id: Original volume id + :param new_volume_id: New volume id being swapped to + :param instance: Instance with original_volume_id attached + :param new_attachment_id: ID of the new attachment for new_volume_id + """ + context = context.elevated() compute_utils.notify_about_volume_swap( context, instance, self.host, fields.NotificationPhase.START, @@ -7694,15 +7717,18 @@ LOG.info('Destination was ready for NUMA live migration, ' 'but source is either too old, or is set to an ' 'older upgrade level.', instance=instance) - # Create migrate_data vifs - migrate_data.vifs = \ - migrate_data_obj.VIFMigrateData.create_skeleton_migrate_vifs( - instance.get_network_info()) - # Claim PCI devices for VIFs on destination (if needed) - port_id_to_pci = self._claim_pci_for_instance_vifs(ctxt, instance) - # Update migrate VIFs with the newly claimed PCI devices - self._update_migrate_vifs_profile_with_pci(migrate_data.vifs, - port_id_to_pci) + if self.network_api.supports_port_binding_extension(ctxt): + # Create migrate_data vifs + migrate_data.vifs = \ + migrate_data_obj.\ + VIFMigrateData.create_skeleton_migrate_vifs( + instance.get_network_info()) + # Claim PCI devices for VIFs on destination (if needed) + port_id_to_pci = self._claim_pci_for_instance_vifs( + ctxt, instance) + # Update migrate VIFs with the newly claimed PCI devices + self._update_migrate_vifs_profile_with_pci( + migrate_data.vifs, port_id_to_pci) finally: self.driver.cleanup_live_migration_destination_check(ctxt, dest_check_data) @@ -7870,8 +7896,12 @@ # determine if it should wait for a 'network-vif-plugged' event # from neutron before starting the actual guest transfer in the # hypervisor + using_multiple_port_bindings = ( + 'vifs' in migrate_data and migrate_data.vifs) migrate_data.wait_for_vif_plugged = ( - CONF.compute.live_migration_wait_for_vif_plug) + CONF.compute.live_migration_wait_for_vif_plug and + using_multiple_port_bindings + ) # NOTE(tr3buchet): setup networks on destination host self.network_api.setup_networks_on_host(context, instance, @@ -7923,8 +7953,8 @@ # We don't generate events if CONF.vif_plugging_timeout=0 # meaning that the operator disabled using them. 
if CONF.vif_plugging_timeout: - return [('network-vif-plugged', vif['id']) - for vif in instance.get_network_info()] + return (instance.get_network_info() + .get_live_migration_plug_time_events()) else: return [] diff -Nru nova-21.1.2/nova/compute/resource_tracker.py nova-21.2.0/nova/compute/resource_tracker.py --- nova-21.1.2/nova/compute/resource_tracker.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/compute/resource_tracker.py 2021-03-15 10:39:50.000000000 +0000 @@ -1786,3 +1786,21 @@ """ self.pci_tracker.free_instance_claims(context, instance) self.pci_tracker.save(context) + + @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True) + def finish_evacuation(self, instance, node, migration): + instance.apply_migration_context() + # NOTE (ndipanov): This save will now update the host and node + # attributes making sure that next RT pass is consistent since + # it will be based on the instance and not the migration DB + # entry. + instance.host = self.host + instance.node = node + instance.save() + instance.drop_migration_context() + + # NOTE (ndipanov): Mark the migration as done only after we + # mark the instance as belonging to this host. + if migration: + migration.status = 'done' + migration.save() diff -Nru nova-21.1.2/nova/compute/utils.py nova-21.2.0/nova/compute/utils.py --- nova-21.1.2/nova/compute/utils.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/compute/utils.py 2021-03-15 10:39:50.000000000 +0000 @@ -1112,9 +1112,18 @@ max_count, project_id=None, user_id=None, orig_num_req=None): """Enforce quota limits on number of instances created.""" - # project_id is used for the TooManyInstances error message + # project_id is also used for the TooManyInstances error message if project_id is None: project_id = context.project_id + if user_id is None: + user_id = context.user_id + # Check whether we need to count resources per-user and check a per-user + # quota limit. If we have no per-user quota limit defined for a + # project/user, we can avoid wasteful resource counting. + user_quotas = objects.Quotas.get_all_by_project_and_user( + context, project_id, user_id) + if not any(r in user_quotas for r in ['instances', 'cores', 'ram']): + user_id = None # Determine requested cores and ram req_cores = max_count * instance_type.vcpus req_ram = max_count * instance_type.memory_mb diff -Nru nova-21.1.2/nova/conf/compute.py nova-21.2.0/nova/conf/compute.py --- nova-21.1.2/nova/conf/compute.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/conf/compute.py 2021-03-15 10:39:50.000000000 +0000 @@ -949,10 +949,16 @@ The configured maximum is not enforced on shelved offloaded servers, as they have no compute host. +.. warning:: If this option is set to 0, the ``nova-compute`` service will fail + to start, as 0 disk devices is an invalid configuration that would + prevent instances from being able to boot. + Possible values: * -1 means unlimited -* Any integer >= 0 represents the maximum allowed +* Any integer >= 1 represents the maximum allowed. A value of 0 will cause the + ``nova-compute`` service to fail to start, as 0 disk devices is an invalid + configuration that would prevent instances from being able to boot. 
"""), ] diff -Nru nova-21.1.2/nova/db/sqlalchemy/api.py nova-21.2.0/nova/db/sqlalchemy/api.py --- nova-21.1.2/nova/db/sqlalchemy/api.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/db/sqlalchemy/api.py 2021-03-15 10:39:50.000000000 +0000 @@ -47,6 +47,7 @@ from sqlalchemy.orm import aliased from sqlalchemy.orm import joinedload from sqlalchemy.orm import noload +from sqlalchemy.orm import subqueryload from sqlalchemy.orm import undefer from sqlalchemy.schema import Table from sqlalchemy import sql @@ -1266,13 +1267,27 @@ continue if 'extra.' in column: query = query.options(undefer(column)) + elif column in ['metadata', 'system_metadata']: + # NOTE(melwitt): We use subqueryload() instead of joinedload() for + # metadata and system_metadata because of the one-to-many + # relationship of the data. Directly joining these columns can + # result in a large number of additional rows being queried if an + # instance has a large number of (system_)metadata items, resulting + # in a large data transfer. Instead, the subqueryload() will + # perform additional queries to obtain metadata and system_metadata + # for the instance. + query = query.options(subqueryload(column)) else: query = query.options(joinedload(column)) # NOTE(alaski) Stop lazy loading of columns not needed. for col in ['metadata', 'system_metadata']: if col not in columns_to_join: query = query.options(noload(col)) - return query + # NOTE(melwitt): We need to use order_by() so that the + # additional queries emitted by subqueryload() include the same ordering as + # used by the parent query. + # https://docs.sqlalchemy.org/en/13/orm/loading_relationships.html#the-importance-of-ordering + return query.order_by(models.Instance.id) def _instances_fill_metadata(context, instances, manual_joins=None): diff -Nru nova-21.1.2/nova/exception.py nova-21.2.0/nova/exception.py --- nova-21.1.2/nova/exception.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/exception.py 2021-03-15 10:39:50.000000000 +0000 @@ -553,6 +553,14 @@ "Unable to continue.") +class TooOldComputeService(Invalid): + msg_fmt = _("Current Nova version does not support computes older than " + "%(oldest_supported_version)s but the minimum compute service " + "level in your %(scope)s is %(min_service_level)d and the " + "oldest supported service level is " + "%(oldest_supported_service)d.") + + class DestinationDiskExists(Invalid): msg_fmt = _("The supplied disk path (%(path)s) already exists, " "it is expected not to exist.") diff -Nru nova-21.1.2/nova/network/constants.py nova-21.2.0/nova/network/constants.py --- nova-21.1.2/nova/network/constants.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/network/constants.py 2021-03-15 10:39:50.000000000 +0000 @@ -20,6 +20,7 @@ MULTI_NET_EXT = 'Multi Provider Network' FIP_PORT_DETAILS = 'Floating IP Port Details Extension' SUBSTR_PORT_FILTERING = 'IP address substring filtering' +PORT_BINDING = 'Port Binding' PORT_BINDING_EXTENDED = 'Port Bindings Extended' LIVE_MIGRATION = 'live-migration' DEFAULT_SECGROUP = 'default' diff -Nru nova-21.1.2/nova/network/model.py nova-21.2.0/nova/network/model.py --- nova-21.1.2/nova/network/model.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/network/model.py 2021-03-15 10:39:50.000000000 +0000 @@ -469,6 +469,14 @@ return (self.is_hybrid_plug_enabled() and not migration.is_same_host()) + @property + def has_live_migration_plug_time_event(self): + """Returns whether this VIF's network-vif-plugged external event will + be sent by Neutron at "plugtime" - in 
other words, as soon as neutron + completes configuring the network backend. + """ + return self.is_hybrid_plug_enabled() + def is_hybrid_plug_enabled(self): return self['details'].get(VIF_DETAILS_OVS_HYBRID_PLUG, False) @@ -530,15 +538,22 @@ return jsonutils.dumps(self) def get_bind_time_events(self, migration): - """Returns whether any of our VIFs have "bind-time" events. See - has_bind_time_event() docstring for more details. + """Returns a list of external events for any VIFs that have + "bind-time" events during cold migration. """ return [('network-vif-plugged', vif['id']) for vif in self if vif.has_bind_time_event(migration)] + def get_live_migration_plug_time_events(self): + """Returns a list of external events for any VIFs that have + "plug-time" events during live migration. + """ + return [('network-vif-plugged', vif['id']) + for vif in self if vif.has_live_migration_plug_time_event] + def get_plug_time_events(self, migration): - """Complementary to get_bind_time_events(), any event that does not - fall in that category is a plug-time event. + """Returns a list of external events for any VIFs that have + "plug-time" events during cold migration. """ return [('network-vif-plugged', vif['id']) for vif in self if not vif.has_bind_time_event(migration)] diff -Nru nova-21.1.2/nova/objects/service.py nova-21.2.0/nova/objects/service.py --- nova-21.1.2/nova/objects/service.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/objects/service.py 2021-03-15 10:39:50.000000000 +0000 @@ -187,6 +187,13 @@ {'compute_rpc': '5.11'}, ) +# This is used to raise an error at service startup if older than N-1 computes +# are detected. Update this at the beginning of every release cycle +OLDEST_SUPPORTED_SERVICE_VERSION = 'Ussuri' +SERVICE_VERSION_ALIASES = { + 'Ussuri': 41 +} + # TODO(berrange): Remove NovaObjectDictCompat @base.NovaObjectRegistry.register diff -Nru nova-21.1.2/nova/service.py nova-21.2.0/nova/service.py --- nova-21.1.2/nova/service.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/service.py 2021-03-15 10:39:50.000000000 +0000 @@ -26,6 +26,7 @@ import oslo_messaging as messaging from oslo_service import service from oslo_utils import importutils +import six from nova.api import wsgi as api_wsgi from nova import baserpc @@ -268,6 +269,17 @@ periodic_fuzzy_delay=periodic_fuzzy_delay, periodic_interval_max=periodic_interval_max) + # NOTE(gibi): This have to be after the service object creation as + # that is the point where we can safely use the RPC to the conductor. + # E.g. the Service.__init__ actually waits for the conductor to start + # up before it allows the service to be created. The + # raise_if_old_compute() depends on the RPC to be up and does not + # implement its own retry mechanism to connect to the conductor. 
+ try: + utils.raise_if_old_compute() + except exception.TooOldComputeService as e: + LOG.warning(six.text_type(e)) + return service_obj def kill(self): diff -Nru nova-21.1.2/nova/tests/functional/api/client.py nova-21.2.0/nova/tests/functional/api/client.py --- nova-21.1.2/nova/tests/functional/api/client.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/functional/api/client.py 2021-03-15 10:39:50.000000000 +0000 @@ -539,16 +539,20 @@ def get_server_diagnostics(self, server_id): return self.api_get('/servers/%s/diagnostics' % server_id).body - def get_quota_detail(self, project_id=None): + def get_quota_detail(self, project_id=None, user_id=None): if not project_id: project_id = self.project_id - return self.api_get( - '/os-quota-sets/%s/detail' % project_id).body['quota_set'] + url = '/os-quota-sets/%s/detail' + if user_id: + url += '?user_id=%s' % user_id + return self.api_get(url % project_id).body['quota_set'] - def update_quota(self, quotas, project_id=None): + def update_quota(self, quotas, project_id=None, user_id=None): if not project_id: project_id = self.project_id + url = '/os-quota-sets/%s' + if user_id: + url += '?user_id=%s' % user_id body = {'quota_set': {}} body['quota_set'].update(quotas) - return self.api_put( - '/os-quota-sets/%s' % project_id, body).body['quota_set'] + return self.api_put(url % project_id, body).body['quota_set'] diff -Nru nova-21.1.2/nova/tests/functional/regressions/test_bug_1888395.py nova-21.2.0/nova/tests/functional/regressions/test_bug_1888395.py --- nova-21.1.2/nova/tests/functional/regressions/test_bug_1888395.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-21.2.0/nova/tests/functional/regressions/test_bug_1888395.py 2021-03-15 10:39:50.000000000 +0000 @@ -0,0 +1,131 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import fixtures + +from lxml import etree +from urllib import parse as urlparse + +from nova import context +from nova.network import constants as neutron_constants +from nova.network import neutron +from nova.tests.functional.libvirt import base as libvirt_base +from nova.tests.unit.virt.libvirt import fake_os_brick_connector +from nova.tests.unit.virt.libvirt import fakelibvirt + + +class TestLiveMigrationWithoutMultiplePortBindings( + libvirt_base.ServersTestBase): + """Regression test for bug 1888395. + + This regression test asserts that Live migration works when + neutron does not support the binding-extended api extension + and the legacy single port binding workflow is used. 
+ """ + + ADMIN_API = True + api_major_version = 'v2.1' + microversion = 'latest' + + def list_extensions(self, *args, **kwargs): + return { + 'extensions': [ + { + # Copied from neutron-lib portbindings.py + "updated": "2014-02-03T10:00:00-00:00", + "name": neutron_constants.PORT_BINDING, + "links": [], + "alias": "binding", + "description": "Expose port bindings of a virtual port to " + "external application" + } + ] + } + + def setUp(self): + self.flags(instances_path=self.useFixture(fixtures.TempDir()).path) + super().setUp() + self.neutron.list_extensions = self.list_extensions + self.neutron_api = neutron.API() + # TODO(sean-k-mooney): remove after + # I275509eb0e0eb9eaf26fe607b7d9a67e1edc71f8 + # has merged. + self.useFixture(fixtures.MonkeyPatch( + 'nova.virt.libvirt.driver.connector', + fake_os_brick_connector)) + + self.start_computes({ + 'start_host': fakelibvirt.HostInfo( + cpu_nodes=1, cpu_sockets=1, cpu_cores=4, cpu_threads=2, + kB_mem=10740000), + 'end_host': fakelibvirt.HostInfo( + cpu_nodes=1, cpu_sockets=1, cpu_cores=4, cpu_threads=2, + kB_mem=10740000)}) + + self.ctxt = context.get_admin_context() + # TODO(sean-k-mooney): remove this when it is part of ServersTestBase + self.useFixture(fixtures.MonkeyPatch( + 'nova.tests.unit.virt.libvirt.fakelibvirt.Domain.migrateToURI3', + self._migrate_stub)) + + def _migrate_stub(self, domain, destination, params, flags): + """Stub out migrateToURI3.""" + + src_hostname = domain._connection.hostname + dst_hostname = urlparse.urlparse(destination).netloc + + # In a real live migration, libvirt and QEMU on the source and + # destination talk it out, resulting in the instance starting to exist + # on the destination. Fakelibvirt cannot do that, so we have to + # manually create the "incoming" instance on the destination + # fakelibvirt. + dst = self.computes[dst_hostname] + dst.driver._host.get_connection().createXML( + params['destination_xml'], + 'fake-createXML-doesnt-care-about-flags') + + src = self.computes[src_hostname] + conn = src.driver._host.get_connection() + + # because migrateToURI3 is spawned in a background thread, this method + # does not block the upper nova layers. Because we don't want nova to + # think the live migration has finished until this method is done, the + # last thing we do is make fakelibvirt's Domain.jobStats() return + # VIR_DOMAIN_JOB_COMPLETED. 
+ server = etree.fromstring( + params['destination_xml'] + ).find('./uuid').text + dom = conn.lookupByUUIDString(server) + dom.complete_job() + + def test_live_migrate(self): + server = self._create_server( + host='start_host', + networks=[{'port': self.neutron.port_1['id']}]) + + self.assertFalse( + self.neutron_api.supports_port_binding_extension(self.ctxt)) + # TODO(sean-k-mooney): extend _live_migrate to support passing a host + self.api.post_server_action( + server['id'], + { + 'os-migrateLive': { + 'host': 'end_host', + 'block_migration': 'auto' + } + } + ) + + self._wait_for_server_parameter( + server, {'OS-EXT-SRV-ATTR:host': 'end_host', 'status': 'ACTIVE'}) + msg = "NotImplementedError: Cannot load 'vif_type' in the base class" + self.assertNotIn(msg, self.stdlog.logger.output) diff -Nru nova-21.1.2/nova/tests/functional/regressions/test_bug_1893284.py nova-21.2.0/nova/tests/functional/regressions/test_bug_1893284.py --- nova-21.1.2/nova/tests/functional/regressions/test_bug_1893284.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-21.2.0/nova/tests/functional/regressions/test_bug_1893284.py 2021-03-15 10:39:50.000000000 +0000 @@ -0,0 +1,94 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova import test +from nova.tests import fixtures as nova_fixtures +from nova.tests.functional.api import client as api_client +from nova.tests.functional import fixtures as func_fixtures +from nova.tests.functional import integrated_helpers +from nova.tests.unit.image import fake as fake_image +from nova.tests.unit import policy_fixture + + +class TestServersPerUserQuota(test.TestCase, + integrated_helpers.InstanceHelperMixin): + """This tests a regression introduced in the Pike release. + + In Pike we started counting resources for quota limit checking instead of + tracking usages in a separate database table. As part of that change, + per-user quota functionality was broken for server creates. + + When mulitple users in the same project have per-user quota, they are meant + to be allowed to create resources such that may not exceed their + per-user quota nor their project quota. + + If a project has an 'instances' quota of 10 and user A has a quota of 1 + and user B has a quota of 1, both users should each be able to create 1 + server. + + Because of the bug, in this scenario user A will succeed in creating a + server but user B will fail to create a server with a 403 "quota exceeded" + error because the 'instances' resource count isn't being correctly scoped + per-user. 
+ """ + def setUp(self): + super(TestServersPerUserQuota, self).setUp() + self.useFixture(policy_fixture.RealPolicyFixture()) + self.useFixture(nova_fixtures.NeutronFixture(self)) + self.useFixture(func_fixtures.PlacementFixture()) + + api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( + api_version='v2.1')) + self.api = api_fixture.api + self.admin_api = api_fixture.admin_api + self.api.microversion = '2.37' # so we can specify networks='none' + self.admin_api.microversion = '2.37' + + fake_image.stub_out_image_service(self) + self.addCleanup(fake_image.FakeImageService_reset) + + self.start_service('conductor') + self.start_service('scheduler') + self.start_service('compute') + + def test_create_server_with_per_user_quota(self): + # Set per-user quota for the non-admin user to allow 1 instance. + # The default quota for the project is 10 instances. + quotas = {'instances': 1} + self.admin_api.update_quota( + quotas, project_id=self.api.project_id, user_id=self.api.auth_user) + # Verify that the non-admin user has a quota limit of 1 instance. + quotas = self.api.get_quota_detail(user_id=self.api.auth_user) + self.assertEqual(1, quotas['instances']['limit']) + # Verify that the admin user has a quota limit of 10 instances. + quotas = self.api.get_quota_detail(user_id=self.admin_api.auth_user) + self.assertEqual(10, quotas['instances']['limit']) + # Boot one instance into the default project as the admin user. + # This results in usage of 1 instance for the project and 1 instance + # for the admin user. + self._create_server( + image_uuid=fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID, + networks='none', api=self.admin_api) + # Now try to boot an instance as the non-admin user. + # This should succeed because the non-admin user has 0 instances and + # the project limit allows 10 instances. + server_req = self._build_server( + image_uuid=fake_image.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID, + networks='none') + server = self.api.post_server({'server': server_req}) + self._wait_for_state_change(server, 'ACTIVE') + # A request to boot a second instance should fail because the + # non-admin has already booted 1 allowed instance. + ex = self.assertRaises( + api_client.OpenStackApiException, self.api.post_server, + {'server': server_req}) + self.assertEqual(403, ex.response.status_code) diff -Nru nova-21.1.2/nova/tests/functional/regressions/test_bug_1896463.py nova-21.2.0/nova/tests/functional/regressions/test_bug_1896463.py --- nova-21.1.2/nova/tests/functional/regressions/test_bug_1896463.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-21.2.0/nova/tests/functional/regressions/test_bug_1896463.py 2021-03-15 10:39:50.000000000 +0000 @@ -0,0 +1,224 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import copy +import fixtures +import time + +from oslo_config import cfg + +from nova import context +from nova import objects +from nova import test +from nova.tests import fixtures as nova_fixtures +from nova.tests.functional import fixtures as func_fixtures +from nova.tests.functional import integrated_helpers +from nova.tests.unit.image import fake as fake_image +from nova import utils +from nova.virt import fake + + +CONF = cfg.CONF + + +class TestEvacuateResourceTrackerRace( + test.TestCase, integrated_helpers.InstanceHelperMixin, +): + """Demonstrate bug #1896463. + + Trigger a race condition between an almost finished evacuation that is + dropping the migration context, and the _update_available_resource() + periodic task that already loaded the instance list but haven't loaded the + migration list yet. The result is that the PCI allocation made by the + evacuation is deleted by the overlapping periodic task run and the instance + will not have PCI allocation after the evacuation. + """ + + def setUp(self): + super().setUp() + self.neutron = self.useFixture(nova_fixtures.NeutronFixture(self)) + fake_image.stub_out_image_service(self) + self.addCleanup(fake_image.FakeImageService_reset) + self.placement = self.useFixture(func_fixtures.PlacementFixture()).api + + self.api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( + api_version='v2.1')) + + self.admin_api = self.api_fixture.admin_api + self.admin_api.microversion = 'latest' + self.api = self.admin_api + + self.start_service('conductor') + self.start_service('scheduler') + + self.flags(compute_driver='fake.FakeDriverWithPciResources') + self.useFixture( + fake.FakeDriverWithPciResources. + FakeDriverWithPciResourcesConfigFixture()) + + self.compute1 = self._start_compute('host1') + self.compute1_id = self._get_compute_node_id_by_host('host1') + self.compute1_service_id = self.admin_api.get_services( + host='host1', binary='nova-compute')[0]['id'] + + self.compute2 = self._start_compute('host2') + self.compute2_id = self._get_compute_node_id_by_host('host2') + self.compute2_service_id = self.admin_api.get_services( + host='host2', binary='nova-compute')[0]['id'] + + # add extra ports and the related network to the neutron fixture + # specifically for these tests. It cannot be added globally in the + # fixture init as it adds a second network that makes auto allocation + # based test to fail due to ambiguous networks. + self.neutron._ports[self.neutron.sriov_port['id']] = \ + copy.deepcopy(self.neutron.sriov_port) + self.neutron._networks[ + self.neutron.network_2['id']] = self.neutron.network_2 + self.neutron._subnets[ + self.neutron.subnet_2['id']] = self.neutron.subnet_2 + + self.ctxt = context.get_admin_context() + + def _get_compute_node_id_by_host(self, host): + # we specifically need the integer id of the node not the UUID so we + # need to use the old microversion + with utils.temporary_mutation(self.admin_api, microversion='2.52'): + hypers = self.admin_api.api_get( + 'os-hypervisors').body['hypervisors'] + for hyper in hypers: + if hyper['hypervisor_hostname'] == host: + return hyper['id'] + + self.fail('Hypervisor with hostname=%s not found' % host) + + def _assert_pci_device_allocated( + self, instance_uuid, compute_node_id, num=1): + """Assert that a given number of PCI devices are allocated to the + instance on the given host. 
+ """ + + devices = objects.PciDeviceList.get_by_instance_uuid( + self.ctxt, instance_uuid) + devices_on_host = [dev for dev in devices + if dev.compute_node_id == compute_node_id] + self.assertEqual(num, len(devices_on_host)) + + def test_evacuate_races_with_update_available_resource(self): + # Create a server with a direct port to have PCI allocation + server = self._create_server( + name='test-server-for-bug-1896463', + networks=[{'port': self.neutron.sriov_port['id']}], + host='host1' + ) + + self._assert_pci_device_allocated(server['id'], self.compute1_id) + self._assert_pci_device_allocated( + server['id'], self.compute2_id, num=0) + + # stop and force down the compute the instance is on to allow + # evacuation + self.compute1.stop() + self.admin_api.put_service( + self.compute1_service_id, {'forced_down': 'true'}) + + # Inject some sleeps both in the Instance.drop_migration_context and + # the MigrationList.get_in_progress_by_host_and_node code to make them + # overlap. + # We want to create the following execution scenario: + # 1) The evacuation makes a move claim on the dest including the PCI + # claim. This means there is a migration context. But the evacuation + # is not complete yet so the instance.host does not point to the + # dest host. + # 2) The dest resource tracker starts an _update_available_resource() + # periodic task and this task loads the list of instances on its + # host from the DB. Our instance is not in this list due to #1. + # 3) The evacuation finishes, the instance.host is set to the dest host + # and the migration context is deleted. + # 4) The periodic task now loads the list of in-progress migration from + # the DB to check for incoming our outgoing migrations. However due + # to #3 our instance is not in this list either. + # 5) The periodic task cleans up every lingering PCI claim that is not + # connected to any instance collected above from the instance list + # and from the migration list. As our instance is not in either of + # the lists, the resource tracker cleans up the PCI allocation for + # the already finished evacuation of our instance. + # + # Unfortunately we cannot reproduce the above situation without sleeps. + # We need that the evac starts first then the periodic starts, but not + # finishes, then evac finishes, then periodic finishes. If I trigger + # and run the whole periodic in a wrapper of drop_migration_context + # then I could not reproduce the situation described at #4). In general + # it is not + # + # evac + # | + # | + # | periodic + # | | + # | | + # | x + # | + # | + # x + # + # but + # + # evac + # | + # | + # | periodic + # | | + # | | + # | | + # x | + # | + # x + # + # what is needed need. + # + # Starting the periodic from the test in a separate thread at + # drop_migration_context() might work but that is an extra complexity + # in the test code. Also it might need a sleep still to make the + # reproduction stable but only one sleep instead of two. + orig_drop = objects.Instance.drop_migration_context + + def slow_drop(*args, **kwargs): + time.sleep(1) + return orig_drop(*args, **kwargs) + + self.useFixture( + fixtures.MockPatch( + 'nova.objects.instance.Instance.drop_migration_context', + new=slow_drop)) + + orig_get_mig = objects.MigrationList.get_in_progress_by_host_and_node + + def slow_get_mig(*args, **kwargs): + time.sleep(2) + return orig_get_mig(*args, **kwargs) + + self.useFixture( + fixtures.MockPatch( + 'nova.objects.migration.MigrationList.' 
+ 'get_in_progress_by_host_and_node', + new=slow_get_mig)) + + self.admin_api.post_server_action(server['id'], {'evacuate': {}}) + # we trigger the _update_available_resource periodic to overlap with + # the already started evacuation + self._run_periodics() + + self._wait_for_server_parameter( + server, {'OS-EXT-SRV-ATTR:host': 'host2', 'status': 'ACTIVE'}) + + self._assert_pci_device_allocated(server['id'], self.compute1_id) + self._assert_pci_device_allocated(server['id'], self.compute2_id) diff -Nru nova-21.1.2/nova/tests/functional/regressions/test_bug_1914777.py nova-21.2.0/nova/tests/functional/regressions/test_bug_1914777.py --- nova-21.1.2/nova/tests/functional/regressions/test_bug_1914777.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-21.2.0/nova/tests/functional/regressions/test_bug_1914777.py 2021-03-15 10:39:50.000000000 +0000 @@ -0,0 +1,139 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova import context as nova_context +from nova import exception +from nova import objects +from nova import test +from nova.tests import fixtures as nova_fixtures +from nova.tests.functional import integrated_helpers +import nova.tests.unit.image.fake +from nova.tests.unit import policy_fixture + + +class TestDeleteWhileBooting(test.TestCase, + integrated_helpers.InstanceHelperMixin): + """This tests race scenarios where an instance is deleted while booting. + + In these scenarios, the nova-api service is racing with nova-conductor + service; nova-conductor is in the middle of booting the instance when + nova-api begins fulfillment of a delete request. As the two services + delete records out from under each other, both services need to handle + it properly such that a delete request will always be fulfilled. + + Another scenario where two requests can race and delete things out from + under each other is if two or more delete requests are racing while the + instance is booting. + + In order to force things into states where bugs have occurred, we must + mock some object retrievals from the database to simulate the different + points at which a delete request races with a create request or another + delete request. We aim to mock only the bare minimum necessary to recreate + the bug scenarios. + """ + def setUp(self): + super(TestDeleteWhileBooting, self).setUp() + self.useFixture(policy_fixture.RealPolicyFixture()) + self.useFixture(nova_fixtures.NeutronFixture(self)) + nova.tests.unit.image.fake.stub_out_image_service(self) + + api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( + api_version='v2.1')) + self.api = api_fixture.api + + self.ctxt = nova_context.get_context() + + # We intentionally do not start a conductor or scheduler service, since + # our goal is to simulate an instance that has not been scheduled yet. + + # Kick off a server create request and move on once it's in the BUILD + # state. Since we have no conductor or scheduler service running, the + # server will "hang" in an unscheduled state for testing. 
+ self.server = self._create_server(expected_state='BUILD') + # Simulate that a different request has deleted the build request + # record after this delete request has begun processing. (The first + # lookup of the build request occurs in the servers API to get the + # instance object in order to delete it). + # We need to get the build request now before we mock the method. + self.br = objects.BuildRequest.get_by_instance_uuid( + self.ctxt, self.server['id']) + + @mock.patch('nova.objects.build_request.BuildRequest.get_by_instance_uuid') + def test_build_request_and_instance_not_found(self, mock_get_br): + """This tests a scenario where another request has deleted the build + request record and the instance record ahead of us. + """ + # The first lookup at the beginning of the delete request in the + # ServersController succeeds and the second lookup to handle "delete + # while booting" in compute/api fails after a different request has + # deleted it. + br_not_found = exception.BuildRequestNotFound(uuid=self.server['id']) + mock_get_br.side_effect = [self.br, br_not_found, br_not_found] + self._delete_server(self.server) + + @mock.patch('nova.objects.build_request.BuildRequest.get_by_instance_uuid') + @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') + @mock.patch('nova.objects.instance.Instance.get_by_uuid') + def test_deleting_instance_at_the_same_time(self, mock_get_i, mock_get_im, + mock_get_br): + """This tests the scenario where another request is trying to delete + the instance record at the same time we are, while the instance is + booting. An example of this: while the create and delete are running at + the same time, the delete request deletes the build request, the create + request finds the build request already deleted when it tries to delete + it. The create request deletes the instance record and then delete + request tries to lookup the instance after it deletes the build + request. Its attempt to lookup the instance fails because the create + request already deleted it. + """ + # First lookup at the beginning of the delete request in the + # ServersController succeeds, second lookup to handle "delete while + # booting" in compute/api fails after the conductor has deleted it. + br_not_found = exception.BuildRequestNotFound(uuid=self.server['id']) + mock_get_br.side_effect = [self.br, br_not_found] + # Simulate the instance transitioning from having no cell assigned to + # having a cell assigned while the delete request is being processed. + # First lookup of the instance mapping has the instance unmapped (no + # cell) and subsequent lookups have the instance mapped to cell1. + no_cell_im = objects.InstanceMapping( + context=self.ctxt, instance_uuid=self.server['id'], + cell_mapping=None) + has_cell_im = objects.InstanceMapping( + context=self.ctxt, instance_uuid=self.server['id'], + cell_mapping=self.cell_mappings['cell1']) + mock_get_im.side_effect = [ + no_cell_im, has_cell_im, has_cell_im, has_cell_im, has_cell_im] + # Simulate that the instance object has been created by the conductor + # in the create path while the delete request is being processed. + # First lookups are before the instance has been deleted and the last + # lookup is after the conductor has deleted the instance. Use the build + # request to make an instance object for testing. 
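The mocks below rely on a standard unittest.mock behaviour: when side_effect is a list, each call consumes the next element, and any element that is an exception instance is raised instead of returned. A tiny generic illustration of how that simulates a record vanishing between lookups:

from unittest import mock

class NotFound(Exception):
    pass

# The first lookup still finds the record; later lookups see it deleted by
# the concurrent request.
lookup = mock.Mock(side_effect=['record', NotFound('gone'), NotFound('gone')])

assert lookup() == 'record'
try:
    lookup()
except NotFound:
    pass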
+ i = self.br.get_new_instance(self.ctxt) + i_not_found = exception.InstanceNotFound(instance_id=self.server['id']) + mock_get_i.side_effect = [i, i, i, i_not_found, i_not_found] + + # Simulate that the conductor is running instance_destroy at the same + # time as we are. + def fake_instance_destroy(*args, **kwargs): + # NOTE(melwitt): This is a misleading exception, as it is not only + # raised when a constraint on 'host' is not met, but also when two + # instance_destroy calls are racing. In this test, the soft delete + # returns 0 rows affected because another request soft deleted the + # record first. + raise exception.ObjectActionError( + action='destroy', reason='host changed') + + self.stub_out( + 'nova.objects.instance.Instance.destroy', fake_instance_destroy) + self._delete_server(self.server) diff -Nru nova-21.1.2/nova/tests/functional/test_servers.py nova-21.2.0/nova/tests/functional/test_servers.py --- nova-21.1.2/nova/tests/functional/test_servers.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/functional/test_servers.py 2021-03-15 10:39:50.000000000 +0000 @@ -42,6 +42,8 @@ from nova.network import neutron as neutronapi from nova import objects from nova.objects import block_device as block_device_obj +from nova.policies import base as base_policies +from nova.policies import servers as servers_policies from nova.scheduler import utils from nova import test from nova.tests import fixtures as nova_fixtures @@ -8258,3 +8260,117 @@ 'OS-DCF:diskConfig': 'AUTO'}}) self.assertEqual(403, ex.response.status_code) self._check_allocations_usage(self.server) + + +class CrossCellResizeWithQoSPort(PortResourceRequestBasedSchedulingTestBase): + NUMBER_OF_CELLS = 2 + + def setUp(self): + # Use our custom weigher defined above to make sure that we have + # a predictable host order in the alternate list returned by the + # scheduler for migration. + self.useFixture(nova_fixtures.HostNameWeigherFixture()) + super(CrossCellResizeWithQoSPort, self).setUp() + # start compute2 in cell2, compute1 is started in cell1 by default + self.compute2 = self._start_compute('host2', cell_name='cell2') + self.compute2_rp_uuid = self._get_provider_uuid_by_host('host2') + self._create_networking_rp_tree('host2', self.compute2_rp_uuid) + self.compute2_service_id = self.admin_api.get_services( + host='host2', binary='nova-compute')[0]['id'] + + # Enable cross-cell resize policy since it defaults to not allow + # anyone to perform that type of operation. For these tests we'll + # just allow admins to perform cross-cell resize. + self.policy.set_rules({ + servers_policies.CROSS_CELL_RESIZE: + base_policies.RULE_ADMIN_API}, + overwrite=False) + + def test_cross_cell_migrate_server_with_qos_ports(self): + """Test that cross cell migration is not supported with qos ports and + nova therefore falls back to do a same cell migration instead. + To test this properly we first make sure that there is no valid host + in the same cell but there is valid host in another cell and observe + that the migration fails with NoValidHost. Then we start a new compute + in the same cell the instance is in and retry the migration that is now + expected to pass. 
+ """ + + non_qos_normal_port = self.neutron.port_1 + qos_normal_port = self.neutron.port_with_resource_request + qos_sriov_port = self.neutron.port_with_sriov_resource_request + + server = self._create_server_with_ports_and_check_allocation( + non_qos_normal_port, qos_normal_port, qos_sriov_port) + + orig_create_binding = neutronapi.API._create_port_binding + + hosts = { + 'host1': self.compute1_rp_uuid, 'host2': self.compute2_rp_uuid} + + # Add an extra check to our neutron fixture. This check makes sure that + # the RP sent in the binding corresponds to host of the binding. In a + # real deployment this is checked by the Neutron server. As bug + # 1907522 showed we fail this check for cross cell migration with qos + # ports in a real deployment. So to reproduce that bug we need to have + # the same check in our test env too. + def spy_on_create_binding(context, client, port_id, data): + host_rp_uuid = hosts[data['binding']['host']] + device_rp_uuid = data['binding']['profile'].get('allocation') + if port_id == qos_normal_port['id']: + if device_rp_uuid != self.ovs_bridge_rp_per_host[host_rp_uuid]: + raise exception.PortBindingFailed(port_id=port_id) + elif port_id == qos_sriov_port['id']: + if (device_rp_uuid not in + self.sriov_dev_rp_per_host[host_rp_uuid].values()): + raise exception.PortBindingFailed(port_id=port_id) + + return orig_create_binding(context, client, port_id, data) + + with mock.patch( + 'nova.network.neutron.API._create_port_binding', + side_effect=spy_on_create_binding, autospec=True + ): + # We expect the migration to fail as the only available target + # host is in a different cell and while cross cell migration is + # enabled it is not supported for neutron ports with resource + # request. + self.api.post_server_action(server['id'], {'migrate': None}) + self._wait_for_migration_status(server, ['error']) + self._wait_for_server_parameter( + server, + {'status': 'ACTIVE', 'OS-EXT-SRV-ATTR:host': 'host1'}) + event = self._wait_for_action_fail_completion( + server, 'migrate', 'conductor_migrate_server') + self.assertIn( + 'exception.NoValidHost', event['traceback']) + log = self.stdlog.logger.output + self.assertIn( + 'Request is allowed by policy to perform cross-cell resize ' + 'but the instance has ports with resource request and ' + 'cross-cell resize is not supported with such ports.', + log) + self.assertNotIn( + 'nova.exception.PortBindingFailed: Binding failed for port', + log) + self.assertNotIn( + "AttributeError: 'NoneType' object has no attribute 'version'", + log) + + # Now start a new compute in the same cell as the instance and retry + # the migration. 
+ self._start_compute('host3', cell_name='cell1') + self.compute3_rp_uuid = self._get_provider_uuid_by_host('host3') + self._create_networking_rp_tree('host3', self.compute3_rp_uuid) + + with mock.patch( + 'nova.network.neutron.API._create_port_binding', + side_effect=spy_on_create_binding, autospec=True + ): + self.api.post_server_action(server['id'], {'migrate': None}) + server = self._wait_for_state_change(server, 'VERIFY_RESIZE') + + self.assertEqual('host3', server['OS-EXT-SRV-ATTR:host']) + + self._delete_server_and_check_allocations( + server, qos_normal_port, qos_sriov_port) diff -Nru nova-21.1.2/nova/tests/functional/test_service.py nova-21.2.0/nova/tests/functional/test_service.py --- nova-21.1.2/nova/tests/functional/test_service.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/functional/test_service.py 2021-03-15 10:39:50.000000000 +0000 @@ -10,7 +10,10 @@ # License for the specific language governing permissions and limitations # under the License. +from unittest import mock + from nova import context as nova_context +from nova.objects import service from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.functional import fixtures as func_fixtures @@ -98,3 +101,40 @@ self.metadata.start() # Cell cache should be empty after the service reset. self.assertEqual({}, nova_context.CELL_CACHE) + + +class TestOldComputeCheck( + test.TestCase, integrated_helpers.InstanceHelperMixin): + + def test_conductor_warns_if_old_compute(self): + old_version = service.SERVICE_VERSION_ALIASES[ + service.OLDEST_SUPPORTED_SERVICE_VERSION] - 1 + with mock.patch( + "nova.objects.service.get_minimum_version_all_cells", + return_value=old_version): + self.start_service('conductor') + self.assertIn( + 'Current Nova version does not support computes older than', + self.stdlog.logger.output) + + def test_api_warns_if_old_compute(self): + old_version = service.SERVICE_VERSION_ALIASES[ + service.OLDEST_SUPPORTED_SERVICE_VERSION] - 1 + with mock.patch( + "nova.objects.service.get_minimum_version_all_cells", + return_value=old_version): + self.useFixture(nova_fixtures.OSAPIFixture(api_version='v2.1')) + self.assertIn( + 'Current Nova version does not support computes older than', + self.stdlog.logger.output) + + def test_compute_warns_if_old_compute(self): + old_version = service.SERVICE_VERSION_ALIASES[ + service.OLDEST_SUPPORTED_SERVICE_VERSION] - 1 + with mock.patch( + "nova.objects.service.get_minimum_version_all_cells", + return_value=old_version): + self._start_compute('host1') + self.assertIn( + 'Current Nova version does not support computes older than', + self.stdlog.logger.output) diff -Nru nova-21.1.2/nova/tests/unit/api/openstack/test_requestlog.py nova-21.2.0/nova/tests/unit/api/openstack/test_requestlog.py --- nova-21.1.2/nova/tests/unit/api/openstack/test_requestlog.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/api/openstack/test_requestlog.py 2021-03-15 10:39:50.000000000 +0000 @@ -42,7 +42,8 @@ # this is the minimal set of magic mocks needed to convince # the API service it can start on it's own without a database. 
mocks = ['nova.objects.Service.get_by_host_and_binary', - 'nova.objects.Service.create'] + 'nova.objects.Service.create', + 'nova.utils.raise_if_old_compute'] self.stdlog = fixtures.StandardLogging() self.useFixture(self.stdlog) for m in mocks: diff -Nru nova-21.1.2/nova/tests/unit/cmd/test_status.py nova-21.2.0/nova/tests/unit/cmd/test_status.py --- nova-21.1.2/nova/tests/unit/cmd/test_status.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/cmd/test_status.py 2021-03-15 10:39:50.000000000 +0000 @@ -39,6 +39,7 @@ # NOTE(mriedem): We only use objects as a convenience to populate the database # in the tests, we don't use them in the actual CLI. from nova import objects +from nova.objects import service from nova import policy from nova import test from nova.tests import fixtures as nova_fixtures @@ -602,3 +603,32 @@ def setUp(self): super(TestUpgradeCheckPolicyEnableScope, self).setUp() self.flags(enforce_scope=True, group="oslo_policy") + + +class TestUpgradeCheckOldCompute(test.NoDBTestCase): + + def setUp(self): + super(TestUpgradeCheckOldCompute, self).setUp() + self.cmd = status.UpgradeCommands() + + def test_no_compute(self): + self.assertEqual( + upgradecheck.Code.SUCCESS, self.cmd._check_old_computes().code) + + def test_only_new_compute(self): + last_supported_version = service.SERVICE_VERSION_ALIASES[ + service.OLDEST_SUPPORTED_SERVICE_VERSION] + with mock.patch( + "nova.objects.service.get_minimum_version_all_cells", + return_value=last_supported_version): + self.assertEqual( + upgradecheck.Code.SUCCESS, self.cmd._check_old_computes().code) + + def test_old_compute(self): + too_old = service.SERVICE_VERSION_ALIASES[ + service.OLDEST_SUPPORTED_SERVICE_VERSION] - 1 + with mock.patch( + "nova.objects.service.get_minimum_version_all_cells", + return_value=too_old): + result = self.cmd._check_old_computes() + self.assertEqual(upgradecheck.Code.WARNING, result.code) diff -Nru nova-21.1.2/nova/tests/unit/compute/test_compute_api.py nova-21.2.0/nova/tests/unit/compute/test_compute_api.py --- nova-21.1.2/nova/tests/unit/compute/test_compute_api.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/compute/test_compute_api.py 2021-03-15 10:39:50.000000000 +0000 @@ -231,6 +231,8 @@ requested_networks=requested_networks, max_count=None) + @mock.patch('nova.objects.Quotas.get_all_by_project_and_user', + new=mock.MagicMock()) @mock.patch('nova.objects.Quotas.count_as_dict') @mock.patch('nova.objects.Quotas.limit_check') @mock.patch('nova.objects.Quotas.limit_check_project_and_user') @@ -4195,6 +4197,8 @@ self._test_check_injected_file_quota_onset_file_limit_exceeded, side_effect) + @mock.patch('nova.objects.Quotas.get_all_by_project_and_user', + new=mock.MagicMock()) @mock.patch('nova.objects.Quotas.count_as_dict') @mock.patch('nova.objects.Quotas.limit_check_project_and_user') @mock.patch('nova.objects.Instance.save') @@ -4219,9 +4223,11 @@ self.assertEqual(instance.task_state, task_states.RESTORING) # mock.ANY might be 'instances', 'cores', or 'ram' depending on how the # deltas dict is iterated in check_deltas + # user_id is expected to be None because no per-user quotas have been + # defined quota_count.assert_called_once_with(admin_context, mock.ANY, instance.project_id, - user_id=instance.user_id) + user_id=None) quota_check.assert_called_once_with( admin_context, user_values={'instances': 2, @@ -4230,9 +4236,11 @@ project_values={'instances': 2, 'cores': 1 + instance.flavor.vcpus, 'ram': 512 + instance.flavor.memory_mb}, - 
project_id=instance.project_id, user_id=instance.user_id) - update_qfd.assert_called_once_with(admin_context, instance, False) + project_id=instance.project_id) + update_qfd.assert_called_once_with(admin_context, instance.uuid, False) + @mock.patch('nova.objects.Quotas.get_all_by_project_and_user', + new=mock.MagicMock()) @mock.patch('nova.objects.Quotas.count_as_dict') @mock.patch('nova.objects.Quotas.limit_check_project_and_user') @mock.patch('nova.objects.Instance.save') @@ -4256,9 +4264,11 @@ self.assertEqual(instance.task_state, task_states.RESTORING) # mock.ANY might be 'instances', 'cores', or 'ram' depending on how the # deltas dict is iterated in check_deltas + # user_id is expected to be None because no per-user quotas have been + # defined quota_count.assert_called_once_with(self.context, mock.ANY, instance.project_id, - user_id=instance.user_id) + user_id=None) quota_check.assert_called_once_with( self.context, user_values={'instances': 2, @@ -4267,8 +4277,8 @@ project_values={'instances': 2, 'cores': 1 + instance.flavor.vcpus, 'ram': 512 + instance.flavor.memory_mb}, - project_id=instance.project_id, user_id=instance.user_id) - update_qfd.assert_called_once_with(self.context, instance, False) + project_id=instance.project_id) + update_qfd.assert_called_once_with(self.context, instance.uuid, False) @mock.patch.object(objects.InstanceAction, 'action_start') def test_external_instance_event(self, mock_action_start): @@ -6006,7 +6016,8 @@ inst = objects.Instance(uuid=uuid) im = objects.InstanceMapping(instance_uuid=uuid) mock_get.return_value = im - self.compute_api._update_queued_for_deletion(self.context, inst, True) + self.compute_api._update_queued_for_deletion( + self.context, inst.uuid, True) self.assertTrue(im.queued_for_delete) mock_get.assert_called_once_with(self.context, inst.uuid) mock_save.assert_called_once_with() @@ -7555,7 +7566,8 @@ version is not new enough. """ instance = objects.Instance( - project_id='fake-project', user_id='fake-user') + project_id='fake-project', user_id='fake-user', + uuid=uuids.instance) with mock.patch.object(self.context, 'can', return_value=True) as can: self.assertFalse(self.compute_api._allow_cross_cell_resize( self.context, instance)) can.assert_called_once() mock_get_min_ver.assert_called_once_with( self.context, ['nova-compute']) + @mock.patch('nova.network.neutron.API.get_requested_resource_for_instance', + return_value=[objects.RequestGroup()]) @mock.patch('nova.objects.service.get_minimum_version_all_cells', + return_value=compute_api.MIN_COMPUTE_CROSS_CELL_RESIZE) - def test_allow_cross_cell_resize_true(self, mock_get_min_ver): + def test_allow_cross_cell_resize_false_port_with_resource_req( + self, mock_get_min_ver, mock_get_res_req): + """Policy allows cross-cell resize and the compute service version is + new enough but the instance has a port with a resource request.
+ """ + instance = objects.Instance( + project_id='fake-project', user_id='fake-user', + uuid=uuids.instance) + with mock.patch.object(self.context, 'can', return_value=True) as can: + self.assertFalse(self.compute_api._allow_cross_cell_resize( + self.context, instance)) + can.assert_called_once() + mock_get_min_ver.assert_called_once_with( + self.context, ['nova-compute']) + mock_get_res_req.assert_called_once_with(self.context, uuids.instance) + + @mock.patch('nova.network.neutron.API.get_requested_resource_for_instance', + return_value=[]) + @mock.patch('nova.objects.service.get_minimum_version_all_cells', + return_value=compute_api.MIN_COMPUTE_CROSS_CELL_RESIZE) + def test_allow_cross_cell_resize_true( + self, mock_get_min_ver, mock_get_res_req): """Policy allows cross-cell resize and minimum nova-compute service version is new enough. """ instance = objects.Instance( - project_id='fake-project', user_id='fake-user') + project_id='fake-project', user_id='fake-user', + uuid=uuids.instance) with mock.patch.object(self.context, 'can', return_value=True) as can: self.assertTrue(self.compute_api._allow_cross_cell_resize( self.context, instance)) can.assert_called_once() mock_get_min_ver.assert_called_once_with( self.context, ['nova-compute']) + mock_get_res_req.assert_called_once_with(self.context, uuids.instance) def _test_block_accelerators(self, instance, args_info): @compute_api.block_accelerators diff -Nru nova-21.1.2/nova/tests/unit/compute/test_compute_mgr.py nova-21.2.0/nova/tests/unit/compute/test_compute_mgr.py --- nova-21.1.2/nova/tests/unit/compute/test_compute_mgr.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/compute/test_compute_mgr.py 2021-03-15 10:39:50.000000000 +0000 @@ -15,6 +15,7 @@ import contextlib import copy import datetime +import fixtures as std_fixtures import time from cinderclient import exceptions as cinder_exception @@ -1077,6 +1078,11 @@ "time this service is starting on this host, then you can ignore " "this warning.", 'fake-node1') + def test_init_host_disk_devices_configuration_failure(self): + self.flags(max_disk_devices_to_attach=0, group='compute') + self.assertRaises(exception.InvalidConfiguration, + self.compute.init_host) + @mock.patch.object(objects.InstanceList, 'get_by_host', new=mock.Mock()) @mock.patch('nova.compute.manager.ComputeManager.' 
@@ -3246,20 +3252,48 @@ mock_event.assert_called_once_with( self.context, 'compute_check_can_live_migrate_destination', CONF.host, instance.uuid, graceful_exit=False) + return result def test_check_can_live_migrate_destination_success(self): + self.useFixture(std_fixtures.MonkeyPatch( + 'nova.network.neutron.API.supports_port_binding_extension', + lambda *args: True)) self._test_check_can_live_migrate_destination() def test_check_can_live_migrate_destination_fail(self): + self.useFixture(std_fixtures.MonkeyPatch( + 'nova.network.neutron.API.supports_port_binding_extension', + lambda *args: True)) self.assertRaises( - test.TestingException, - self._test_check_can_live_migrate_destination, - do_raise=True) + test.TestingException, + self._test_check_can_live_migrate_destination, + do_raise=True) + + def test_check_can_live_migrate_destination_contins_vifs(self): + self.useFixture(std_fixtures.MonkeyPatch( + 'nova.network.neutron.API.supports_port_binding_extension', + lambda *args: True)) + migrate_data = self._test_check_can_live_migrate_destination() + self.assertIn('vifs', migrate_data) + self.assertIsNotNone(migrate_data.vifs) + + def test_check_can_live_migrate_destination_no_binding_extended(self): + self.useFixture(std_fixtures.MonkeyPatch( + 'nova.network.neutron.API.supports_port_binding_extension', + lambda *args: False)) + migrate_data = self._test_check_can_live_migrate_destination() + self.assertNotIn('vifs', migrate_data) def test_check_can_live_migrate_destination_src_numa_lm_false(self): + self.useFixture(std_fixtures.MonkeyPatch( + 'nova.network.neutron.API.supports_port_binding_extension', + lambda *args: True)) self._test_check_can_live_migrate_destination(src_numa_lm=False) def test_check_can_live_migrate_destination_src_numa_lm_true(self): + self.useFixture(std_fixtures.MonkeyPatch( + 'nova.network.neutron.API.supports_port_binding_extension', + lambda *args: True)) self._test_check_can_live_migrate_destination(src_numa_lm=True) def test_dest_can_numa_live_migrate(self): @@ -5180,18 +5214,21 @@ node = uuidutils.generate_uuid() # ironic node uuid instance = fake_instance.fake_instance_obj(self.context, node=node) instance.migration_context = None + migration = objects.Migration(status='accepted') with test.nested( mock.patch.object(self.compute, '_get_compute_info'), mock.patch.object(self.compute, '_do_rebuild_instance_with_claim'), mock.patch.object(objects.Instance, 'save'), - mock.patch.object(self.compute, '_set_migration_status'), - ) as (mock_get, mock_rebuild, mock_save, mock_set): - self.compute.rebuild_instance(self.context, instance, None, None, - None, None, None, None, False, - False, False, None, None, {}, None) + mock.patch.object(migration, 'save'), + ) as (mock_get, mock_rebuild, mock_save, mock_migration_save): + self.compute.rebuild_instance( + self.context, instance, None, None, + None, None, None, None, False, + False, False, migration, None, {}, None) self.assertFalse(mock_get.called) self.assertEqual(node, instance.node) - mock_set.assert_called_once_with(None, 'done') + self.assertEqual('done', migration.status) + mock_migration_save.assert_called_once_with() def test_rebuild_node_updated_if_recreate(self): dead_node = uuidutils.generate_uuid() @@ -5204,16 +5241,15 @@ with test.nested( mock.patch.object(self.compute, '_get_compute_info'), mock.patch.object(self.compute, '_do_rebuild_instance'), - mock.patch.object(objects.Instance, 'save'), - mock.patch.object(self.compute, '_set_migration_status'), - ) as (mock_get, mock_rebuild, mock_save, 
mock_set): + ) as (mock_get, mock_rebuild): mock_get.return_value.hypervisor_hostname = 'new-node' - self.compute.rebuild_instance(self.context, instance, None, None, - None, None, None, None, True, - False, False, None, None, {}, None) + self.compute.rebuild_instance( + self.context, instance, None, None, None, None, None, + None, True, False, False, mock.sentinel.migration, None, {}, + None) mock_get.assert_called_once_with(mock.ANY, self.compute.host) - self.assertEqual('new-node', instance.node) - mock_set.assert_called_once_with(None, 'done') + mock_rt.finish_evacuation.assert_called_once_with( + instance, 'new-node', mock.sentinel.migration) # Make sure the rebuild_claim was called with the proper image_meta # from the instance. mock_rt.rebuild_claim.assert_called_once() @@ -9103,12 +9139,18 @@ """Tests the various ways that _get_neutron_events_for_live_migration will return an empty list. """ + migration = mock.Mock() + migration.is_same_host = lambda: False + self.assertFalse(migration.is_same_host()) + # 1. no timeout self.flags(vif_plugging_timeout=0) with mock.patch.object(self.instance, 'get_network_info') as nw_info: nw_info.return_value = network_model.NetworkInfo( - [network_model.VIF(uuids.port1)]) + [network_model.VIF(uuids.port1, details={ + network_model.VIF_DETAILS_OVS_HYBRID_PLUG: True})]) + self.assertTrue(nw_info.return_value[0].is_hybrid_plug_enabled()) self.assertEqual( [], self.compute._get_neutron_events_for_live_migration( self.instance)) @@ -9117,7 +9159,18 @@ self.flags(vif_plugging_timeout=300) with mock.patch.object(self.instance, 'get_network_info') as nw_info: - nw_info.return_value = [] + nw_info.return_value = network_model.NetworkInfo([]) + self.assertEqual( + [], self.compute._get_neutron_events_for_live_migration( + self.instance)) + + # 3. no plug time events + with mock.patch.object(self.instance, 'get_network_info') as nw_info: + nw_info.return_value = network_model.NetworkInfo( + [network_model.VIF( + uuids.port1, details={ + network_model.VIF_DETAILS_OVS_HYBRID_PLUG: False})]) + self.assertFalse(nw_info.return_value[0].is_hybrid_plug_enabled()) self.assertEqual( [], self.compute._get_neutron_events_for_live_migration( self.instance)) @@ -9135,9 +9188,11 @@ wait_for_vif_plugged=True) mock_get_bdms.return_value = objects.BlockDeviceMappingList(objects=[]) mock_pre_live_mig.return_value = migrate_data + details = {network_model.VIF_DETAILS_OVS_HYBRID_PLUG: True} self.instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo([ - network_model.VIF(uuids.port1), network_model.VIF(uuids.port2) + network_model.VIF(uuids.port1, details=details), + network_model.VIF(uuids.port2, details=details) ])) self.compute._waiting_live_migrations[self.instance.uuid] = ( self.migration, mock.MagicMock() @@ -9167,11 +9222,12 @@ of not waiting. 
""" migrate_data = objects.LibvirtLiveMigrateData() + details = {network_model.VIF_DETAILS_OVS_HYBRID_PLUG: True} mock_get_bdms.return_value = objects.BlockDeviceMappingList(objects=[]) mock_pre_live_mig.return_value = migrate_data self.instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo([ - network_model.VIF(uuids.port1)])) + network_model.VIF(uuids.port1, details=details)])) self.compute._waiting_live_migrations[self.instance.uuid] = ( self.migration, mock.MagicMock() ) @@ -9315,9 +9371,11 @@ mock_get_bdms.return_value = source_bdms migrate_data = objects.LibvirtLiveMigrateData( wait_for_vif_plugged=True) + details = {network_model.VIF_DETAILS_OVS_HYBRID_PLUG: True} self.instance.info_cache = objects.InstanceInfoCache( network_info=network_model.NetworkInfo([ - network_model.VIF(uuids.port1), network_model.VIF(uuids.port2) + network_model.VIF(uuids.port1, details=details), + network_model.VIF(uuids.port2, details=details) ])) self.compute._waiting_live_migrations = {} fake_migration = objects.Migration( diff -Nru nova-21.1.2/nova/tests/unit/compute/test_compute.py nova-21.2.0/nova/tests/unit/compute/test_compute.py --- nova-21.1.2/nova/tests/unit/compute/test_compute.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/compute/test_compute.py 2021-03-15 10:39:50.000000000 +0000 @@ -18,6 +18,7 @@ """Tests for compute service.""" import datetime +import fixtures as std_fixtures from itertools import chain import operator import sys @@ -6178,13 +6179,19 @@ return fake_network.fake_get_instance_nw_info(self) self.stub_out('nova.network.neutron.API.get_instance_nw_info', stupid) - + self.useFixture( + std_fixtures.MonkeyPatch( + 'nova.network.neutron.API.supports_port_binding_extension', + lambda *args: True)) # creating instance testdata instance = self._create_fake_instance_obj({'host': 'dummy'}) c = context.get_admin_context() fake_notifier.NOTIFICATIONS = [] migrate_data = objects.LibvirtLiveMigrateData( is_shared_instance_path=False) + vifs = migrate_data_obj.VIFMigrateData.create_skeleton_migrate_vifs( + stupid()) + migrate_data.vifs = vifs mock_pre.return_value = migrate_data with mock.patch.object(self.compute.network_api, diff -Nru nova-21.1.2/nova/tests/unit/compute/test_compute_utils.py nova-21.2.0/nova/tests/unit/compute/test_compute_utils.py --- nova-21.1.2/nova/tests/unit/compute/test_compute_utils.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/compute/test_compute_utils.py 2021-03-15 10:39:50.000000000 +0000 @@ -1398,6 +1398,46 @@ else: self.fail("Exception not raised") + @mock.patch('nova.objects.Quotas.get_all_by_project_and_user') + @mock.patch('nova.objects.Quotas.check_deltas') + def test_check_num_instances_omits_user_if_no_user_quota(self, mock_check, + mock_get): + # Return no per-user quota. + mock_get.return_value = {'project_id': self.context.project_id, + 'user_id': self.context.user_id} + fake_flavor = objects.Flavor(vcpus=1, memory_mb=512) + compute_utils.check_num_instances_quota( + self.context, fake_flavor, 1, 1) + deltas = {'instances': 1, 'cores': 1, 'ram': 512} + # Verify that user_id has not been passed along to scope the resource + # counting. 
+ mock_check.assert_called_once_with( + self.context, deltas, self.context.project_id, user_id=None, + check_project_id=self.context.project_id, check_user_id=None) + + @mock.patch('nova.objects.Quotas.get_all_by_project_and_user') + @mock.patch('nova.objects.Quotas.check_deltas') + def test_check_num_instances_passes_user_if_user_quota(self, mock_check, + mock_get): + for resource in ['instances', 'cores', 'ram']: + # Return some per-user quota for each of the instance-related + # resources. + mock_get.return_value = {'project_id': self.context.project_id, + 'user_id': self.context.user_id, + resource: 5} + fake_flavor = objects.Flavor(vcpus=1, memory_mb=512) + compute_utils.check_num_instances_quota( + self.context, fake_flavor, 1, 1) + deltas = {'instances': 1, 'cores': 1, 'ram': 512} + # Verify that user_id is passed along to scope the resource + # counting and limit checking. + mock_check.assert_called_once_with( + self.context, deltas, self.context.project_id, + user_id=self.context.user_id, + check_project_id=self.context.project_id, + check_user_id=self.context.user_id) + mock_check.reset_mock() + class IsVolumeBackedInstanceTestCase(test.TestCase): def setUp(self): diff -Nru nova-21.1.2/nova/tests/unit/db/test_db_api.py nova-21.2.0/nova/tests/unit/db/test_db_api.py --- nova-21.1.2/nova/tests/unit/db/test_db_api.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/db/test_db_api.py 2021-03-15 10:39:50.000000000 +0000 @@ -1693,6 +1693,14 @@ sys_meta = utils.metadata_to_dict(inst['system_metadata']) self.assertEqual(sys_meta, self.sample_data['system_metadata']) + def test_instance_get_with_meta(self): + inst_id = self.create_instance_with_args().id + inst = db.instance_get(self.ctxt, inst_id) + meta = utils.metadata_to_dict(inst['metadata']) + self.assertEqual(meta, self.sample_data['metadata']) + sys_meta = utils.metadata_to_dict(inst['system_metadata']) + self.assertEqual(sys_meta, self.sample_data['system_metadata']) + def test_instance_update(self): instance = self.create_instance_with_args() metadata = {'host': 'bar', 'key2': 'wuff'} diff -Nru nova-21.1.2/nova/tests/unit/test_fixtures.py nova-21.2.0/nova/tests/unit/test_fixtures.py --- nova-21.1.2/nova/tests/unit/test_fixtures.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/test_fixtures.py 2021-03-15 10:39:50.000000000 +0000 @@ -88,6 +88,7 @@ class TestOSAPIFixture(testtools.TestCase): @mock.patch('nova.objects.Service.get_by_host_and_binary') @mock.patch('nova.objects.Service.create') + @mock.patch('nova.utils.raise_if_old_compute', new=mock.Mock()) def test_responds_to_version(self, mock_service_create, mock_get): """Ensure the OSAPI server responds to calls sensibly.""" self.useFixture(output.CaptureOutput()) diff -Nru nova-21.1.2/nova/tests/unit/test_service.py nova-21.2.0/nova/tests/unit/test_service.py --- nova-21.1.2/nova/tests/unit/test_service.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/test_service.py 2021-03-15 10:39:50.000000000 +0000 @@ -268,6 +268,24 @@ serv.reset() mock_reset.assert_called_once_with() + @mock.patch('nova.conductor.api.API.wait_until_ready') + @mock.patch('nova.utils.raise_if_old_compute') + def test_old_compute_version_check_happens_after_wait_for_conductor( + self, mock_check_old, mock_wait): + obj_base.NovaObject.indirection_api = mock.MagicMock() + + def fake_wait(*args, **kwargs): + mock_check_old.assert_not_called() + + mock_wait.side_effect = fake_wait + + service.Service.create( + self.host, self.binary, self.topic, + 
'nova.tests.unit.test_service.FakeManager') + + mock_check_old.assert_called_once_with() + mock_wait.assert_called_once_with(mock.ANY) + class TestWSGIService(test.NoDBTestCase): diff -Nru nova-21.1.2/nova/tests/unit/test_utils.py nova-21.2.0/nova/tests/unit/test_utils.py --- nova-21.1.2/nova/tests/unit/test_utils.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/test_utils.py 2021-03-15 10:39:50.000000000 +0000 @@ -38,6 +38,7 @@ from nova import exception from nova.objects import base as obj_base from nova.objects import instance as instance_obj +from nova.objects import service as service_obj from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit.objects import test_objects @@ -1239,3 +1240,89 @@ self.mock_get_confgrp.assert_called_once_with(self.service_type) self.mock_connection.assert_not_called() self.mock_get_auth_sess.assert_not_called() + + +class TestOldComputeCheck(test.NoDBTestCase): + + @mock.patch('nova.objects.service.get_minimum_version_all_cells') + def test_no_compute(self, mock_get_min_service): + mock_get_min_service.return_value = 0 + + utils.raise_if_old_compute() + + mock_get_min_service.assert_called_once_with( + mock.ANY, ['nova-compute']) + + @mock.patch('nova.objects.service.get_minimum_version_all_cells') + def test_old_but_supported_compute(self, mock_get_min_service): + oldest = service_obj.SERVICE_VERSION_ALIASES[ + service_obj.OLDEST_SUPPORTED_SERVICE_VERSION] + mock_get_min_service.return_value = oldest + + utils.raise_if_old_compute() + + mock_get_min_service.assert_called_once_with( + mock.ANY, ['nova-compute']) + + @mock.patch('nova.objects.service.get_minimum_version_all_cells') + def test_new_compute(self, mock_get_min_service): + mock_get_min_service.return_value = service_obj.SERVICE_VERSION + + utils.raise_if_old_compute() + + mock_get_min_service.assert_called_once_with( + mock.ANY, ['nova-compute']) + + @mock.patch('nova.objects.service.Service.get_minimum_version') + def test_too_old_compute_cell(self, mock_get_min_service): + self.flags(group='api_database', connection=None) + oldest = service_obj.SERVICE_VERSION_ALIASES[ + service_obj.OLDEST_SUPPORTED_SERVICE_VERSION] + mock_get_min_service.return_value = oldest - 1 + + ex = self.assertRaises( + exception.TooOldComputeService, utils.raise_if_old_compute) + + self.assertIn('cell', str(ex)) + mock_get_min_service.assert_called_once_with(mock.ANY, 'nova-compute') + + @mock.patch('nova.objects.service.get_minimum_version_all_cells') + def test_too_old_compute_top_level(self, mock_get_min_service): + self.flags(group='api_database', connection='fake db connection') + oldest = service_obj.SERVICE_VERSION_ALIASES[ + service_obj.OLDEST_SUPPORTED_SERVICE_VERSION] + mock_get_min_service.return_value = oldest - 1 + + ex = self.assertRaises( + exception.TooOldComputeService, utils.raise_if_old_compute) + + self.assertIn('system', str(ex)) + mock_get_min_service.assert_called_once_with( + mock.ANY, ['nova-compute']) + + @mock.patch.object(utils.LOG, 'warning') + @mock.patch('nova.objects.service.Service.get_minimum_version') + @mock.patch('nova.objects.service.get_minimum_version_all_cells') + def test_api_db_is_configured_but_the_service_cannot_access_db( + self, mock_get_all, mock_get, mock_warn): + # This is the case when the nova-compute service is wrongly configured + # with db credentials but nova-compute is never allowed to access the + # db directly. 
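The scenario set up here mirrors the fallback added to nova/utils.py later in this diff: attempt the cross-cell query, and if the service is not allowed to touch the API database, warn and fall back to a local-cell check. A stripped-down sketch of that control flow (the names here are illustrative, not nova's):

import logging

LOG = logging.getLogger(__name__)

class DBNotAllowed(Exception):
    pass

def minimum_compute_version(global_lookup, local_lookup):
    try:
        return 'system', global_lookup()
    except DBNotAllowed:
        # Configured for the API DB but not allowed to use it: stay local.
        LOG.warning('Falling back to a local-cell service version check.')
        return 'cell', local_lookup()

def blocked():
    raise DBNotAllowed()

assert minimum_compute_version(lambda: 52, lambda: 40) == ('system', 52)
assert minimum_compute_version(blocked, lambda: 40) == ('cell', 40)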
+ mock_get_all.side_effect = exception.DBNotAllowed( + binary='nova-compute') + + oldest = service_obj.SERVICE_VERSION_ALIASES[ + service_obj.OLDEST_SUPPORTED_SERVICE_VERSION] + mock_get.return_value = oldest - 1 + + ex = self.assertRaises( + exception.TooOldComputeService, utils.raise_if_old_compute) + + self.assertIn('cell', str(ex)) + mock_get_all.assert_called_once_with(mock.ANY, ['nova-compute']) + mock_get.assert_called_once_with(mock.ANY, 'nova-compute') + mock_warn.assert_called_once_with( + 'This service is configured for access to the API database but is ' + 'not allowed to directly access the database. You should run this ' + 'service without the [api_database]/connection config option. The ' + 'service version check will only query the local cell.') diff -Nru nova-21.1.2/nova/tests/unit/virt/libvirt/fake_imagebackend.py nova-21.2.0/nova/tests/unit/virt/libvirt/fake_imagebackend.py --- nova-21.1.2/nova/tests/unit/virt/libvirt/fake_imagebackend.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/virt/libvirt/fake_imagebackend.py 2021-03-15 10:39:50.000000000 +0000 @@ -184,11 +184,17 @@ # class. image_init.SUPPORTS_CLONE = False - # Ditto for the 'is_shared_block_storage' function + # Ditto for the 'is_shared_block_storage' function and + # 'is_file_in_instance_path' def is_shared_block_storage(): return False + def is_file_in_instance_path(): + return False + setattr(image_init, 'is_shared_block_storage', is_shared_block_storage) + setattr(image_init, 'is_file_in_instance_path', + is_file_in_instance_path) return image_init diff -Nru nova-21.1.2/nova/tests/unit/virt/libvirt/fakelibvirt.py nova-21.2.0/nova/tests/unit/virt/libvirt/fakelibvirt.py --- nova-21.1.2/nova/tests/unit/virt/libvirt/fakelibvirt.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/virt/libvirt/fakelibvirt.py 2021-03-15 10:39:50.000000000 +0000 @@ -836,6 +836,7 @@ self._has_saved_state = False self._snapshots = {} self._id = self._connection._id_counter + self._job_type = VIR_DOMAIN_JOB_UNBOUNDED def _parse_definition(self, xml): try: @@ -1300,7 +1301,17 @@ return [0] * 12 def jobStats(self, flags=0): - return {} + # NOTE(artom) By returning VIR_DOMAIN_JOB_UNBOUNDED, we're pretending a + # job is constantly running. Tests are expected to call the + # complete_job or fail_job methods when they're ready for jobs (read: + # live migrations) to "complete". + return {'type': self._job_type} + + def complete_job(self): + self._job_type = VIR_DOMAIN_JOB_COMPLETED + + def fail_job(self): + self._job_type = VIR_DOMAIN_JOB_FAILED def injectNMI(self, flags=0): return 0 @@ -1762,6 +1773,16 @@ virNWFilter = NWFilter +# A private libvirt-python class and global only provided here for testing to +# ensure it's not returned by libvirt.host.Host.get_libvirt_proxy_classes. +class FakeHandler(object): + def __init__(self): + pass + + +_EventAddHandleFunc = FakeHandler + + class FakeLibvirtFixture(fixtures.Fixture): """Performs global setup/stubbing for all libvirt tests. 
""" diff -Nru nova-21.1.2/nova/tests/unit/virt/libvirt/test_host.py nova-21.2.0/nova/tests/unit/virt/libvirt/test_host.py --- nova-21.1.2/nova/tests/unit/virt/libvirt/test_host.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/virt/libvirt/test_host.py 2021-03-15 10:39:50.000000000 +0000 @@ -1322,8 +1322,9 @@ self.assertIn(fakelibvirt.virSecret, proxy_classes) self.assertIn(fakelibvirt.virNWFilter, proxy_classes) - # Assert that we filtered out libvirtError + # Assert that we filtered out libvirtError and any private classes self.assertNotIn(fakelibvirt.libvirtError, proxy_classes) + self.assertNotIn(fakelibvirt._EventAddHandleFunc, proxy_classes) def test_tpool_get_connection(self): # Test that Host.get_connection() returns a tpool.Proxy diff -Nru nova-21.1.2/nova/tests/unit/virt/test_virt_drivers.py nova-21.2.0/nova/tests/unit/virt/test_virt_drivers.py --- nova-21.1.2/nova/tests/unit/virt/test_virt_drivers.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/tests/unit/virt/test_virt_drivers.py 2021-03-15 10:39:50.000000000 +0000 @@ -39,6 +39,7 @@ from nova.tests.unit import fake_block_device from nova.tests.unit.image import fake as fake_image from nova.tests.unit import utils as test_utils +from nova.tests.unit.virt.libvirt import fakelibvirt from nova.virt import block_device as driver_block_device from nova.virt import event as virtevent from nova.virt import fake @@ -593,6 +594,10 @@ self.assertIn('username', console_pool) self.assertIn('password', console_pool) + @mock.patch( + 'nova.tests.unit.virt.libvirt.fakelibvirt.Domain.jobStats', + new=mock.Mock(return_value={ + 'type': fakelibvirt.VIR_DOMAIN_JOB_COMPLETED})) def test_live_migration(self): instance_ref, network_info = self._get_running_instance() fake_context = context.RequestContext('fake', 'fake') diff -Nru nova-21.1.2/nova/utils.py nova-21.2.0/nova/utils.py --- nova-21.1.2/nova/utils.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/utils.py 2021-03-15 10:39:50.000000000 +0000 @@ -1145,3 +1145,53 @@ norm_name = norm_name.upper() norm_name = orc.CUSTOM_NAMESPACE + norm_name return norm_name + + +def raise_if_old_compute(): + # to avoid circular imports + from nova import context as nova_context + from nova.objects import service + + ctxt = nova_context.get_admin_context() + + if CONF.api_database.connection is not None: + scope = 'system' + try: + current_service_version = service.get_minimum_version_all_cells( + ctxt, ['nova-compute']) + except exception.DBNotAllowed: + # This most likely means we are in a nova-compute service + # which is configured with a connection to the API database. + # We should not be attempting to "get out" of our cell to + # look at the minimum versions of nova-compute services in other + # cells, so DBNotAllowed was raised. Leave a warning message + # and fall back to only querying computes in our cell. + LOG.warning( + 'This service is configured for access to the API database ' + 'but is not allowed to directly access the database. You ' + 'should run this service without the ' + '[api_database]/connection config option. 
The service version ' + 'check will only query the local cell.') + scope = 'cell' + current_service_version = service.Service.get_minimum_version( + ctxt, 'nova-compute') + else: + scope = 'cell' + # We in a cell so target our query to the current cell only + current_service_version = service.Service.get_minimum_version( + ctxt, 'nova-compute') + + if current_service_version == 0: + # 0 means no compute in the system, + # probably a fresh install before the computes are registered + return + + oldest_supported_service_level = service.SERVICE_VERSION_ALIASES[ + service.OLDEST_SUPPORTED_SERVICE_VERSION] + + if current_service_version < oldest_supported_service_level: + raise exception.TooOldComputeService( + oldest_supported_version=service.OLDEST_SUPPORTED_SERVICE_VERSION, + scope=scope, + min_service_level=current_service_version, + oldest_supported_service=oldest_supported_service_level) diff -Nru nova-21.1.2/nova/virt/libvirt/host.py nova-21.2.0/nova/virt/libvirt/host.py --- nova-21.1.2/nova/virt/libvirt/host.py 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/nova/virt/libvirt/host.py 2021-03-15 10:39:50.000000000 +0000 @@ -124,15 +124,15 @@ @staticmethod def _get_libvirt_proxy_classes(libvirt_module): """Return a tuple for tpool.Proxy's autowrap argument containing all - classes defined by the libvirt module except libvirtError. + public vir* classes defined by the libvirt module. """ # Get a list of (name, class) tuples of libvirt classes classes = inspect.getmembers(libvirt_module, inspect.isclass) - # Return a list of just the classes, filtering out libvirtError because - # we don't need to proxy that - return tuple([cls[1] for cls in classes if cls[0] != 'libvirtError']) + # Return a list of just the vir* classes, filtering out libvirtError + # and any private globals pointing at private internal classes. 
+        return tuple([cls[1] for cls in classes if cls[0].startswith("vir")])
 
     def _wrap_libvirt_proxy(self, obj):
         """Return an object wrapped in a tpool.Proxy using autowrap appropriate
diff -Nru nova-21.1.2/nova.egg-info/pbr.json nova-21.2.0/nova.egg-info/pbr.json
--- nova-21.1.2/nova.egg-info/pbr.json	2021-02-04 11:29:09.000000000 +0000
+++ nova-21.2.0/nova.egg-info/pbr.json	2021-03-15 10:40:34.000000000 +0000
@@ -1 +1 @@
-{"git_version": "911066fe3c", "is_release": true}
\ No newline at end of file
+{"git_version": "8cbb35521a", "is_release": true}
\ No newline at end of file
diff -Nru nova-21.1.2/nova.egg-info/PKG-INFO nova-21.2.0/nova.egg-info/PKG-INFO
--- nova-21.1.2/nova.egg-info/PKG-INFO	2021-02-04 11:29:09.000000000 +0000
+++ nova-21.2.0/nova.egg-info/PKG-INFO	2021-03-15 10:40:34.000000000 +0000
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: nova
-Version: 21.1.2
+Version: 21.2.0
 Summary: Cloud computing fabric controller
 Home-page: https://docs.openstack.org/nova/latest/
 Author: OpenStack
diff -Nru nova-21.1.2/nova.egg-info/SOURCES.txt nova-21.2.0/nova.egg-info/SOURCES.txt
--- nova-21.1.2/nova.egg-info/SOURCES.txt	2021-02-04 11:29:10.000000000 +0000
+++ nova-21.2.0/nova.egg-info/SOURCES.txt	2021-03-15 10:40:35.000000000 +0000
@@ -2821,9 +2821,13 @@
 nova/tests/functional/regressions/test_bug_1852458.py
 nova/tests/functional/regressions/test_bug_1862633.py
 nova/tests/functional/regressions/test_bug_1879878.py
+nova/tests/functional/regressions/test_bug_1888395.py
 nova/tests/functional/regressions/test_bug_1889108.py
+nova/tests/functional/regressions/test_bug_1893284.py
 nova/tests/functional/regressions/test_bug_1894966.py
 nova/tests/functional/regressions/test_bug_1895696.py
+nova/tests/functional/regressions/test_bug_1896463.py
+nova/tests/functional/regressions/test_bug_1914777.py
 nova/tests/functional/wsgi/__init__.py
 nova/tests/functional/wsgi/test_flavor_manage.py
 nova/tests/functional/wsgi/test_interfaces.py
@@ -4246,6 +4250,7 @@
 releasenotes/notes/reset-marker-for-map_instances-0c841ef45e3adc7b.yaml
 releasenotes/notes/resize-api-cast-always-7eb1dbef8f7fe228.yaml
 releasenotes/notes/resource_providers_scheduler_db_filters-16b2ed3da00c51dd.yaml
+releasenotes/notes/restore-rocky-portbinding-semantics-48e9b1fa969cc5e9.yaml
 releasenotes/notes/retrieve_physical_network_from_multi-segment-eec5a490c1ed8739.yaml
 releasenotes/notes/return-uuid-attribute-for-aggregates-70d9f733f86fb1a3.yaml
 releasenotes/notes/reworked-nova-manage-db-commands-b958b0a41a4004a6.yaml
@@ -4363,6 +4368,7 @@
 releasenotes/notes/vmware_limits-16edee7a9ad023bc.yaml
 releasenotes/notes/volume-attach-versioned-notifications-ef5afde3a5f6a749.yaml
 releasenotes/notes/vrouter-hw-offloads-38257f49ac1d3a60.yaml
+releasenotes/notes/warn-when-services-started-with-old-compute-fc80b4ff58a2aaea.yaml
 releasenotes/notes/websocket-proxy-to-host-security-c3eca0647b0cbc02.yaml
 releasenotes/notes/workarounds-enable-consoleauth-71d68c3879dc2c8a.yaml
 releasenotes/notes/workarounds-libvirt-disable-native-luks-a4eccca8019db243.yaml
diff -Nru nova-21.1.2/PKG-INFO nova-21.2.0/PKG-INFO
--- nova-21.1.2/PKG-INFO	2021-02-04 11:29:11.682024500 +0000
+++ nova-21.2.0/PKG-INFO	2021-03-15 10:40:36.351992600 +0000
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: nova
-Version: 21.1.2
+Version: 21.2.0
 Summary: Cloud computing fabric controller
 Home-page: https://docs.openstack.org/nova/latest/
 Author: OpenStack
diff -Nru nova-21.1.2/releasenotes/notes/cros-scell-resize-not-supported-with-ports-having-resource-request-a8e1029ef5983793.yaml nova-21.2.0/releasenotes/notes/cros-scell-resize-not-supported-with-ports-having-resource-request-a8e1029ef5983793.yaml
--- nova-21.1.2/releasenotes/notes/cros-scell-resize-not-supported-with-ports-having-resource-request-a8e1029ef5983793.yaml	2021-02-04 11:28:30.000000000 +0000
+++ nova-21.2.0/releasenotes/notes/cros-scell-resize-not-supported-with-ports-having-resource-request-a8e1029ef5983793.yaml	2021-03-15 10:39:50.000000000 +0000
@@ -4,4 +4,6 @@
     When the tempest test coverage was added for resize and cold migrate with
     neutron ports having QoS minimum bandwidth policy rules we discovered
     that the cross cell resize code path cannot handle such ports.
-    See bug https://bugs.launchpad.net/nova/+bug/1907522 for details.
+    See bug https://bugs.launchpad.net/nova/+bug/1907522 for details. A fix
+    was implemented that makes sure that Nova falls back to same-cell resize if
+    the server has such ports.
diff -Nru nova-21.1.2/releasenotes/notes/restore-rocky-portbinding-semantics-48e9b1fa969cc5e9.yaml nova-21.2.0/releasenotes/notes/restore-rocky-portbinding-semantics-48e9b1fa969cc5e9.yaml
--- nova-21.1.2/releasenotes/notes/restore-rocky-portbinding-semantics-48e9b1fa969cc5e9.yaml	1970-01-01 00:00:00.000000000 +0000
+++ nova-21.2.0/releasenotes/notes/restore-rocky-portbinding-semantics-48e9b1fa969cc5e9.yaml	2021-03-15 10:39:50.000000000 +0000
@@ -0,0 +1,14 @@
+---
+fixes:
+  - |
+    In the Rocky (18.0.0) release support was added to nova to use neutron's
+    multiple port binding feature when the binding-extended API extension
+    is available. In the Train (20.0.0) release the SR-IOV live migration
+    feature broke the semantics of the vifs field in the ``migration_data``
+    object that signals if the new multiple port binding workflow should
+    be used by always populating it even when the ``binding-extended`` API
+    extension is not present. This broke live migration for any deployment
+    that did not support the optional ``binding-extended`` API extension.
+    The Rocky behavior has now been restored enabling live migration
+    using the single port binding workflow when multiple port bindings
+    are not available.
diff -Nru nova-21.1.2/releasenotes/notes/warn-when-services-started-with-old-compute-fc80b4ff58a2aaea.yaml nova-21.2.0/releasenotes/notes/warn-when-services-started-with-old-compute-fc80b4ff58a2aaea.yaml
--- nova-21.1.2/releasenotes/notes/warn-when-services-started-with-old-compute-fc80b4ff58a2aaea.yaml	1970-01-01 00:00:00.000000000 +0000
+++ nova-21.2.0/releasenotes/notes/warn-when-services-started-with-old-compute-fc80b4ff58a2aaea.yaml	2021-03-15 10:39:50.000000000 +0000
@@ -0,0 +1,9 @@
+---
+upgrade:
+  - |
+    Nova services only support old computes if the compute is not
+    older than the previous major nova release. From now on nova services will
+    emit a warning at startup if the deployment contains too old compute
+    services. From the 23.0.0 (Wallaby) release nova services will refuse to
+    start if the deployment contains too old compute services to prevent
+    compatibility issues.
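
Illustrative sketch (not part of the upstream diff): the new ``nova.utils.raise_if_old_compute()`` helper shown above raises ``exception.TooOldComputeService`` when the deployment contains nova-compute services older than the oldest supported service version. The following minimal wrapper shows how that check could be run on its own; the ``main()`` scaffolding, log message and exit codes are illustrative assumptions, not nova code, and it presumes a working nova configuration so the helper can reach the service records.

    # Illustrative only -- not part of nova or of this diff.
    # Assumes nova is installed and nova.conf is readable so that the
    # version lookups in raise_if_old_compute() can query the database.
    import sys

    from oslo_log import log as logging

    from nova import config
    from nova import exception
    from nova import utils

    LOG = logging.getLogger(__name__)


    def main():
        # Parse nova.conf the same way other nova entry points do.
        config.parse_args(sys.argv)
        try:
            utils.raise_if_old_compute()
        except exception.TooOldComputeService as exc:
            # From 23.0.0 (Wallaby) nova services refuse to start in this
            # situation; here we simply report it and exit non-zero.
            LOG.error('Deployment contains too-old computes: %s', exc)
            return 1
        return 0


    if __name__ == '__main__':
        sys.exit(main())
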
diff -Nru nova-21.1.2/tools/check-cherry-picks.sh nova-21.2.0/tools/check-cherry-picks.sh --- nova-21.1.2/tools/check-cherry-picks.sh 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/tools/check-cherry-picks.sh 2021-03-15 10:39:50.000000000 +0000 @@ -4,6 +4,11 @@ # to verify that they're all on either master or stable/ branches # +# Allow this script to be disabled by a simple env var +if [ ${DISABLE_CHERRY_PICK_CHECK:-0} -eq 1 ]; then + exit 0 +fi + commit_hash="" # Check if the patch is a merge patch by counting the number of parents. diff -Nru nova-21.1.2/tox.ini nova-21.2.0/tox.ini --- nova-21.1.2/tox.ini 2021-02-04 11:28:30.000000000 +0000 +++ nova-21.2.0/tox.ini 2021-03-15 10:39:50.000000000 +0000 @@ -42,6 +42,8 @@ description = Run style checks. envdir = {toxworkdir}/shared +passenv = + DISABLE_CHERRY_PICK_CHECK commands = bash tools/flake8wrap.sh {posargs} # Check that all JSON files don't have \r\n in line.
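
Usage sketch (not part of the upstream diff): taken together, the env var guard added to ``tools/check-cherry-picks.sh`` and the new ``passenv`` entry in ``tox.ini`` suggest the cherry-pick check can be skipped for a local style run roughly as follows; the ``pep8`` environment name is an assumption based on the flake8 wrapper shown in the hunk above.

    # Skip the cherry-pick hash check when running the style checks via tox;
    # tox forwards the variable because it is now listed under passenv.
    DISABLE_CHERRY_PICK_CHECK=1 tox -e pep8

    # Or when invoking the script directly from a git checkout:
    DISABLE_CHERRY_PICK_CHECK=1 ./tools/check-cherry-picks.sh
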