diff -Nru nova-16.1.0/api-ref/source/os-volume-attachments.inc nova-16.1.2/api-ref/source/os-volume-attachments.inc --- nova-16.1.0/api-ref/source/os-volume-attachments.inc 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/api-ref/source/os-volume-attachments.inc 2018-04-25 09:22:33.000000000 +0000 @@ -137,6 +137,9 @@ Update a volume attachment. +.. note:: This action only valid when the server is in ACTIVE, PAUSED and RESIZED state, + or a conflict(409) error will be returned. + Normal response codes: 202 Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404), conflict(409) diff -Nru nova-16.1.0/AUTHORS nova-16.1.2/AUTHORS --- nova-16.1.0/AUTHORS 2018-02-15 23:58:24.000000000 +0000 +++ nova-16.1.2/AUTHORS 2018-04-25 09:25:38.000000000 +0000 @@ -68,6 +68,7 @@ Allen Gao Alvaro Lopez Garcia Amandeep +Ameed Ashour Ameed Ashour Amir Sadoughi Amrith Kumar @@ -385,6 +386,7 @@ Eric Guo Eric Harney Eric Harney +Eric M Gonzalez Eric Windisch Eric Windisch Erik Berg @@ -744,6 +746,7 @@ Mario Villaplana Maris Fogels Mark Doffman +Mark Giles Mark Goddard Mark McClain Mark McLoughlin @@ -787,6 +790,7 @@ Md Nadeem Mehdi Abaakouk Mehdi Abaakouk +Mehdi Abaakouk Melanie Witt Michael Bayer Michael Davies @@ -1046,6 +1050,7 @@ Shih-Hao Li Shilla Saebi Shlomi Sasson +Shoham Peller Shraddha Pandhe Shraddha Pandhe Shuangtai Tian @@ -1199,6 +1204,7 @@ Waldemar Znoinski Walter A. Boring IV Wangpan +Wangpan Wanlong Gao Wei Jiangang Wen Zhi Yu diff -Nru nova-16.1.0/ChangeLog nova-16.1.2/ChangeLog --- nova-16.1.0/ChangeLog 2018-02-15 23:58:22.000000000 +0000 +++ nova-16.1.2/ChangeLog 2018-04-25 09:25:35.000000000 +0000 @@ -1,9 +1,58 @@ CHANGES ======= +16.1.2 +------ + +* libvirt: disconnect volume from host during detach +* only increment disk address unit for scsi devices +* libvirt: Report the allocated size of preallocated file based disks +* libvirt: Block swap volume attempts with encrypted volumes prior to Queens +* ironic: Get correct inventory for deployed node +* Don't persist RequestSpec.retry +* Add regression test for persisted RequestSpec.retry from failed resize +* libvirt: Allow to specify granular CPU feature flags +* Fix wrapping of neutron forbidden error + +16.1.1 +------ + +* Only attempt a rebuild claim for an evacuation to a new host +* add check before adding cpus to cpuset\_reserved +* Allow force-delete even if task\_state is not None +* Move \_make\_instance\_list call outside of DB transaction context +* Add functional regression test for bug 1746509 +* Always deallocate networking before reschedule if using Neutron +* Add --by-service to discover\_hosts +* Revert "Refine waiting for vif plug events during \_hard\_reboot" +* docs: Disable smartquotes +* Avoid exploding if guest refuses to detach a volume +* Return 400 when compute host is not found +* Save admin password to sysmeta in libvirt driver +* Remove osprofiler tests +* compute: Cleans up allocations after failed resize +* Handle spawning error on unshelving +* libvirt: mask InjectionInfo.admin\_pass +* Ensure attachment\_id always exists for block device mapping +* Add functional test for deleting BFV server with old attach flow +* Clean up ports and volumes when deleting ERROR instance +* Add functional tests to ensure BDM removal on delete +* Store block device mappings in cell0 +* Drop extra loop which modifies Cinder volume status +* Lazy-load instance attributes with read\_deleted=yes +* unquiesce instance on volume snapshot failure +* Fix SUSE Install Guide: Placement port +* Detach 
volumes when VM creation fails + 16.1.0 ------ +* Fix docs for IsolatedHostsFilter +* Handle volume-backed instances in IsolatedHostsFilter +* Add regression test for BFV+IsolatedHostsFilter failure +* Do not attempt volume swap when guest is stopped/suspended +* doc: fix the link for the evacuate cli +* Refine waiting for vif plug events during \_hard\_reboot * Add release note for Aggregate[Core|Ram|Disk]Filter change * Fix wrong link for "Manage Flavors" in CPU topologies doc * live-mig: keep disk device address same @@ -14,8 +63,10 @@ * Don't wait for vif plug events during \_hard\_reboot * Migrate "launch instance" user guide docs * doc: Add user index page +* Don't launch guestfs in a thread pool if guestfs.debug is enabled * Bumping functional test job timeouts * Stop globally caching host states in scheduler HostManager +* Rollback instance.image\_ref on failed rebuild * Handle images with no data * tests: Use correct response type in tests * Make sure that functional test triggered on sample changes @@ -25,11 +76,14 @@ * Fixes 'Not enough available memory' log message * Fix format in live-migration-usage.rst * Add 'delete\_host' command in 'nova-manage cell\_v2' +* Handle glance exception during rotating instance backup * libvirt: Re-initialise volumes, encryptors, and vifs on hard reboot * libvirt: use 'host-passthrough' as default on AArch64 +* Fix possible TypeError in VIF.fixed\_ips * Use UEFI as the default boot for AArch64 * doc: Add configuration index page * Raise MarkerNotFound if BuildRequestList.get\_by\_filters doesn't find marker +* Do not set allocation.id in AllocationList.create\_all() * Don't try to delete build request during a reschedule * Fix an error in \_get\_host\_states when deleting a compute node * Add missing unit tests for FilterScheduler.\_get\_all\_host\_states @@ -44,9 +98,11 @@ 16.0.4 ------ +* Re-use existing ComputeNode on ironic rebalance * Only log not correcting allocation once per period * Fix NoneType error when [service\_user] is misconfigured * Fix 'force' parameter in os-quota-sets PUT schema +* [placement] re-use existing conf with auth token middleware * [placement] Fix foreign key constraint error * Fix doubling allocations on rebuild * Add regression test for rebuild with new image doubling allocations @@ -57,6 +113,7 @@ * Fix ValueError if invalid max\_rows passed to db purge * Mention API behavior change when over quota limit * Downgrade log for keystone verify client fail +* Proper error handling by \_ensure\_resource\_provider * Vzstorage: synchronize volume connect * Fix TypeError of \_get\_project\_id when project\_id is None * Fix incorrect known vcpuset when CPUPinningUnknown raised @@ -26298,8 +26355,8 @@ * Casting to the scheduler * moves driver.init\_host into the base class so it happens before floating forwards and sets up proper iptables chains -2011.1rc1 ---------- +2011.1 +------ * Set FINAL = True in version.py * Open Cactus development @@ -28069,6 +28126,10 @@ * Another pep8 cleanup branch for nova/tests, should be merged after lp:~eday/nova/pep8-fixes-other. After this, the pep8 violation count is 0! 
* Changes block size for dd to a reasonable number * Another pep8 cleanup branch for nova/api, should be merged after lp:~eday/nova/pep8-fixes + +2010.1 +------ + * Created Authors file * Actually adding Authors file * Created Authors file and added to manifest for Austin Release diff -Nru nova-16.1.0/debian/changelog nova-16.1.2/debian/changelog --- nova-16.1.0/debian/changelog 2018-03-22 12:44:37.000000000 +0000 +++ nova-16.1.2/debian/changelog 2018-05-01 13:41:19.000000000 +0000 @@ -1,3 +1,10 @@ +nova (2:16.1.2-0ubuntu1) artful; urgency=medium + + * New stable point release for OpenStack Pike (LP: #1763320). + * d/p/*: Rebased. + + -- Corey Bryant Tue, 01 May 2018 09:41:19 -0400 + nova (2:16.1.0-0ubuntu1) artful; urgency=medium [ James Page ] diff -Nru nova-16.1.0/debian/patches/aarch64-libvirt-compat.patch nova-16.1.2/debian/patches/aarch64-libvirt-compat.patch --- nova-16.1.0/debian/patches/aarch64-libvirt-compat.patch 2018-03-22 12:44:37.000000000 +0000 +++ nova-16.1.2/debian/patches/aarch64-libvirt-compat.patch 2018-05-01 13:41:19.000000000 +0000 @@ -9,7 +9,7 @@ --- a/nova/virt/libvirt/driver.py +++ b/nova/virt/libvirt/driver.py -@@ -4658,6 +4658,13 @@ +@@ -4770,6 +4770,13 @@ self._is_s390x_guest(image_meta)): self._create_consoles_s390x(guest_cfg, instance, flavor, image_meta) @@ -23,7 +23,7 @@ elif virt_type in ("qemu", "kvm"): self._create_consoles_qemu_kvm(guest_cfg, instance, flavor, image_meta) -@@ -4666,6 +4673,12 @@ +@@ -4778,6 +4785,12 @@ s390x_archs = (fields.Architecture.S390, fields.Architecture.S390X) return libvirt_utils.get_arch(image_meta) in s390x_archs @@ -36,7 +36,7 @@ def _create_consoles_qemu_kvm(self, guest_cfg, instance, flavor, image_meta): char_dev_cls = vconfig.LibvirtConfigGuestSerial -@@ -4695,6 +4708,25 @@ +@@ -4807,6 +4820,25 @@ "sclplm") self._create_pty_device(guest_cfg, char_dev_cls, "sclp", log_path) @@ -62,7 +62,7 @@ def _create_pty_device(self, guest_cfg, char_dev_cls, target_type=None, log_path=None): def _create_base_dev(): -@@ -4732,8 +4764,8 @@ +@@ -4844,8 +4876,8 @@ guest_cfg.add_device(_create_base_dev()) def _create_file_device(self, guest_cfg, instance, char_dev_cls, @@ -75,7 +75,7 @@ consolelog = char_dev_cls() --- a/nova/tests/unit/virt/libvirt/test_driver.py +++ b/nova/tests/unit/virt/libvirt/test_driver.py -@@ -3839,6 +3839,7 @@ +@@ -4026,6 +4026,7 @@ return_value=4) @mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch', side_effect=[fields.Architecture.X86_64, @@ -83,7 +83,7 @@ fields.Architecture.S390, fields.Architecture.S390X]) def test_create_serial_console_devices_with_limit_exceeded_based_on_arch( -@@ -4268,6 +4269,31 @@ +@@ -4456,6 +4457,31 @@ self.assertEqual("pty", terminal_device.type) self.assertEqual("s390-ccw-virtio", cfg.os_mach_type) diff -Nru nova-16.1.0/debian/patches/arm-console-patch.patch nova-16.1.2/debian/patches/arm-console-patch.patch --- nova-16.1.0/debian/patches/arm-console-patch.patch 2018-03-22 12:44:37.000000000 +0000 +++ nova-16.1.2/debian/patches/arm-console-patch.patch 2018-05-01 13:41:19.000000000 +0000 @@ -1,6 +1,6 @@ --- a/nova/tests/unit/virt/libvirt/test_driver.py +++ b/nova/tests/unit/virt/libvirt/test_driver.py -@@ -2077,7 +2077,7 @@ +@@ -2113,7 +2113,7 @@ self.assertEqual(instance_ref.flavor.vcpus, cfg.vcpus) self.assertEqual(fields.VMMode.EXE, cfg.os_type) self.assertEqual("/sbin/init", cfg.os_init_path) @@ -9,7 +9,7 @@ cfg.os_cmdline) self.assertIsNone(cfg.os_root) self.assertEqual(3, len(cfg.devices)) -@@ -2103,7 +2103,7 @@ +@@ -2139,7 +2139,7 @@ self.assertEqual(instance_ref.vcpus, 
cfg.vcpus) self.assertEqual(fields.VMMode.EXE, cfg.os_type) self.assertEqual("/sbin/init", cfg.os_init_path) @@ -20,7 +20,7 @@ self.assertEqual(3, len(cfg.devices)) --- a/nova/virt/libvirt/driver.py +++ b/nova/virt/libvirt/driver.py -@@ -136,7 +136,7 @@ +@@ -139,7 +139,7 @@ DISABLE_REASON_UNDEFINED = None # Guest config console string diff -Nru nova-16.1.0/doc/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json nova-16.1.2/doc/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json --- nova-16.1.0/doc/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/doc/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json 2018-04-25 09:22:33.000000000 +0000 @@ -22,7 +22,7 @@ "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, - "hypervisor_hostname": "fake-mini", + "hypervisor_hostname": "host1", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": 2, diff -Nru nova-16.1.0/doc/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json nova-16.1.2/doc/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json --- nova-16.1.0/doc/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/doc/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json 2018-04-25 09:22:24.000000000 +0000 @@ -1,7 +1,7 @@ { "hypervisors": [ { - "hypervisor_hostname": "fake-mini", + "hypervisor_hostname": "host1", "id": 2, "state": "up", "status": "enabled" diff -Nru nova-16.1.0/doc/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json nova-16.1.2/doc/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json --- nova-16.1.0/doc/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/doc/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json 2018-04-25 09:22:33.000000000 +0000 @@ -22,7 +22,7 @@ "host_ip": "1.1.1.1", "free_disk_gb": 1028, "free_ram_mb": 7680, - "hypervisor_hostname": "fake-mini", + "hypervisor_hostname": "host2", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": "1bb62a04-c576-402c-8147-9e89757a09e3", diff -Nru nova-16.1.0/doc/source/admin/configuration/schedulers.rst nova-16.1.2/doc/source/admin/configuration/schedulers.rst --- nova-16.1.0/doc/source/admin/configuration/schedulers.rst 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/doc/source/admin/configuration/schedulers.rst 2018-04-25 09:22:33.000000000 +0000 @@ -519,6 +519,7 @@ .. code-block:: ini + [filter_scheduler] isolated_hosts = server1, server2 isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09 diff -Nru nova-16.1.0/doc/source/admin/node-down.rst nova-16.1.2/doc/source/admin/node-down.rst --- nova-16.1.0/doc/source/admin/node-down.rst 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/doc/source/admin/node-down.rst 2018-04-25 09:22:33.000000000 +0000 @@ -10,8 +10,7 @@ If a hardware malfunction or other error causes the cloud compute node to fail, you can use the :command:`nova evacuate` command to evacuate instances. See -the `OpenStack Administrator Guide -`__. +:doc:`evacuate instances ` for more information on using the command. .. 
_nova-compute-node-down-manual-recovery: diff -Nru nova-16.1.0/doc/source/cli/nova-manage.rst nova-16.1.2/doc/source/cli/nova-manage.rst --- nova-16.1.0/doc/source/cli/nova-manage.rst 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/doc/source/cli/nova-manage.rst 2018-04-25 09:22:33.000000000 +0000 @@ -171,17 +171,21 @@ transport url or database connection was missing, and 2 if a cell is already using that transport url and database connection combination. -``nova-manage cell_v2 discover_hosts [--cell_uuid ] [--verbose] [--strict]`` +``nova-manage cell_v2 discover_hosts [--cell_uuid ] [--verbose] [--strict] [--by-service]`` Searches cells, or a single cell, and maps found hosts. This command will - check the database for each cell (or a single one if passed in) and map - any hosts which are not currently mapped. If a host is already mapped - nothing will be done. You need to re-run this command each time you add - more compute hosts to a cell (otherwise the scheduler will never place - instances there and the API will not list the new hosts). If the strict - option is provided the command will only be considered successful if an - unmapped host is discovered (exit code 0). Any other case is considered a - failure (exit code 1). + check the database for each cell (or a single one if passed in) and map any + hosts which are not currently mapped. If a host is already mapped nothing + will be done. You need to re-run this command each time you add more + compute hosts to a cell (otherwise the scheduler will never place instances + there and the API will not list the new hosts). If the strict option is + provided the command will only be considered successful if an unmapped host + is discovered (exit code 0). Any other case is considered a failure (exit + code 1). If --by-service is specified, this command will look in the + appropriate cell(s) for any nova-compute services and ensure there are host + mappings for them. This is less efficient and is only necessary when using + compute drivers that may manage zero or more actual compute nodes at any + given time (currently only ironic). ``nova-manage cell_v2 list_cells [--verbose]`` diff -Nru nova-16.1.0/doc/source/conf.py nova-16.1.2/doc/source/conf.py --- nova-16.1.0/doc/source/conf.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/doc/source/conf.py 2018-04-25 09:22:33.000000000 +0000 @@ -150,6 +150,10 @@ # using the given strftime format. html_last_updated_fmt = '%Y-%m-%d %H:%M' +# Disable smartquotes to ensure all quoted example config options can be copied +# from the docs without later causing unicode errors within Nova. +html_use_smartypants = False + # -- Options for LaTeX output ------------------------------------------------- # Grouping the document tree into LaTeX files. List of tuples diff -Nru nova-16.1.0/doc/source/install/controller-install-obs.rst nova-16.1.2/doc/source/install/controller-install-obs.rst --- nova-16.1.0/doc/source/install/controller-install-obs.rst 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/doc/source/install/controller-install-obs.rst 2018-04-25 09:22:33.000000000 +0000 @@ -207,7 +207,7 @@ .. 
code-block:: console - $ openstack endpoint create --region RegionOne placement public http://controller:8778 + $ openstack endpoint create --region RegionOne placement public http://controller:8780 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ @@ -219,10 +219,10 @@ | service_id | 2d1a27022e6e4185b86adac4444c495f | | service_name | placement | | service_type | placement | - | url | http://controller:8778 | + | url | http://controller:8780 | +--------------+----------------------------------+ - $ openstack endpoint create --region RegionOne placement internal http://controller:8778 + $ openstack endpoint create --region RegionOne placement internal http://controller:8780 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ @@ -234,10 +234,10 @@ | service_id | 2d1a27022e6e4185b86adac4444c495f | | service_name | placement | | service_type | placement | - | url | http://controller:8778 | + | url | http://controller:8780 | +--------------+----------------------------------+ - $ openstack endpoint create --region RegionOne placement admin http://controller:8778 + $ openstack endpoint create --region RegionOne placement admin http://controller:8780 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ @@ -249,7 +249,7 @@ | service_id | 2d1a27022e6e4185b86adac4444c495f | | service_name | placement | | service_type | placement | - | url | http://controller:8778 | + | url | http://controller:8780 | +--------------+----------------------------------+ Install and configure components diff -Nru nova-16.1.0/doc/source/user/filter-scheduler.rst nova-16.1.2/doc/source/user/filter-scheduler.rst --- nova-16.1.0/doc/source/user/filter-scheduler.rst 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/doc/source/user/filter-scheduler.rst 2018-04-25 09:22:33.000000000 +0000 @@ -102,7 +102,7 @@ fall back to the global default ``cpu_allocation_ratio``. If more than one value is found for a host (meaning the host is in two different aggregates with different ratio settings), the minimum value will be used. -* |IsolatedHostsFilter| - filter based on ``image_isolated``, ``host_isolated`` +* |IsolatedHostsFilter| - filter based on ``isolated_images``, ``isolated_hosts`` and ``restrict_isolated_hosts_to_isolated_images`` flags. * |JsonFilter| - allows simple JSON-based grammar for selecting hosts. * |RamFilter| - filters hosts by their RAM. Only hosts with sufficient RAM @@ -255,9 +255,9 @@ Now we are going to |IsolatedHostsFilter|. There can be some special hosts reserved for specific images. These hosts are called **isolated**. So the images to run on the isolated hosts are also called isolated. The filter -checks if ``image_isolated`` flag named in instance specifications is the same -as the host. Isolated hosts can run non isolated images if the flag -``restrict_isolated_hosts_to_isolated_images`` is set to false. +checks if ``isolated_images`` flag named in instance specifications is the same +as the host specified in ``isolated_hosts``. Isolated hosts can run non-isolated +images if the flag ``restrict_isolated_hosts_to_isolated_images`` is set to false. |DifferentHostFilter| - method ``host_passes`` returns ``True`` if the host to place an instance on is different from all the hosts used by a set of instances. 
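For reference, the three corrected option names above are configured together under the ``[filter_scheduler]`` section of nova.conf. A minimal sketch, reusing the illustrative host names and image UUIDs from the schedulers.rst hunk earlier in this diff (values are examples only, not recommendations):

    [filter_scheduler]
    # Hosts reserved for isolated images (illustrative names)
    isolated_hosts = server1, server2
    # Images that may only be scheduled onto the isolated hosts
    isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
    # Defaults to true; set to false to let isolated hosts also run non-isolated images
    restrict_isolated_hosts_to_isolated_images = true
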
diff -Nru nova-16.1.0/nova/api/openstack/compute/migrate_server.py nova-16.1.2/nova/api/openstack/compute/migrate_server.py --- nova-16.1.0/nova/api/openstack/compute/migrate_server.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/api/openstack/compute/migrate_server.py 2018-04-25 09:22:33.000000000 +0000 @@ -100,7 +100,6 @@ raise exc.HTTPNotFound(explanation=e.format_message()) except (exception.NoValidHost, exception.ComputeServiceUnavailable, - exception.ComputeHostNotFound, exception.InvalidHypervisorType, exception.InvalidCPUInfo, exception.UnableToMigrateToSelf, @@ -119,6 +118,8 @@ raise exc.HTTPBadRequest(explanation=ex.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) + except exception.ComputeHostNotFound as e: + raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'os-migrateLive', id) diff -Nru nova-16.1.0/nova/api/openstack/placement/deploy.py nova-16.1.2/nova/api/openstack/placement/deploy.py --- nova-16.1.0/nova/api/openstack/placement/deploy.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/api/openstack/placement/deploy.py 2018-04-25 09:22:33.000000000 +0000 @@ -38,9 +38,11 @@ if conf.api.auth_strategy == 'noauth2': auth_middleware = auth.NoAuthMiddleware else: - # Do not provide global conf to middleware here. + # Do not use 'oslo_config_project' param here as the conf + # location may have been overridden earlier in the deployment + # process with OS_PLACEMENT_CONFIG_DIR in wsgi.py. auth_middleware = auth_token.filter_factory( - {}, oslo_config_project=project_name) + {}, oslo_config_config=conf) # Pass in our CORS config, if any, manually as that's a) # explicit, b) makes testing more straightfoward, c) let's diff -Nru nova-16.1.0/nova/cmd/manage.py nova-16.1.2/nova/cmd/manage.py --- nova-16.1.0/nova/cmd/manage.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/cmd/manage.py 2018-04-25 09:22:33.000000000 +0000 @@ -1567,7 +1567,11 @@ help=_('Considered successful (exit code 0) only when an unmapped ' 'host is discovered. Any other outcome will be considered a ' 'failure (exit code 1).')) - def discover_hosts(self, cell_uuid=None, verbose=False, strict=False): + @args('--by-service', action='store_true', default=False, + dest='by_service', + help=_('Discover hosts by service instead of compute node')) + def discover_hosts(self, cell_uuid=None, verbose=False, strict=False, + by_service=False): """Searches cells, or a single cell, and maps found hosts. When a new host is added to a deployment it will add a service entry @@ -1580,7 +1584,8 @@ print(msg) ctxt = context.RequestContext() - hosts = host_mapping_obj.discover_hosts(ctxt, cell_uuid, status_fn) + hosts = host_mapping_obj.discover_hosts(ctxt, cell_uuid, status_fn, + by_service) # discover_hosts will return an empty list if no hosts are discovered if strict: return int(not hosts) diff -Nru nova-16.1.0/nova/compute/api.py nova-16.1.2/nova/compute/api.py --- nova-16.1.0/nova/compute/api.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/compute/api.py 2018-04-25 09:22:33.000000000 +0000 @@ -1352,6 +1352,13 @@ volume = self._check_attach(context, volume_id, instance) bdm.volume_size = volume.get('size') + + # NOTE(mnaser): If we end up reserving the volume, it will + # not have an attachment_id which is needed + # for cleanups. This can be removed once + # all calls to reserve_volume are gone. 
+ if 'attachment_id' not in bdm: + bdm.attachment_id = None except (exception.CinderConnectionFailed, exception.InvalidVolume): raise @@ -1780,6 +1787,11 @@ # instance is now in a cell and the delete needs to proceed # normally. return False + + # We need to detach from any volumes so they aren't orphaned. + self._local_cleanup_bdm_volumes( + build_req.block_device_mappings, instance, context) + return True def _delete(self, context, instance, delete_type, cb, **instance_attrs): @@ -1788,12 +1800,12 @@ return cell = None - # If there is an instance.host (or the instance is shelved-offloaded), - # the instance has been scheduled and sent to a cell/compute which - # means it was pulled from the cell db. + # If there is an instance.host (or the instance is shelved-offloaded or + # in error state), the instance has been scheduled and sent to a + # cell/compute which means it was pulled from the cell db. # Normal delete should be attempted. - if not (instance.host or - instance.vm_state == vm_states.SHELVED_OFFLOADED): + may_have_ports_or_volumes = self._may_have_ports_or_volumes(instance) + if not instance.host and not may_have_ports_or_volumes: try: if self._delete_while_booting(context, instance): return @@ -1872,9 +1884,7 @@ # which will cause a cast to the child cell. cb(context, instance, bdms) return - shelved_offloaded = (instance.vm_state - == vm_states.SHELVED_OFFLOADED) - if not instance.host and not shelved_offloaded: + if not instance.host and not may_have_ports_or_volumes: try: compute_utils.notify_about_instance_usage( self.notifier, context, instance, @@ -1889,7 +1899,12 @@ {'state': instance.vm_state}, instance=instance) return - except exception.ObjectActionError: + except exception.ObjectActionError as ex: + # The instance's host likely changed under us as + # this instance could be building and has since been + # scheduled. Continue with attempts to delete it. + LOG.debug('Refreshing instance because: %s', ex, + instance=instance) instance.refresh() if instance.vm_state == vm_states.RESIZED: @@ -1897,7 +1912,8 @@ is_local_delete = True try: - if not shelved_offloaded: + # instance.host must be set in order to look up the service. + if instance.host is not None: service = objects.Service.get_by_compute_host( context.elevated(), instance.host) is_local_delete = not self.servicegroup_api.service_is_up( @@ -1914,7 +1930,9 @@ cb(context, instance, bdms) except exception.ComputeHostNotFound: - pass + LOG.debug('Compute host %s not found during service up check, ' + 'going to local delete instance', instance.host, + instance=instance) if is_local_delete: # If instance is in shelved_offloaded state or compute node @@ -1941,6 +1959,16 @@ # NOTE(comstud): Race condition. Instance already gone. pass + def _may_have_ports_or_volumes(self, instance): + # NOTE(melwitt): When an instance build fails in the compute manager, + # the instance host and node are set to None and the vm_state is set + # to ERROR. In the case, the instance with host = None has actually + # been scheduled and may have ports and/or volumes allocated on the + # compute node. 
+ if instance.vm_state in (vm_states.SHELVED_OFFLOADED, vm_states.ERROR): + return True + return False + def _confirm_resize_on_deleting(self, context, instance): # If in the middle of a resize, use confirm_resize to # ensure the original instance is cleaned up too @@ -1996,6 +2024,14 @@ 'the instance host %(instance_host)s.', {'connector_host': connector.get('host'), 'instance_host': instance.host}, instance=instance) + if (instance.host is None and + self._may_have_ports_or_volumes(instance)): + LOG.debug('Allowing use of stashed volume connector with ' + 'instance host None because instance with ' + 'vm_state %(vm_state)s has been scheduled in ' + 'the past.', {'vm_state': instance.vm_state}, + instance=instance) + return connector def _local_cleanup_bdm_volumes(self, bdms, instance, context): """The method deletes the bdm records and, if a bdm is a volume, call @@ -2030,7 +2066,12 @@ except Exception as exc: LOG.warning("Ignoring volume cleanup failure due to %s", exc, instance=instance) - bdm.destroy() + # If we're cleaning up volumes from an instance that wasn't yet + # created in a cell, i.e. the user deleted the server while + # the BuildRequest still existed, then the BDM doesn't actually + # exist in the DB to destroy it. + if 'id' in bdm: + bdm.destroy() def _local_delete(self, context, instance, bdms, delete_type, cb): if instance.vm_state == vm_states.SHELVED_OFFLOADED: @@ -2157,7 +2198,8 @@ instance.save(expected_task_state=[None]) @check_instance_lock - @check_instance_state(must_have_launched=False) + @check_instance_state(task_state=None, + must_have_launched=False) def force_delete(self, context, instance): """Force delete an instance in any vm_state/task_state.""" self._delete(context, instance, 'force_delete', self._do_force_delete, @@ -2821,6 +2863,8 @@ quiesced = False if instance.vm_state == vm_states.ACTIVE: try: + LOG.info("Attempting to quiesce instance before volume " + "snapshot.", instance=instance) self.compute_rpcapi.quiesce_instance(context, instance) quiesced = True except (exception.InstanceQuiesceNotSupported, @@ -2838,28 +2882,43 @@ context, instance.uuid) mapping = [] - for bdm in bdms: - if bdm.no_device: - continue + try: + for bdm in bdms: + if bdm.no_device: + continue - if bdm.is_volume: - # create snapshot based on volume_id - volume = self.volume_api.get(context, bdm.volume_id) - # NOTE(yamahata): Should we wait for snapshot creation? - # Linux LVM snapshot creation completes in - # short time, it doesn't matter for now. - name = _('snapshot for %s') % image_meta['name'] - LOG.debug('Creating snapshot from volume %s.', volume['id'], - instance=instance) - snapshot = self.volume_api.create_snapshot_force( - context, volume['id'], name, volume['display_description']) - mapping_dict = block_device.snapshot_from_bdm(snapshot['id'], - bdm) - mapping_dict = mapping_dict.get_image_mapping() - else: - mapping_dict = bdm.get_image_mapping() + if bdm.is_volume: + # create snapshot based on volume_id + volume = self.volume_api.get(context, bdm.volume_id) + # NOTE(yamahata): Should we wait for snapshot creation? + # Linux LVM snapshot creation completes in short time, + # it doesn't matter for now. 
+ name = _('snapshot for %s') % image_meta['name'] + LOG.debug('Creating snapshot from volume %s.', + volume['id'], instance=instance) + snapshot = self.volume_api.create_snapshot_force( + context, volume['id'], + name, volume['display_description']) + mapping_dict = block_device.snapshot_from_bdm( + snapshot['id'], bdm) + mapping_dict = mapping_dict.get_image_mapping() + else: + mapping_dict = bdm.get_image_mapping() - mapping.append(mapping_dict) + mapping.append(mapping_dict) + # NOTE(tasker): No error handling is done in the above for loop. + # This means that if the snapshot fails and throws an exception + # the traceback will skip right over the unquiesce needed below. + # Here, catch any exception, unquiesce the instance, and raise the + # error so that the calling function can do what it needs to in + # order to properly treat a failed snap. + except Exception: + with excutils.save_and_reraise_exception(): + if quiesced: + LOG.info("Unquiescing instance after volume snapshot " + "failure.", instance=instance) + self.compute_rpcapi.unquiesce_instance( + context, instance, mapping) if quiesced: self.compute_rpcapi.unquiesce_instance(context, instance, mapping) @@ -3763,8 +3822,7 @@ @check_instance_lock @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED, - vm_states.SUSPENDED, vm_states.STOPPED, - vm_states.RESIZED, vm_states.SOFT_DELETED]) + vm_states.RESIZED]) def swap_volume(self, context, instance, old_volume, new_volume): """Swap volume attached to an instance.""" # The caller likely got the instance from volume['attachments'] diff -Nru nova-16.1.0/nova/compute/manager.py nova-16.1.2/nova/compute/manager.py --- nova-16.1.0/nova/compute/manager.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/compute/manager.py 2018-04-25 09:22:33.000000000 +0000 @@ -1864,7 +1864,7 @@ instance=instance) self._cleanup_allocated_networks(context, instance, requested_networks) - self._cleanup_volumes(context, instance.uuid, + self._cleanup_volumes(context, instance, block_device_mapping, raise_exc=False) compute_utils.add_instance_fault_from_exc(context, instance, e, sys.exc_info(), @@ -1880,12 +1880,21 @@ retry['exc_reason'] = e.kwargs['reason'] # NOTE(comstud): Deallocate networks if the driver wants # us to do so. + # NOTE(mriedem): Always deallocate networking when using Neutron. + # This is to unbind any ports that the user supplied in the server + # create request, or delete any ports that nova created which were + # meant to be bound to this host. This check intentionally bypasses + # the result of deallocate_networks_on_reschedule because the + # default value in the driver is False, but that method was really + # only meant for Ironic and should be removed when nova-network is + # removed (since is_neutron() will then always be True). # NOTE(vladikr): SR-IOV ports should be deallocated to # allow new sriov pci devices to be allocated on a new host. # Otherwise, if devices with pci addresses are already allocated # on the destination host, the instance will fail to spawn. # info_cache.network_info should be present at this stage. 
if (self.driver.deallocate_networks_on_reschedule(instance) or + utils.is_neutron() or self.deallocate_sriov_ports_on_reschedule(instance)): self._cleanup_allocated_networks(context, instance, requested_networks) @@ -1917,7 +1926,7 @@ LOG.exception(e.format_message(), instance=instance) self._cleanup_allocated_networks(context, instance, requested_networks) - self._cleanup_volumes(context, instance.uuid, + self._cleanup_volumes(context, instance, block_device_mapping, raise_exc=False) compute_utils.add_instance_fault_from_exc(context, instance, e, sys.exc_info()) @@ -1931,7 +1940,7 @@ instance=instance) self._cleanup_allocated_networks(context, instance, requested_networks) - self._cleanup_volumes(context, instance.uuid, + self._cleanup_volumes(context, instance, block_device_mapping, raise_exc=False) compute_utils.add_instance_fault_from_exc(context, instance, e, sys.exc_info()) @@ -2377,14 +2386,27 @@ self.host, action=fields.NotificationAction.SHUTDOWN, phase=fields.NotificationPhase.END) - def _cleanup_volumes(self, context, instance_uuid, bdms, raise_exc=True): + def _cleanup_volumes(self, context, instance, bdms, raise_exc=True, + detach=True): exc_info = None - for bdm in bdms: - LOG.debug("terminating bdm %s", bdm, - instance_uuid=instance_uuid) + if detach and bdm.volume_id: + try: + LOG.debug("Detaching volume: %s", bdm.volume_id, + instance_uuid=instance.uuid) + destroy = bdm.delete_on_termination + self._detach_volume(context, bdm, instance, + destroy_bdm=destroy) + except Exception as exc: + exc_info = sys.exc_info() + LOG.warning('Failed to detach volume: %(volume_id)s ' + 'due to %(exc)s', + {'volume_id': bdm.volume_id, 'exc': exc}) + if bdm.volume_id and bdm.delete_on_termination: try: + LOG.debug("Deleting volume: %s", bdm.volume_id, + instance_uuid=instance.uuid) self.volume_api.delete(context, bdm.volume_id) except Exception as exc: exc_info = sys.exc_info() @@ -2436,8 +2458,14 @@ # future to set an instance fault the first time # and to only ignore the failure if the instance # is already in ERROR. - self._cleanup_volumes(context, instance.uuid, bdms, - raise_exc=False) + + # NOTE(ameeda): The volumes already detached during the above + # _shutdown_instance() call and this is why + # detach is not requested from _cleanup_volumes() + # in this case + + self._cleanup_volumes(context, instance, bdms, + raise_exc=False, detach=False) # if a delete task succeeded, always update vm state and task # state without expecting task state to be DELETING instance.vm_state = vm_states.DELETED @@ -2774,23 +2802,14 @@ LOG.info("Rebuilding instance", instance=instance) - # NOTE(gyee): there are three possible scenarios. - # - # 1. instance is being rebuilt on the same node. In this case, - # recreate should be False and scheduled_node should be None. - # 2. instance is being rebuilt on a node chosen by the - # scheduler (i.e. evacuate). In this case, scheduled_node should - # be specified and recreate should be True. - # 3. instance is being rebuilt on a node chosen by the user. (i.e. - # force evacuate). In this case, scheduled_node is not specified - # and recreate is set to True. - # - # For scenarios #2 and #3, we must do rebuild claim as server is - # being evacuated to a different node. - if recreate or scheduled_node is not None: + if recreate: + # This is an evacuation to a new host, so we need to perform a + # resource claim. 
rt = self._get_resource_tracker() rebuild_claim = rt.rebuild_claim else: + # This is a rebuild to the same host, so we don't need to make + # a claim since the instance is already on this host. rebuild_claim = claims.NopClaim image_meta = {} @@ -3317,6 +3336,12 @@ LOG.info("Failed to find image %(image_id)s to " "delete", {'image_id': image_id}, instance=instance) + except (exception.ImageDeleteConflict, Exception) as exc: + LOG.info("Failed to delete image %(image_id)s during " + "deleting excess backups. " + "Continuing for next image.. %(exc)s", + {'image_id': image_id, 'exc': exc}, + instance=instance) @wrap_exception() @reverts_task_state @@ -3916,6 +3941,18 @@ reservations, migration, instance_type, clean_shutdown): """Starts the migration of a running instance to another host.""" + try: + self._resize_instance(context, instance, image, migration, + instance_type, clean_shutdown) + except Exception: + with excutils.save_and_reraise_exception(): + rt = self._get_resource_tracker() + node = self.driver.get_available_nodes(refresh=True)[0] + rt.delete_allocation_for_failed_resize( + instance, node, instance_type) + + def _resize_instance(self, context, instance, image, + migration, instance_type, clean_shutdown): with self._error_out_instance_on_exception(context, instance): # TODO(chaochin) Remove this until v5 RPC API # Code downstream may expect extra_specs to be populated since it @@ -4097,6 +4134,23 @@ new host machine. """ + try: + self._finish_resize_helper(context, disk_info, image, instance, + migration) + except Exception: + with excutils.save_and_reraise_exception(): + rt = self._get_resource_tracker() + node = self.driver.get_available_nodes(refresh=True)[0] + rt.delete_allocation_for_failed_resize( + instance, node, instance.new_flavor) + + def _finish_resize_helper(self, context, disk_info, image, instance, + migration): + """Completes the migration process. + + The caller must revert the instance's allocations if the migration + process failed. + """ with self._error_out_instance_on_exception(context, instance): image_meta = objects.ImageMeta.from_dict(image) self._finish_resize(context, instance, migration, @@ -4543,8 +4597,11 @@ # or if we did claim but the spawn failed, because aborting the # instance claim will not remove the allocations. rt.reportclient.delete_allocation_for_instance(instance.uuid) - # FIXME: Umm, shouldn't we be rolling back volume connections - # and port bindings? + # FIXME: Umm, shouldn't we be rolling back port bindings too? + self._terminate_volume_connections(context, instance, bdms) + # The reverts_task_state decorator on unshelve_instance will + # eventually save these updates. 
+ self._nil_out_instance_obj_host_and_node(instance) if image: instance.image_ref = shelved_image_ref @@ -5053,8 +5110,8 @@ "old: %(old_cinfo)s", {'new_cinfo': new_cinfo, 'old_cinfo': old_cinfo}, instance=instance) - self.driver.swap_volume(old_cinfo, new_cinfo, instance, mountpoint, - resize_to) + self.driver.swap_volume(context, old_cinfo, new_cinfo, instance, + mountpoint, resize_to) LOG.debug("swap_volume: Driver volume swap returned, new " "connection_info is now : %(new_cinfo)s", {'new_cinfo': new_cinfo}) @@ -6829,7 +6886,7 @@ try: self._shutdown_instance(context, instance, bdms, notify=False) - self._cleanup_volumes(context, instance.uuid, bdms) + self._cleanup_volumes(context, instance, bdms) except Exception as e: LOG.warning("Periodic cleanup failed to delete " "instance: %s", diff -Nru nova-16.1.0/nova/compute/resource_tracker.py nova-16.1.2/nova/compute/resource_tracker.py --- nova-16.1.0/nova/compute/resource_tracker.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/compute/resource_tracker.py 2018-04-25 09:22:33.000000000 +0000 @@ -516,6 +516,50 @@ return (nodename not in self.compute_nodes or not self.driver.node_is_available(nodename)) + def _check_for_nodes_rebalance(self, context, resources, nodename): + """Check if nodes rebalance has happened. + + The ironic driver maintains a hash ring mapping bare metal nodes + to compute nodes. If a compute dies, the hash ring is rebuilt, and + some of its bare metal nodes (more precisely, those not in ACTIVE + state) are assigned to other computes. + + This method checks for this condition and adjusts the database + accordingly. + + :param context: security context + :param resources: initial values + :param nodename: node name + :returns: True if a suitable compute node record was found, else False + """ + if not self.driver.rebalances_nodes: + return False + + # Its possible ironic just did a node re-balance, so let's + # check if there is a compute node that already has the correct + # hypervisor_hostname. We can re-use that rather than create a + # new one and have to move existing placement allocations + cn_candidates = objects.ComputeNodeList.get_by_hypervisor( + context, nodename) + + if len(cn_candidates) == 1: + cn = cn_candidates[0] + LOG.info("ComputeNode %(name)s moving from %(old)s to %(new)s", + {"name": nodename, "old": cn.host, "new": self.host}) + cn.host = self.host + self.compute_nodes[nodename] = cn + self._copy_resources(cn, resources) + self._setup_pci_tracker(context, cn, resources) + self._update(context, cn) + return True + elif len(cn_candidates) > 1: + LOG.error( + "Found more than one ComputeNode for nodename %s. " + "Please clean up the orphaned ComputeNode records in your DB.", + nodename) + + return False + def _init_compute_node(self, context, resources): """Initialize the compute node if it does not already exist. @@ -551,6 +595,9 @@ self._update(context, cn) return + if self._check_for_nodes_rebalance(context, resources, nodename): + return + # there was no local copy and none in the database # so we need to create a new compute node. This needs # to be initialized with resource values. 
diff -Nru nova-16.1.0/nova/conductor/manager.py nova-16.1.2/nova/conductor/manager.py --- nova-16.1.0/nova/conductor/manager.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/conductor/manager.py 2018-04-25 09:22:33.000000000 +0000 @@ -888,6 +888,11 @@ if migration: migration.status = 'error' migration.save() + # Rollback the image_ref if a new one was provided (this + # only happens in the rebuild case, not evacuate). + if orig_image_ref and orig_image_ref != image_ref: + instance.image_ref = orig_image_ref + instance.save() request_spec = request_spec.to_legacy_request_spec_dict() with excutils.save_and_reraise_exception(): self._set_vm_state_and_notify(context, instance.uuid, @@ -902,6 +907,11 @@ if migration: migration.status = 'error' migration.save() + # Rollback the image_ref if a new one was provided (this + # only happens in the rebuild case, not evacuate). + if orig_image_ref and orig_image_ref != image_ref: + instance.image_ref = orig_image_ref + instance.save() request_spec = request_spec.to_legacy_request_spec_dict() with excutils.save_and_reraise_exception(): self._set_vm_state_and_notify(context, instance.uuid, @@ -975,7 +985,8 @@ return tags def _bury_in_cell0(self, context, request_spec, exc, - build_requests=None, instances=None): + build_requests=None, instances=None, + block_device_mapping=None): """Ensure all provided build_requests and instances end up in cell0. Cell0 is the fake cell we schedule dead instances to when we can't @@ -1012,6 +1023,14 @@ for instance in instances_by_uuid.values(): with obj_target_cell(instance, cell0) as cctxt: instance.create() + + # NOTE(mnaser): In order to properly clean-up volumes after + # being buried in cell0, we need to store BDMs. + if block_device_mapping: + self._create_block_device_mapping( + cell0, instance.flavor, instance.uuid, + block_device_mapping) + # Use the context targeted to cell0 here since the instance is # now in cell0. self._set_vm_state_and_notify( @@ -1050,7 +1069,8 @@ except Exception as exc: LOG.exception('Failed to schedule instances') self._bury_in_cell0(context, request_specs[0], exc, - build_requests=build_requests) + build_requests=build_requests, + block_device_mapping=block_device_mapping) return host_mapping_cache = {} @@ -1070,9 +1090,10 @@ LOG.error('No host-to-cell mapping found for selected ' 'host %(host)s. Setup is incomplete.', {'host': host['host']}) - self._bury_in_cell0(context, request_spec, exc, - build_requests=[build_request], - instances=[instance]) + self._bury_in_cell0( + context, request_spec, exc, + build_requests=[build_request], instances=[instance], + block_device_mapping=block_device_mapping) # This is a placeholder in case the quota recheck fails. instances.append(None) continue diff -Nru nova-16.1.0/nova/conf/libvirt.py nova-16.1.2/nova/conf/libvirt.py --- nova-16.1.0/nova/conf/libvirt.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/conf/libvirt.py 2018-04-25 09:22:33.000000000 +0000 @@ -23,6 +23,8 @@ from oslo_config import cfg +from oslo_config import types + from nova.conf import paths @@ -513,6 +515,58 @@ This would result in an error and the instance won't be launched. * ``virt_type``: Only the virtualization types ``kvm`` and ``qemu`` use this. """), + cfg.ListOpt( + 'cpu_model_extra_flags', + item_type=types.String( + choices=['pcid'], + ignore_case=True, + ), + default=[], + help=""" +This allows specifying granular CPU feature flags when specifying CPU +models. 
For example, to explicitly specify the ``pcid`` +(Process-Context ID, an Intel processor feature) flag to the "IvyBridge" +virtual CPU model:: + + [libvirt] + cpu_mode = custom + cpu_model = IvyBridge + cpu_model_extra_flags = pcid + +Currently, the choice is restricted to only one option: ``pcid`` (the +option is case-insensitive, so ``PCID`` is also valid). This flag is +now required to address the guest performance degradation as a result of +applying the "Meltdown" CVE fixes on certain Intel CPU models. + +Note that when using this config attribute to set the 'PCID' CPU flag, +not all virtual (i.e. libvirt / QEMU) CPU models need it: + +* The only virtual CPU models that include the 'PCID' capability are + Intel "Haswell", "Broadwell", and "Skylake" variants. + +* The libvirt / QEMU CPU models "Nehalem", "Westmere", "SandyBridge", + and "IvyBridge" will _not_ expose the 'PCID' capability by default, + even if the host CPUs by the same name include it. I.e. 'PCID' needs + to be explicitly specified when using the said virtual CPU models. + +For now, the ``cpu_model_extra_flags`` config attribute is valid only in +combination with ``cpu_mode`` + ``cpu_model`` options. + +Besides ``custom``, the libvirt driver has two other CPU modes: The +default, ``host-model``, tells it to do the right thing with respect to +handling 'PCID' CPU flag for the guest -- *assuming* you are running +updated processor microcode, host and guest kernel, libvirt, and QEMU. +The other mode, ``host-passthrough``, checks if 'PCID' is available in +the hardware, and if so directly passes it through to the Nova guests. +Thus, in context of 'PCID', with either of these CPU modes +(``host-model`` or ``host-passthrough``), there is no need to use the +``cpu_model_extra_flags``. + +Related options: + +* cpu_mode +* cpu_model +"""), cfg.StrOpt('snapshots_directory', default='$instances_path/snapshots', help='Location where libvirt driver will store snapshots ' diff -Nru nova-16.1.0/nova/exception.py nova-16.1.2/nova/exception.py --- nova-16.1.0/nova/exception.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/exception.py 2018-04-25 09:22:33.000000000 +0000 @@ -652,6 +652,10 @@ msg_fmt = _("Image %(image_id)s could not be found.") +class ImageDeleteConflict(NovaException): + msg_fmt = _("Conflict deleting image. Reason: %(reason)s.") + + class PreserveEphemeralNotSupported(Invalid): msg_fmt = _("The current driver does not support " "preserving ephemeral partitions.") @@ -2066,6 +2070,14 @@ msg_fmt = _("Resource provider has allocations.") +class ResourceProviderRetrievalFailed(NovaException): + msg_fmt = _("Failed to get resource provider with UUID %(uuid)s") + + +class ResourceProviderCreationFailed(NovaException): + msg_fmt = _("Failed to create resource provider %(name)s") + + class InventoryWithResourceClassNotFound(NotFound): msg_fmt = _("No inventory of class %(resource_class)s found.") diff -Nru nova-16.1.0/nova/image/glance.py nova-16.1.2/nova/image/glance.py --- nova-16.1.0/nova/image/glance.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/image/glance.py 2018-04-25 09:22:33.000000000 +0000 @@ -533,6 +533,7 @@ :raises: ImageNotFound if the image does not exist. :raises: NotAuthorized if the user is not an owner. :raises: ImageNotAuthorized if the user is not authorized. + :raises: ImageDeleteConflict if the image is conflicted to delete. 
""" try: @@ -541,6 +542,8 @@ raise exception.ImageNotFound(image_id=image_id) except glanceclient.exc.HTTPForbidden: raise exception.ImageNotAuthorized(image_id=image_id) + except glanceclient.exc.HTTPConflict as exc: + raise exception.ImageDeleteConflict(reason=six.text_type(exc)) return True diff -Nru nova-16.1.0/nova/network/model.py nova-16.1.2/nova/network/model.py --- nova-16.1.0/nova/network/model.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/network/model.py 2018-04-25 09:22:33.000000000 +0000 @@ -409,8 +409,11 @@ return not self.__eq__(other) def fixed_ips(self): - return [fixed_ip for subnet in self['network']['subnets'] - for fixed_ip in subnet['ips']] + if self['network']: + return [fixed_ip for subnet in self['network']['subnets'] + for fixed_ip in subnet['ips']] + else: + return [] def floating_ips(self): return [floating_ip for fixed_ip in self.fixed_ips() diff -Nru nova-16.1.0/nova/network/neutronv2/api.py nova-16.1.2/nova/network/neutronv2/api.py --- nova-16.1.0/nova/network/neutronv2/api.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/network/neutronv2/api.py 2018-04-25 09:22:33.000000000 +0000 @@ -127,7 +127,7 @@ "admin credential located in nova.conf")) raise exception.NeutronAdminCredentialConfigurationInvalid() except neutron_client_exc.Forbidden as e: - raise exception.Forbidden(e) + raise exception.Forbidden(six.text_type(e)) return ret return wrapper diff -Nru nova-16.1.0/nova/objects/host_mapping.py nova-16.1.2/nova/objects/host_mapping.py --- nova-16.1.0/nova/objects/host_mapping.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/objects/host_mapping.py 2018-04-25 09:22:33.000000000 +0000 @@ -167,7 +167,7 @@ return base.obj_make_list(context, cls(), HostMapping, db_mappings) -def _check_and_create_host_mappings(ctxt, cm, compute_nodes, status_fn): +def _check_and_create_node_host_mappings(ctxt, cm, compute_nodes, status_fn): host_mappings = [] for compute in compute_nodes: status_fn(_("Checking host mapping for compute host " @@ -189,7 +189,41 @@ return host_mappings -def discover_hosts(ctxt, cell_uuid=None, status_fn=None): +def _check_and_create_service_host_mappings(ctxt, cm, services, status_fn): + host_mappings = [] + for service in services: + try: + HostMapping.get_by_host(ctxt, service.host) + except exception.HostMappingNotFound: + status_fn(_('Creating host mapping for service %(srv)s') % + {'srv': service.host}) + host_mapping = HostMapping( + ctxt, host=service.host, + cell_mapping=cm) + host_mapping.create() + host_mappings.append(host_mapping) + return host_mappings + + +def _check_and_create_host_mappings(ctxt, cm, status_fn, by_service): + from nova import objects + + if by_service: + services = objects.ServiceList.get_by_binary( + ctxt, 'nova-compute', include_disabled=True) + added_hm = _check_and_create_service_host_mappings(ctxt, cm, + services, + status_fn) + else: + compute_nodes = objects.ComputeNodeList.get_all_by_not_mapped( + ctxt, 1) + added_hm = _check_and_create_node_host_mappings(ctxt, cm, + compute_nodes, + status_fn) + return added_hm + + +def discover_hosts(ctxt, cell_uuid=None, status_fn=None, by_service=False): # TODO(alaski): If this is not run on a host configured to use the API # database most of the lookups below will fail and may not provide a # great error message. 
Add a check which will raise a useful error @@ -212,21 +246,19 @@ status_fn(_('Skipping cell0 since it does not contain hosts.')) continue if 'name' in cm and cm.name: - status_fn(_("Getting compute nodes from cell '%(name)s': " + status_fn(_("Getting computes from cell '%(name)s': " "%(uuid)s") % {'name': cm.name, 'uuid': cm.uuid}) else: - status_fn(_("Getting compute nodes from cell: %(uuid)s") % + status_fn(_("Getting computes from cell: %(uuid)s") % {'uuid': cm.uuid}) with context.target_cell(ctxt, cm) as cctxt: - compute_nodes = objects.ComputeNodeList.get_all_by_not_mapped( - cctxt, 1) + added_hm = _check_and_create_host_mappings(cctxt, cm, status_fn, + by_service) status_fn(_('Found %(num)s unmapped computes in cell: %(uuid)s') % - {'num': len(compute_nodes), + {'num': len(added_hm), 'uuid': cm.uuid}) - added_hm = _check_and_create_host_mappings(cctxt, cm, - compute_nodes, - status_fn) + host_mappings.extend(added_hm) return host_mappings diff -Nru nova-16.1.0/nova/objects/instance.py nova-16.1.2/nova/objects/instance.py --- nova-16.1.0/nova/objects/instance.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/objects/instance.py 2018-04-25 09:22:33.000000000 +0000 @@ -832,9 +832,10 @@ self.obj_reset_changes() def _load_generic(self, attrname): - instance = self.__class__.get_by_uuid(self._context, - uuid=self.uuid, - expected_attrs=[attrname]) + with utils.temporary_mutation(self._context, read_deleted='yes'): + instance = self.__class__.get_by_uuid(self._context, + uuid=self.uuid, + expected_attrs=[attrname]) # NOTE(danms): Never allow us to recursively-load if instance.obj_attr_is_set(attrname): @@ -1231,18 +1232,24 @@ db_inst_list = db.instance_get_all_by_filters( context, filters, sort_key, sort_dir, limit=limit, marker=marker, columns_to_join=_expected_cols(expected_attrs)) - return _make_instance_list(context, cls(), db_inst_list, - expected_attrs) + return db_inst_list @base.remotable_classmethod def get_by_filters(cls, context, filters, sort_key='created_at', sort_dir='desc', limit=None, marker=None, expected_attrs=None, use_slave=False, sort_keys=None, sort_dirs=None): - return cls._get_by_filters_impl( + db_inst_list = cls._get_by_filters_impl( context, filters, sort_key=sort_key, sort_dir=sort_dir, limit=limit, marker=marker, expected_attrs=expected_attrs, use_slave=use_slave, sort_keys=sort_keys, sort_dirs=sort_dirs) + # NOTE(melwitt): _make_instance_list could result in joined objects' + # (from expected_attrs) _from_db_object methods being called during + # Instance._from_db_object, each of which might choose to perform + # database writes. So, we call this outside of _get_by_filters_impl to + # avoid being nested inside a 'reader' database transaction context. 
+ return _make_instance_list(context, cls(), db_inst_list, + expected_attrs) @staticmethod @db.select_db_reader_mode diff -Nru nova-16.1.0/nova/objects/request_spec.py nova-16.1.2/nova/objects/request_spec.py --- nova-16.1.0/nova/objects/request_spec.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/objects/request_spec.py 2018-04-25 09:22:33.000000000 +0000 @@ -506,6 +506,9 @@ if 'instance_group' in spec and spec.instance_group: spec.instance_group.members = None spec.instance_group.hosts = None + # NOTE(mriedem): Don't persist retries since those are per-request + if 'retry' in spec and spec.retry: + spec.retry = None db_updates = {'spec': jsonutils.dumps(spec.obj_to_primitive())} if 'instance_uuid' in updates: diff -Nru nova-16.1.0/nova/objects/resource_provider.py nova-16.1.2/nova/objects/resource_provider.py --- nova-16.1.0/nova/objects/resource_provider.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/objects/resource_provider.py 2018-04-25 09:22:33.000000000 +0000 @@ -1850,8 +1850,7 @@ resource_class_id=rc_id, consumer_id=alloc.consumer_id, used=alloc.used) - result = conn.execute(ins_stmt) - alloc.id = result.lastrowid + conn.execute(ins_stmt) # Generation checking happens here. If the inventory for # this resource provider changed out from under us, diff -Nru nova-16.1.0/nova/scheduler/client/report.py nova-16.1.2/nova/scheduler/client/report.py --- nova-16.1.0/nova/scheduler/client/report.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/scheduler/client/report.py 2018-04-25 09:22:33.000000000 +0000 @@ -389,10 +389,10 @@ """Queries the placement API for a resource provider record with the supplied UUID. - Returns a dict of resource provider information if found or None if no - such resource provider could be found. - :param uuid: UUID identifier for the resource provider to look up + :return: A dict of resource provider information if found or None if no + such resource provider could be found. + :raise: ResourceProviderRetrievalFailed on error. """ resp = self.get("/resource_providers/%s" % uuid) if resp.status_code == 200: @@ -412,16 +412,18 @@ 'placement_req_id': placement_req_id, } LOG.error(msg, args) + raise exception.ResourceProviderRetrievalFailed(uuid=uuid) @safe_connect def _create_resource_provider(self, uuid, name): """Calls the placement API to create a new resource provider record. - Returns a dict of resource provider information object representing - the newly-created resource provider. - :param uuid: UUID of the new resource provider :param name: Name of the resource provider + :return: A dict of resource provider information object representing + the newly-created resource provider. + :raise: ResourceProviderCreationFailed or + ResourceProviderRetrievalFailed on error. """ url = "/resource_providers" payload = { @@ -445,7 +447,10 @@ name=name, generation=0, ) - elif resp.status_code == 409: + + # TODO(efried): Push error codes from placement, and use 'em. + name_conflict = 'Conflicting resource provider name:' + if resp.status_code == 409 and name_conflict not in resp.text: # Another thread concurrently created a resource provider with the # same UUID. Log a warning and then just return the resource # provider object from _get_resource_provider() @@ -458,17 +463,19 @@ } LOG.info(msg, args) return self._get_resource_provider(uuid) - else: - msg = _LE("[%(placement_req_id)s] Failed to create resource " - "provider record in placement API for UUID %(uuid)s. 
" - "Got %(status_code)d: %(err_text)s.") - args = { - 'uuid': uuid, - 'status_code': resp.status_code, - 'err_text': resp.text, - 'placement_req_id': placement_req_id, - } - LOG.error(msg, args) + + # A provider with the same *name* already exists, or some other error. + msg = ("[%(placement_req_id)s] Failed to create resource provider " + "record in placement API for UUID %(uuid)s. Got " + "%(status_code)d: %(err_text)s.") + args = { + 'uuid': uuid, + 'status_code': resp.status_code, + 'err_text': resp.text, + 'placement_req_id': placement_req_id, + } + LOG.error(msg, args) + raise exception.ResourceProviderCreationFailed(name=name) def _ensure_resource_provider(self, uuid, name=None): """Ensures that the placement API has a record of a resource provider @@ -479,7 +486,11 @@ The found or created resource provider object is returned from this method. If the resource provider object for the supplied uuid was not found and the resource provider record could not be created in the - placement API, we return None. + placement API, an exception is raised. + + If this method returns successfully, callers are assured both that + the placement API contains a record of the provider and the local cache + of resource provider information contains a record of the provider. :param uuid: UUID identifier for the resource provider to ensure exists :param name: Optional name for the resource provider if the record @@ -502,10 +513,8 @@ rp = self._get_resource_provider(uuid) if rp is None: - name = name or uuid - rp = self._create_resource_provider(uuid, name) - if rp is None: - return + rp = self._create_resource_provider(uuid, name or uuid) + msg = "Grabbing aggregate associations for resource provider %s" LOG.debug(msg, uuid) aggs = self._get_provider_aggregates(uuid) diff -Nru nova-16.1.0/nova/scheduler/filters/isolated_hosts_filter.py nova-16.1.2/nova/scheduler/filters/isolated_hosts_filter.py --- nova-16.1.0/nova/scheduler/filters/isolated_hosts_filter.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/scheduler/filters/isolated_hosts_filter.py 2018-04-25 09:22:25.000000000 +0000 @@ -59,7 +59,10 @@ return ((not restrict_isolated_hosts_to_isolated_images) or (host_state.host not in isolated_hosts)) - image_ref = spec_obj.image.id if spec_obj.image else None + # Check to see if the image id is set since volume-backed instances + # can be created without an imageRef in the server create request. + image_ref = spec_obj.image.id \ + if spec_obj.image and 'id' in spec_obj.image else None image_isolated = image_ref in isolated_images host_isolated = host_state.host in isolated_hosts diff -Nru nova-16.1.0/nova/tests/fixtures.py nova-16.1.2/nova/tests/fixtures.py --- nova-16.1.0/nova/tests/fixtures.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/fixtures.py 2018-04-25 09:22:33.000000000 +0000 @@ -1280,6 +1280,7 @@ self.swap_error = False self.swap_volume_instance_uuid = None self.swap_volume_instance_error_uuid = None + self.reserved_volumes = list() # This is a map of instance UUIDs mapped to a list of volume IDs. # This map gets updated on attach/detach operations. self.attachments = collections.defaultdict(list) @@ -1338,20 +1339,15 @@ break else: # This is a test that does not care about the actual details. 
+ reserved_volume = (volume_id in self.reserved_volumes) volume = { - 'status': 'available', + 'status': 'attaching' if reserved_volume else 'available', 'display_name': 'TEST2', 'attach_status': 'detached', 'id': volume_id, 'size': 1 } - # update the status based on existing attachments - has_attachment = any( - [volume['id'] in attachments - for attachments in self.attachments.values()]) - volume['status'] = 'attached' if has_attachment else 'detached' - # Check for our special image-backed volume. if volume_id == self.IMAGE_BACKED_VOL: # Make it a bootable volume. @@ -1374,7 +1370,16 @@ new_volume_id, error): return {'save_volume_id': new_volume_id} + def fake_reserve_volume(self_api, context, volume_id): + self.reserved_volumes.append(volume_id) + def fake_unreserve_volume(self_api, context, volume_id): + # NOTE(mnaser): It's possible that we unreserve a volume that was + # never reserved (ex: instance.volume_attach.error + # notification tests) + if volume_id in self.reserved_volumes: + self.reserved_volumes.remove(volume_id) + # Signaling that swap_volume has encountered the error # from initialize_connection and is working on rolling back # the reservation on SWAP_ERR_NEW_VOL. @@ -1396,6 +1401,12 @@ def fake_detach(_self, context, volume_id, instance_uuid=None, attachment_id=None): + # NOTE(mnaser): It's possible that we unreserve a volume that was + # never reserved (ex: instance.volume_attach.error + # notification tests) + if volume_id in self.reserved_volumes: + self.reserved_volumes.remove(volume_id) + if instance_uuid is not None: # If the volume isn't attached to this instance it will # result in a ValueError which indicates a broken test or @@ -1419,7 +1430,7 @@ 'nova.volume.cinder.API.migrate_volume_completion', fake_migrate_volume_completion) self.test.stub_out('nova.volume.cinder.API.reserve_volume', - lambda *args, **kwargs: None) + fake_reserve_volume) self.test.stub_out('nova.volume.cinder.API.roll_detaching', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.API.terminate_connection', diff -Nru nova-16.1.0/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json.tpl nova-16.1.2/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json.tpl --- nova-16.1.0/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json.tpl 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-detail-resp.json.tpl 2018-04-25 09:22:33.000000000 +0000 @@ -22,7 +22,7 @@ "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, - "hypervisor_hostname": "fake-mini", + "hypervisor_hostname": "host1", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": %(hypervisor_id)s, diff -Nru nova-16.1.0/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json.tpl nova-16.1.2/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json.tpl --- nova-16.1.0/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json.tpl 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.33/hypervisors-list-resp.json.tpl 2018-04-25 09:22:25.000000000 +0000 @@ -1,7 +1,7 @@ { "hypervisors": [ { - "hypervisor_hostname": "fake-mini", + "hypervisor_hostname": "host1", "id": 2, "state": "up", "status": 
"enabled" diff -Nru nova-16.1.0/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json.tpl nova-16.1.2/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json.tpl --- nova-16.1.0/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json.tpl 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/api_sample_tests/api_samples/os-hypervisors/v2.53/hypervisors-detail-resp.json.tpl 2018-04-25 09:22:33.000000000 +0000 @@ -22,7 +22,7 @@ "host_ip": "%(ip)s", "free_disk_gb": 1028, "free_ram_mb": 7680, - "hypervisor_hostname": "fake-mini", + "hypervisor_hostname": "host2", "hypervisor_type": "fake", "hypervisor_version": 1000, "id": "%(hypervisor_id)s", diff -Nru nova-16.1.0/nova/tests/functional/api_sample_tests/test_hypervisors.py nova-16.1.2/nova/tests/functional/api_sample_tests/test_hypervisors.py --- nova-16.1.0/nova/tests/functional/api_sample_tests/test_hypervisors.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/api_sample_tests/test_hypervisors.py 2018-04-25 09:22:25.000000000 +0000 @@ -18,6 +18,7 @@ from nova.cells import utils as cells_utils from nova import objects from nova.tests.functional.api_sample_tests import api_sample_base +from nova.virt import fake class HypervisorsSampleJsonTests(api_sample_base.ApiSampleTestBaseV21): @@ -155,7 +156,10 @@ self.api.microversion = self.microversion # Start a new compute service to fake a record with hypervisor id=2 # for pagination test. - self.start_service('compute', host='host1') + host = 'host1' + fake.set_nodes([host]) + self.addCleanup(fake.restore_nodes) + self.start_service('compute', host=host) def test_hypervisors_list(self): response = self._do_get('os-hypervisors?limit=1&marker=1') @@ -200,7 +204,10 @@ def test_hypervisors_detail(self): # Start another compute service to get a 2nd compute for paging tests. 
- service_2 = self.start_service('compute', host='host2').service_ref + host = 'host2' + fake.set_nodes([host]) + self.addCleanup(fake.restore_nodes) + service_2 = self.start_service('compute', host=host).service_ref compute_node_2 = service_2.compute_node marker = self.compute_node_1.uuid subs = { diff -Nru nova-16.1.0/nova/tests/functional/db/test_resource_provider.py nova-16.1.2/nova/tests/functional/db/test_resource_provider.py --- nova-16.1.0/nova/tests/functional/db/test_resource_provider.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/db/test_resource_provider.py 2018-04-25 09:22:33.000000000 +0000 @@ -841,7 +841,6 @@ disk_allocation.used) self.assertEqual(DISK_ALLOCATION['consumer_id'], disk_allocation.consumer_id) - self.assertIsInstance(disk_allocation.id, int) allocations = objects.AllocationList.get_all_by_resource_provider_uuid( self.ctx, resource_provider.uuid) @@ -997,12 +996,13 @@ allocations = objects.AllocationList.get_all_by_resource_provider_uuid( self.ctx, rp.uuid) self.assertEqual(1, len(allocations)) - objects.Allocation._destroy(self.ctx, allocation.id) + allocation_id = allocations[0].id + objects.Allocation._destroy(self.ctx, allocation_id) allocations = objects.AllocationList.get_all_by_resource_provider_uuid( self.ctx, rp.uuid) self.assertEqual(0, len(allocations)) self.assertRaises(exception.NotFound, objects.Allocation._destroy, - self.ctx, allocation.id) + self.ctx, allocation_id) def test_get_allocations_from_db(self): rp, allocation = self._make_allocation() diff -Nru nova-16.1.0/nova/tests/functional/regressions/test_bug_1404867.py nova-16.1.2/nova/tests/functional/regressions/test_bug_1404867.py --- nova-16.1.0/nova/tests/functional/regressions/test_bug_1404867.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/regressions/test_bug_1404867.py 2018-04-25 09:22:33.000000000 +0000 @@ -0,0 +1,83 @@ +# Copyright 2018 VEXXHOST, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova.compute import api as compute_api +from nova.tests import fixtures as nova_fixtures +from nova.tests.functional import integrated_helpers + + +class DeleteWithReservedVolumes(integrated_helpers._IntegratedTestBase, + integrated_helpers.InstanceHelperMixin): + """Test deleting of an instance in error state that has a reserved volume. + + This test boots a server from volume which will fail to be scheduled, + ending up in ERROR state with no host assigned and then deletes the server. + + Since the server failed to be scheduled, a local delete should run which + will make sure that reserved volumes at the API layer are properly cleaned + up. + + The regression is that Nova would not clean up the reserved volumes and + the volume would be stuck in 'attaching' state. 
+ """ + api_major_version = 'v2.1' + microversion = 'latest' + + def _setup_compute_service(self): + # Override `_setup_compute_service` to make sure that we do not start + # up the compute service, making sure that the instance will end up + # failing to find a valid host. + pass + + def _create_error_server(self, volume_id): + server = self.api.post_server({ + 'server': { + 'flavorRef': '1', + 'name': 'bfv-delete-server-in-error-status', + 'networks': 'none', + 'block_device_mapping_v2': [ + { + 'boot_index': 0, + 'uuid': volume_id, + 'source_type': 'volume', + 'destination_type': 'volume' + }, + ] + } + }) + return self._wait_for_state_change(self.api, server, 'ERROR') + + @mock.patch('nova.objects.service.get_minimum_version_all_cells', + return_value=compute_api.BFV_RESERVE_MIN_COMPUTE_VERSION) + def test_delete_with_reserved_volumes(self, mock_version_get=None): + self.cinder = self.useFixture(nova_fixtures.CinderFixture(self)) + + # Create a server which should go to ERROR state because we don't + # have any active computes. + volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL + server = self._create_error_server(volume_id) + + # The status of the volume at this point should be 'attaching' as it + # is reserved by Nova by the API. + self.assertIn(volume_id, self.cinder.reserved_volumes) + + # Delete this server, which should delete BDMs and remove the + # reservation on the instances. + self.api.delete_server(server['id']) + + # The volume should no longer be reserved as the deletion of the + # server should have released all the resources. + self.assertNotIn(volume_id, self.cinder.reserved_volumes) diff -Nru nova-16.1.0/nova/tests/functional/regressions/test_bug_1670627.py nova-16.1.2/nova/tests/functional/regressions/test_bug_1670627.py --- nova-16.1.0/nova/tests/functional/regressions/test_bug_1670627.py 2018-02-15 23:54:41.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/regressions/test_bug_1670627.py 2018-04-25 09:22:25.000000000 +0000 @@ -59,6 +59,7 @@ self.start_service('conductor') self.start_service('scheduler') + self.start_service('consoleauth') # We don't actually start a compute service; this way we don't have any # compute hosts to schedule the instance to and will go into error and diff -Nru nova-16.1.0/nova/tests/functional/regressions/test_bug_1689692.py nova-16.1.2/nova/tests/functional/regressions/test_bug_1689692.py --- nova-16.1.0/nova/tests/functional/regressions/test_bug_1689692.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/regressions/test_bug_1689692.py 2018-04-25 09:22:33.000000000 +0000 @@ -54,6 +54,7 @@ self.start_service('conductor') self.flags(driver='chance_scheduler', group='scheduler') self.start_service('scheduler') + self.start_service('consoleauth') # We don't start the compute service because we want NoValidHost so # all of the instances go into ERROR state and get put into cell0. self.useFixture(cast_as_call.CastAsCall(self)) diff -Nru nova-16.1.0/nova/tests/functional/regressions/test_bug_1718512.py nova-16.1.2/nova/tests/functional/regressions/test_bug_1718512.py --- nova-16.1.0/nova/tests/functional/regressions/test_bug_1718512.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/regressions/test_bug_1718512.py 2018-04-25 09:22:33.000000000 +0000 @@ -0,0 +1,155 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.compute import manager as compute_manager +from nova import context as nova_context +from nova import objects +from nova.scheduler import weights +from nova import test +from nova.tests import fixtures as nova_fixtures +from nova.tests.functional import integrated_helpers +from nova.tests.unit.image import fake as image_fake +from nova.tests.unit import policy_fixture +from nova.virt import fake + + +class HostNameWeigher(weights.BaseHostWeigher): + def _weigh_object(self, host_state, weight_properties): + """Arbitrary preferring host1 over host2 over host3.""" + weights = {'host1': 100, 'host2': 50, 'host3': 1} + return weights.get(host_state.host, 0) + + +class TestRequestSpecRetryReschedule(test.TestCase, + integrated_helpers.InstanceHelperMixin): + """Regression test for bug 1718512 introduced in Newton. + + Contains a test for a regression where an instance builds on one host, + then is resized. During the resize, the first attempted host fails and + the resize is rescheduled to another host which passes. The failed host + is persisted in the RequestSpec.retry field by mistake. Then later when + trying to live migrate the instance to the same host that failed during + resize, it is rejected by the RetryFilter because it's already in the + RequestSpec.retry field. + """ + def setUp(self): + super(TestRequestSpecRetryReschedule, self).setUp() + self.useFixture(policy_fixture.RealPolicyFixture()) + + # The NeutronFixture is needed to stub out validate_networks in API. + self.useFixture(nova_fixtures.NeutronFixture(self)) + + # We need the computes reporting into placement for the filter + # scheduler to pick a host. + self.useFixture(nova_fixtures.PlacementFixture()) + + api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( + api_version='v2.1')) + # The admin API is used to get the server details to verify the + # host on which the server was built. + self.admin_api = api_fixture.admin_api + self.api = api_fixture.api + + # the image fake backend needed for image discovery + image_fake.stub_out_image_service(self) + self.addCleanup(image_fake.FakeImageService_reset) + + self.start_service('conductor') + + # We have to get the image before we use 2.latest otherwise we'll get + # a 404 on the /images proxy API because of 2.36. + self.image_id = self.api.get_images()[0]['id'] + + # Use the latest microversion available to make sure something does + # not regress in new microversions; cap as necessary. + self.admin_api.microversion = 'latest' + self.api.microversion = 'latest' + + # The consoleauth service is needed for deleting console tokens when + # the server is deleted. + self.start_service('consoleauth') + + # Use our custom weigher defined above to make sure that we have + # a predictable scheduling sort order. + self.flags(weight_classes=[__name__ + '.HostNameWeigher'], + group='filter_scheduler') + self.start_service('scheduler') + + # Let's now start three compute nodes as we said above. 
+ for host in ['host1', 'host2', 'host3']: + fake.set_nodes([host]) + self.addCleanup(fake.restore_nodes) + self.start_service('compute', host=host) + + def _stub_resize_failure(self, failed_host): + actual_prep_resize = compute_manager.ComputeManager._prep_resize + + def fake_prep_resize(_self, *args, **kwargs): + if _self.host == failed_host: + raise Exception('%s:fake_prep_resize' % failed_host) + actual_prep_resize(_self, *args, **kwargs) + self.stub_out('nova.compute.manager.ComputeManager._prep_resize', + fake_prep_resize) + + def test_resize_with_reschedule_then_live_migrate(self): + """Tests the following scenario: + + - Server is created on host1 successfully. + - Server is resized; host2 is tried and fails, and rescheduled to + host3. + - Then try to live migrate the instance to host2 which should work. + """ + flavors = self.api.get_flavors() + flavor1 = flavors[0] + flavor2 = flavors[1] + if flavor1["disk"] > flavor2["disk"]: + # Make sure that flavor1 is smaller + flavor1, flavor2 = flavor2, flavor1 + + # create the instance which should go to host1 + server = self.admin_api.post_server( + dict(server=self._build_minimal_create_server_request( + self.api, 'test_resize_with_reschedule_then_live_migrate', + self.image_id, flavor_id=flavor1['id'], networks='none'))) + server = self._wait_for_state_change(self.admin_api, server, 'ACTIVE') + self.assertEqual('host1', server['OS-EXT-SRV-ATTR:host']) + + # Stub out the resize to fail on host2, which will trigger a reschedule + # to host3. + self._stub_resize_failure('host2') + + # Resize the server to flavor2, which should make it ultimately end up + # on host3. + data = {'resize': {'flavorRef': flavor2['id']}} + self.api.post_server_action(server['id'], data) + server = self._wait_for_state_change(self.admin_api, server, + 'VERIFY_RESIZE') + self.assertEqual('host3', server['OS-EXT-SRV-ATTR:host']) + self.api.post_server_action(server['id'], {'confirmResize': None}, + check_response_status=[204]) + server = self._wait_for_state_change(self.admin_api, server, 'ACTIVE') + + # Now live migrate the server to host2 specifically, which previously + # failed the resize attempt but here it should pass. + data = {'os-migrateLive': {'host': 'host2', 'block_migration': 'auto'}} + self.admin_api.post_server_action(server['id'], data) + server = self._wait_for_state_change(self.admin_api, server, 'ACTIVE') + self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host']) + # NOTE(mriedem): The instance status effectively goes to ACTIVE before + # the migration status is changed to "completed" since + # post_live_migration_at_destination changes the instance status + # and _post_live_migration changes the migration status later. So we + # need to poll the migration record until it's complete or we timeout. + self._wait_for_migration_status(server, 'completed') + reqspec = objects.RequestSpec.get_by_instance_uuid( + nova_context.get_admin_context(), server['id']) + self.assertIsNone(reqspec.retry) diff -Nru nova-16.1.0/nova/tests/functional/regressions/test_bug_1746483.py nova-16.1.2/nova/tests/functional/regressions/test_bug_1746483.py --- nova-16.1.0/nova/tests/functional/regressions/test_bug_1746483.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/regressions/test_bug_1746483.py 2018-04-25 09:22:33.000000000 +0000 @@ -0,0 +1,102 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from nova import config +from nova import test +from nova.tests import fixtures as nova_fixtures +from nova.tests.functional import integrated_helpers +from nova.tests.unit.image import fake as image_fakes +from nova.tests.unit import policy_fixture +from nova import utils +from nova.virt import fake + +CONF = config.CONF + + +class TestBootFromVolumeIsolatedHostsFilter( + test.TestCase, integrated_helpers.InstanceHelperMixin): + """Regression test for bug #1746483 + + The IsolatedHostsFilter checks for images restricted to certain hosts via + config options. When creating a server from a root volume, the image is + in the volume (and it's related metadata from Cinder). When creating a + volume-backed server, the imageRef is not required. + + The regression is that the RequestSpec.image.id field is not set and the + IsolatedHostsFilter blows up trying to load the image id. + """ + def setUp(self): + super(TestBootFromVolumeIsolatedHostsFilter, self).setUp() + + self.useFixture(policy_fixture.RealPolicyFixture()) + self.useFixture(nova_fixtures.NeutronFixture(self)) + self.useFixture(nova_fixtures.CinderFixture(self)) + self.useFixture(nova_fixtures.PlacementFixture()) + + api_fixture = self.useFixture(nova_fixtures.OSAPIFixture( + api_version='v2.1')) + + self.api = api_fixture.admin_api + + image_fakes.stub_out_image_service(self) + self.addCleanup(image_fakes.FakeImageService_reset) + + self.start_service('conductor') + + # Add the IsolatedHostsFilter to the list of enabled filters since it + # is not enabled by default. + enabled_filters = CONF.filter_scheduler.enabled_filters + enabled_filters.append('IsolatedHostsFilter') + self.flags( + enabled_filters=enabled_filters, + isolated_images=[image_fakes.AUTO_DISK_CONFIG_ENABLED_IMAGE_UUID], + isolated_hosts=['host1'], + restrict_isolated_hosts_to_isolated_images=True, + group='filter_scheduler') + self.start_service('scheduler') + + # Create two compute nodes/services so we can restrict the image + # we'll use to one of the hosts. + for host in ('host1', 'host2'): + fake.set_nodes([host]) + self.addCleanup(fake.restore_nodes) + self.start_service('compute', host=host) + + def test_boot_from_volume_with_isolated_image(self): + # Create our server without networking just to keep things simple. + image_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL + server_req_body = { + # There is no imageRef because this is boot from volume. + 'server': { + 'flavorRef': '1', # m1.tiny from DefaultFlavorsFixture, + 'name': 'test_boot_from_volume_with_isolated_image', + 'networks': 'none', + 'block_device_mapping_v2': [{ + 'boot_index': 0, + 'uuid': image_id, + 'source_type': 'volume', + 'destination_type': 'volume' + }] + } + } + # Note that we're using v2.1 by default but need v2.37 to use + # networks='none'. 
+ with utils.temporary_mutation(self.api, microversion='2.37'): + server = self.api.post_server(server_req_body) + server = self._wait_for_state_change(self.api, server, 'ACTIVE') + # NOTE(mriedem): The instance is successfully scheduled but since + # the image_id from the volume_image_metadata isn't stored in the + # RequestSpec.image.id, and restrict_isolated_hosts_to_isolated_images + # is True, the isolated host (host1) is filtered out because the + # filter doesn't have enough information to know if the image within + # the volume can be used on that host. + self.assertEqual('host2', server['OS-EXT-SRV-ATTR:host']) diff -Nru nova-16.1.0/nova/tests/functional/regressions/test_bug_1746509.py nova-16.1.2/nova/tests/functional/regressions/test_bug_1746509.py --- nova-16.1.0/nova/tests/functional/regressions/test_bug_1746509.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/regressions/test_bug_1746509.py 2018-04-25 09:22:25.000000000 +0000 @@ -0,0 +1,62 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import nova.context +from nova import db +from nova import objects +from nova import test + + +class InstanceListWithServicesTestCase(test.TestCase): + """Test the scenario listing instances whose services may not have a UUID. + + We added a UUID field to the 'services' table in Pike, so after an upgrade, + we generate a UUID and save it to the service record upon access if it's an + older record that does not already have a UUID. + + Now, when we list instances through the API, the instances query is joined + with the 'services' table in order to provide service-related information. + The bug in Pike was that we were already inside of a 'reader' database + transaction context when we tried to compile the instance list, which + eventually led to an error: + + TypeError: Can't upgrade a READER transaction to a WRITER mid-transaction + + because we also tried to write the newly generated service UUID to the + service record while still nested under the 'reader' context. + + This test verifies that we are able to list instances joined with services + even if the associated service record does not yet have a UUID. + """ + def setUp(self): + super(InstanceListWithServicesTestCase, self).setUp() + self.context = nova.context.RequestContext('fake-user', 'fake-project') + + def test_instance_list_service_with_no_uuid(self): + # Create a nova-compute service record with a host that will match the + # instance's host, with no uuid. We can't do this through the + # Service object because it will automatically generate a uuid. 
+ service = db.service_create(self.context, {'host': 'fake-host', + 'binary': 'nova-compute'}) + self.assertIsNone(service['uuid']) + + # Create an instance whose host will match the service with no uuid + inst = objects.Instance(context=self.context, + project_id=self.context.project_id, + host='fake-host') + inst.create() + + insts = objects.InstanceList.get_by_filters( + self.context, {}, expected_attrs=['services']) + self.assertEqual(1, len(insts)) + self.assertEqual(1, len(insts[0].services)) + self.assertIsNotNone(insts[0].services[0].uuid) diff -Nru nova-16.1.0/nova/tests/functional/test_list_servers_ip_filter.py nova-16.1.2/nova/tests/functional/test_list_servers_ip_filter.py --- nova-16.1.0/nova/tests/functional/test_list_servers_ip_filter.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/test_list_servers_ip_filter.py 2018-04-25 09:22:33.000000000 +0000 @@ -37,6 +37,8 @@ # the image fake backend needed for image discovery nova.tests.unit.image.fake.stub_out_image_service(self) + self.useFixture(nova_fixtures.PlacementFixture()) + self.start_service('conductor') # Use the chance scheduler to bypass filtering and just pick the single # compute host that we have. @@ -46,7 +48,6 @@ self.start_service('consoleauth') self.useFixture(cast_as_call.CastAsCall(self)) - self.useFixture(nova_fixtures.PlacementFixture()) self.image_id = self.api.get_images()[0]['id'] self.flavor_id = self.api.get_flavors()[0]['id'] diff -Nru nova-16.1.0/nova/tests/functional/test_servers.py nova-16.1.2/nova/tests/functional/test_servers.py --- nova-16.1.0/nova/tests/functional/test_servers.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/test_servers.py 2018-04-25 09:22:33.000000000 +0000 @@ -21,6 +21,7 @@ from oslo_log import log as logging from oslo_serialization import base64 from oslo_utils import timeutils +import six from nova.compute import api as compute_api from nova.compute import instance_actions @@ -1093,11 +1094,12 @@ self.flags(host='host2') self.compute2 = self.start_service('compute', host='host2') + # We hard-code from a fake image since we can't get images + # via the compute /images proxy API with microversion > 2.35. + original_image_ref = '155d900f-4e14-4e4c-a73d-069cbf4541e6' server_req_body = { 'server': { - # We hard-code from a fake image since we can't get images - # via the compute /images proxy API with microversion > 2.35. - 'imageRef': '155d900f-4e14-4e4c-a73d-069cbf4541e6', + 'imageRef': original_image_ref, 'flavorRef': '1', # m1.tiny from DefaultFlavorsFixture, 'name': 'test_rebuild_with_image_novalidhost', # We don't care about networking for this test. This requires @@ -1140,16 +1142,27 @@ # Before microversion 2.51 events are only returned for instance # actions if you're an admin. self.api_fixture.admin_api) - # Unfortunately the server's image_ref is updated to be the new image - # even though the rebuild should not work. + # Assert the server image_ref was rolled back on failure. server = self.api.get_server(server['id']) - self.assertEqual(rebuild_image_ref, server['image']['id']) + self.assertEqual(original_image_ref, server['image']['id']) # The server should be in ERROR state self.assertEqual('ERROR', server['status']) self.assertIn('No valid host', server['fault']['message']) - def test_rebuild_with_new_image(self): + # Rebuild it again with the same bad image to make sure it's rejected + # again. 
Since we're using CastAsCall here, there is no 202 from the + # API, and the exception from conductor gets passed back through the + # API. + ex = self.assertRaises( + client.OpenStackApiException, self.api.api_post, + '/servers/%s/action' % server['id'], rebuild_req_body) + self.assertIn('NoValidHost', six.text_type(ex)) + + # A rebuild to the same host should never attempt a rebuild claim. + @mock.patch('nova.compute.resource_tracker.ResourceTracker.rebuild_claim', + new_callable=mock.NonCallableMock) + def test_rebuild_with_new_image(self, mock_rebuild_claim): """Rebuilds a server with a different image which will run it through the scheduler to validate the image is still OK with the compute host that the instance is running on. @@ -2492,9 +2505,10 @@ source_usages = self._get_provider_usages(source_rp_uuid) self.assertFlavorMatchesAllocation(self.flavor1, source_usages) - def test_resize_to_same_host_prep_resize_fails(self): + def _test_resize_to_same_host_instance_fails(self, failing_method, + event_name): """Tests that when we resize to the same host and resize fails in - the prep_resize method, we cleanup the allocations before rescheduling. + the given method, we cleanup the allocations before rescheduling. """ # make sure that the test only uses a single host compute2_service_id = self.admin_api.get_services( @@ -2506,16 +2520,17 @@ server = self._boot_and_check_allocations(self.flavor1, hostname) - def fake_prep_resize(*args, **kwargs): + def fake_resize_method(*args, **kwargs): # Ensure the allocations are doubled now before we fail. usages = self._get_provider_usages(rp_uuid) self.assertFlavorsMatchAllocation( self.flavor1, self.flavor2, usages) - raise test.TestingException('Simulated _prep_resize failure.') + raise test.TestingException('Simulated resize failure.') # Yes this isn't great in a functional test, but it's simple. - self.stub_out('nova.compute.manager.ComputeManager._prep_resize', - fake_prep_resize) + self.stub_out( + 'nova.compute.manager.ComputeManager.%s' % failing_method, + fake_resize_method) self.flags(allow_resize_to_same_host=True) resize_req = { @@ -2526,7 +2541,7 @@ self.api.post_server_action(server['id'], resize_req) self._wait_for_action_fail_completion( - server, instance_actions.RESIZE, 'compute_prep_resize') + server, instance_actions.RESIZE, event_name) # Ensure the allocation records still exist on the host. source_rp_uuid = self._get_provider_uuid_by_host(hostname) @@ -2535,6 +2550,18 @@ # allocation which just leaves us with the original flavor. 
self.assertFlavorMatchesAllocation(self.flavor1, source_usages) + def test_resize_to_same_host_prep_resize_fails(self): + self._test_resize_to_same_host_instance_fails( + '_prep_resize', 'compute_prep_resize') + + def test_resize_instance_fails_allocation_cleanup(self): + self._test_resize_to_same_host_instance_fails( + '_resize_instance', 'compute_resize_instance') + + def test_finish_resize_fails_allocation_cleanup(self): + self._test_resize_to_same_host_instance_fails( + '_finish_resize', 'compute_finish_resize') + def _mock_live_migration(self, context, instance, dest, post_method, recover_method, block_migration=False, migrate_data=None): diff -Nru nova-16.1.0/nova/tests/functional/wsgi/test_servers.py nova-16.1.2/nova/tests/functional/wsgi/test_servers.py --- nova-16.1.0/nova/tests/functional/wsgi/test_servers.py 2018-02-15 23:54:41.000000000 +0000 +++ nova-16.1.2/nova/tests/functional/wsgi/test_servers.py 2018-04-25 09:22:33.000000000 +0000 @@ -10,6 +10,9 @@ # License for the specific language governing permissions and limitations # under the License. +import mock + +from nova.compute import api as compute_api from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit.image import fake as fake_image @@ -237,3 +240,38 @@ 'servers/detail?not-tags-any=tag1,tag3') list_resp = list_resp.body['servers'] self.assertEqual(0, len(list_resp)) + + @mock.patch('nova.objects.service.get_minimum_version_all_cells', + return_value=compute_api.BFV_RESERVE_MIN_COMPUTE_VERSION) + def test_bfv_delete_build_request_pre_scheduling_ocata(self, mock_get): + cinder = self.useFixture(nova_fixtures.CinderFixture(self)) + + volume_id = nova_fixtures.CinderFixture.IMAGE_BACKED_VOL + server = self.api.post_server({ + 'server': { + 'flavorRef': '1', + 'name': 'test_bfv_delete_build_request_pre_scheduling', + 'networks': 'none', + 'block_device_mapping_v2': [ + { + 'boot_index': 0, + 'uuid': volume_id, + 'source_type': 'volume', + 'destination_type': 'volume' + }, + ] + } + }) + + # Since _IntegratedTestBase uses the CastAsCall fixture, when we + # get the server back we know all of the volume stuff should be done. + self.assertIn(volume_id, cinder.reserved_volumes) + + # Now delete the server, which should go through the "local delete" + # code in the API, find the build request and delete it along with + # detaching the volume from the instance. + self.api.delete_server(server['id']) + + # The volume should no longer have any attachments as instance delete + # should have removed them. 
+ self.assertNotIn(volume_id, cinder.reserved_volumes) diff -Nru nova-16.1.0/nova/tests/unit/api/openstack/compute/test_migrate_server.py nova-16.1.2/nova/tests/unit/api/openstack/compute/test_migrate_server.py --- nova-16.1.0/nova/tests/unit/api/openstack/compute/test_migrate_server.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/api/openstack/compute/test_migrate_server.py 2018-04-25 09:22:33.000000000 +0000 @@ -421,8 +421,27 @@ def test_migrate_live_migration_with_old_nova_not_supported(self): pass + def test_migrate_live_compute_host_not_found(self): + exc = exception.ComputeHostNotFound( + reason="Compute host %(host)s could not be found.", + host='hostname') + self.mox.StubOutWithMock(self.compute_api, 'live_migrate') + instance = self._stub_instance_get() + self.compute_api.live_migrate(self.context, instance, None, + self.disk_over_commit, 'hostname', + self.force, self.async).AndRaise(exc) + + self.mox.ReplayAll() + body = {'os-migrateLive': + {'host': 'hostname', 'block_migration': 'auto'}} + + self.assertRaises(webob.exc.HTTPBadRequest, + self.controller._migrate_live, + self.req, instance.uuid, body=body) + def test_migrate_live_unexpected_error(self): - exc = exception.NoValidHost(reason="No valid host found") + exc = exception.InvalidHypervisorType( + reason="The supplied hypervisor type of is invalid.") self.mox.StubOutWithMock(self.compute_api, 'live_migrate') instance = self._stub_instance_get() self.compute_api.live_migrate(self.context, instance, None, diff -Nru nova-16.1.0/nova/tests/unit/api/openstack/placement/test_deploy.py nova-16.1.2/nova/tests/unit/api/openstack/placement/test_deploy.py --- nova-16.1.0/nova/tests/unit/api/openstack/placement/test_deploy.py 1970-01-01 00:00:00.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/api/openstack/placement/test_deploy.py 2018-04-25 09:22:33.000000000 +0000 @@ -0,0 +1,43 @@ +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +"""Unit tests for the deply function used to build the Placement service.""" + +from oslo_config import cfg +import webob + +from nova.api.openstack.placement import deploy +from nova import test + + +CONF = cfg.CONF + + +class DeployTest(test.NoDBTestCase): + + def test_auth_middleware_factory(self): + """Make sure that configuration settings make their way to + the keystone middleware correctly. 
+ """ + auth_uri = 'http://example.com/identity' + authenticate_header_value = "Keystone uri='%s'" % auth_uri + self.flags(auth_uri=auth_uri, group='keystone_authtoken') + # ensure that the auth_token middleware is chosen + self.flags(auth_strategy='keystone', group='api') + app = deploy.deploy(CONF, 'nova') + req = webob.Request.blank('/', method="GET") + + response = req.get_response(app) + + self.assertEqual(authenticate_header_value, + response.headers['www-authenticate']) diff -Nru nova-16.1.0/nova/tests/unit/compute/test_compute_api.py nova-16.1.2/nova/tests/unit/compute/test_compute_api.py --- nova-16.1.0/nova/tests/unit/compute/test_compute_api.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/compute/test_compute_api.py 2018-04-25 09:22:33.000000000 +0000 @@ -952,7 +952,7 @@ if self.cell_type != 'api': if inst.vm_state == vm_states.RESIZED: self._test_delete_resized_part(inst) - if inst.vm_state != vm_states.SHELVED_OFFLOADED: + if inst.host is not None: self.context.elevated().AndReturn(self.context) objects.Service.get_by_compute_host(self.context, inst.host).AndReturn(objects.Service()) @@ -960,9 +960,7 @@ mox.IsA(objects.Service)).AndReturn( inst.host != 'down-host') - if (inst.host == 'down-host' or - inst.vm_state == vm_states.SHELVED_OFFLOADED): - + if inst.host == 'down-host' or inst.host is None: self._test_downed_host_part(inst, updates, delete_time, delete_type) cast = False @@ -1052,6 +1050,81 @@ system_metadata=fake_sys_meta) self._test_delete('force_delete', vm_state=vm_state) + def test_delete_forced_when_task_state_is_not_none(self): + for vm_state in self._get_vm_states(): + self._test_delete('force_delete', vm_state=vm_state, + task_state=task_states.RESIZE_MIGRATING) + + @mock.patch('nova.compute.api.API._delete_while_booting', + return_value=False) + @mock.patch('nova.compute.api.API._lookup_instance') + @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') + @mock.patch('nova.objects.Instance.save') + @mock.patch('nova.compute.utils.notify_about_instance_usage') + @mock.patch('nova.objects.Service.get_by_compute_host') + @mock.patch('nova.compute.api.API._local_delete') + def test_delete_error_state_with_no_host( + self, mock_local_delete, mock_service_get, _mock_notify, + _mock_save, mock_bdm_get, mock_lookup, _mock_del_booting): + # Instance in error state with no host should be a local delete + # for non API cells + inst = self._create_instance_obj(params=dict(vm_state=vm_states.ERROR, + host=None)) + mock_lookup.return_value = None, inst + with mock.patch.object(self.compute_api.compute_rpcapi, + 'terminate_instance') as mock_terminate: + self.compute_api.delete(self.context, inst) + if self.cell_type == 'api': + mock_terminate.assert_called_once_with( + self.context, inst, mock_bdm_get.return_value, + delete_type='delete') + mock_local_delete.assert_not_called() + else: + mock_local_delete.assert_called_once_with( + self.context, inst, mock_bdm_get.return_value, + 'delete', self.compute_api._do_delete) + mock_terminate.assert_not_called() + mock_service_get.assert_not_called() + + @mock.patch('nova.compute.api.API._delete_while_booting', + return_value=False) + @mock.patch('nova.compute.api.API._lookup_instance') + @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') + @mock.patch('nova.objects.Instance.save') + @mock.patch('nova.compute.utils.notify_about_instance_usage') + @mock.patch('nova.objects.Service.get_by_compute_host') + @mock.patch('nova.context.RequestContext.elevated') + 
@mock.patch('nova.servicegroup.api.API.service_is_up', return_value=True) + @mock.patch('nova.compute.api.API._record_action_start') + @mock.patch('nova.compute.api.API._local_delete') + def test_delete_error_state_with_host_set( + self, mock_local_delete, _mock_record, mock_service_up, + mock_elevated, mock_service_get, _mock_notify, _mock_save, + mock_bdm_get, mock_lookup, _mock_del_booting): + # Instance in error state with host set should be a non-local delete + # for non API cells if the service is up + inst = self._create_instance_obj(params=dict(vm_state=vm_states.ERROR, + host='fake-host')) + mock_lookup.return_value = inst + with mock.patch.object(self.compute_api.compute_rpcapi, + 'terminate_instance') as mock_terminate: + self.compute_api.delete(self.context, inst) + if self.cell_type == 'api': + mock_terminate.assert_called_once_with( + self.context, inst, mock_bdm_get.return_value, + delete_type='delete') + mock_local_delete.assert_not_called() + mock_service_get.assert_not_called() + else: + mock_service_get.assert_called_once_with( + mock_elevated.return_value, 'fake-host') + mock_service_up.assert_called_once_with( + mock_service_get.return_value) + mock_terminate.assert_called_once_with( + self.context, inst, mock_bdm_get.return_value, + delete_type='delete') + mock_local_delete.assert_not_called() + def test_delete_fast_if_host_not_set(self): self.useFixture(fixtures.AllServicesCurrent()) inst = self._create_instance_obj() @@ -1257,6 +1330,52 @@ self.assertIsNone( self.compute_api._get_stashed_volume_connector(bdm, inst)) + @mock.patch.object(objects.BlockDeviceMapping, 'destroy') + def test_local_cleanup_bdm_volumes_stashed_connector_host_none( + self, mock_destroy): + """Tests that we call volume_api.terminate_connection when we found + a stashed connector in the bdm.connection_info dict. + + This tests the case where: + + 1) the instance host is None + 2) the instance vm_state is one where we expect host to be None + + We allow a mismatch of the host in this situation if the instance is + in a state where we expect its host to have been set to None, such + as ERROR or SHELVED_OFFLOADED. 
+ """ + params = dict(host=None, vm_state=vm_states.ERROR) + inst = self._create_instance_obj(params=params) + conn_info = {'connector': {'host': 'orig-host'}} + vol_bdm = objects.BlockDeviceMapping(self.context, id=1, + instance_uuid=inst.uuid, + volume_id=uuids.volume_id, + source_type='volume', + destination_type='volume', + delete_on_termination=True, + connection_info=jsonutils.dumps( + conn_info), + attachment_id=None) + bdms = objects.BlockDeviceMappingList(objects=[vol_bdm]) + + @mock.patch.object(self.compute_api.volume_api, 'terminate_connection') + @mock.patch.object(self.compute_api.volume_api, 'detach') + @mock.patch.object(self.compute_api.volume_api, 'delete') + @mock.patch.object(self.context, 'elevated', return_value=self.context) + def do_test(self, mock_elevated, mock_delete, + mock_detach, mock_terminate): + self.compute_api._local_cleanup_bdm_volumes( + bdms, inst, self.context) + mock_terminate.assert_called_once_with( + self.context, uuids.volume_id, conn_info['connector']) + mock_detach.assert_called_once_with( + self.context, uuids.volume_id, inst.uuid) + mock_delete.assert_called_once_with(self.context, uuids.volume_id) + mock_destroy.assert_called_once_with() + + do_test(self) + def test_local_delete_without_info_cache(self): inst = self._create_instance_obj() @@ -2213,6 +2332,12 @@ self._test_swap_volume_for_precheck_with_exception( exception.InstanceInvalidState, instance_update={'vm_state': vm_states.BUILDING}) + self._test_swap_volume_for_precheck_with_exception( + exception.InstanceInvalidState, + instance_update={'vm_state': vm_states.STOPPED}) + self._test_swap_volume_for_precheck_with_exception( + exception.InstanceInvalidState, + instance_update={'vm_state': vm_states.SUSPENDED}) def test_swap_volume_with_another_server_volume(self): # Should fail if old volume's instance_uuid is not that of the instance @@ -2624,7 +2749,8 @@ instance) def _test_snapshot_volume_backed(self, quiesce_required, quiesce_fails, - vm_state=vm_states.ACTIVE): + vm_state=vm_states.ACTIVE, + snapshot_fails=False): fake_sys_meta = {'image_min_ram': '11', 'image_min_disk': '22', 'image_container_format': 'ami', @@ -2670,6 +2796,8 @@ return {'id': volume_id, 'display_description': ''} def fake_volume_create_snapshot(context, volume_id, name, description): + if snapshot_fails: + raise exception.OverQuota(overs="snapshots") return {'id': '%s-snapshot' % volume_id} def fake_quiesce_instance(context, instance): @@ -2719,8 +2847,13 @@ 'tag': None}) # All the db_only fields and the volume ones are removed - self.compute_api.snapshot_volume_backed( - self.context, instance, 'test-snapshot') + if snapshot_fails: + self.assertRaises(exception.OverQuota, + self.compute_api.snapshot_volume_backed, + self.context, instance, "test-snapshot") + else: + self.compute_api.snapshot_volume_backed( + self.context, instance, 'test-snapshot') self.assertEqual(quiesce_expected, quiesced[0]) self.assertEqual(quiesce_expected, quiesced[1]) @@ -2758,8 +2891,13 @@ quiesced = [False, False] # Check that the mappings from the image properties are not included - self.compute_api.snapshot_volume_backed( - self.context, instance, 'test-snapshot') + if snapshot_fails: + self.assertRaises(exception.OverQuota, + self.compute_api.snapshot_volume_backed, + self.context, instance, "test-snapshot") + else: + self.compute_api.snapshot_volume_backed( + self.context, instance, 'test-snapshot') self.assertEqual(quiesce_expected, quiesced[0]) self.assertEqual(quiesce_expected, quiesced[1]) @@ -2770,6 +2908,11 @@ def 
test_snapshot_volume_backed_with_quiesce(self): self._test_snapshot_volume_backed(True, False) + def test_snapshot_volume_backed_with_quiesce_create_snap_fails(self): + self._test_snapshot_volume_backed(quiesce_required=True, + quiesce_fails=False, + snapshot_fails=True) + def test_snapshot_volume_backed_with_quiesce_skipped(self): self._test_snapshot_volume_backed(False, True) @@ -3492,6 +3635,46 @@ @mock.patch.object(objects.Service, 'get_minimum_version', return_value=17) @mock.patch.object(cinder.API, 'get') + @mock.patch.object(cinder.API, 'reserve_volume') + def test_validate_bdm_returns_attachment_id(self, mock_reserve_volume, + mock_get, mock_get_min_ver, + mock_get_min_ver_all): + # Tests that bdm validation *always* returns an attachment_id even if + # it's None. + instance = self._create_instance_obj() + instance_type = self._create_flavor() + volume_id = 'e856840e-9f5b-4894-8bde-58c6e29ac1e8' + volume_info = {'status': 'available', + 'attach_status': 'detached', + 'id': volume_id, + 'multiattach': False} + mock_get.return_value = volume_info + + # NOTE(mnaser): We use the AnonFakeDbBlockDeviceDict to make sure that + # the attachment_id field does not get any defaults to + # properly test this function. + bdms = [objects.BlockDeviceMapping( + **fake_block_device.AnonFakeDbBlockDeviceDict( + { + 'boot_index': 0, + 'volume_id': volume_id, + 'source_type': 'volume', + 'destination_type': 'volume', + 'device_name': 'vda', + }))] + self.compute_api._validate_bdm(self.context, instance, instance_type, + bdms) + self.assertIsNone(bdms[0].attachment_id) + + mock_get.assert_called_once_with(self.context, volume_id) + mock_reserve_volume.assert_called_once_with( + self.context, volume_id) + + @mock.patch.object(objects.service, 'get_minimum_version_all_cells', + return_value=17) + @mock.patch.object(objects.Service, 'get_minimum_version', + return_value=17) + @mock.patch.object(cinder.API, 'get') @mock.patch.object(cinder.API, 'reserve_volume', side_effect=exception.InvalidInput(reason='error')) def test_validate_bdm_with_error_volume(self, mock_reserve_volume, @@ -5242,6 +5425,57 @@ network_id=None, port_id=None, requested_ip=None, tag='foo') + @mock.patch('nova.compute.api.API._delete_while_booting', + return_value=False) + @mock.patch('nova.compute.api.API._lookup_instance') + @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') + @mock.patch('nova.context.RequestContext.elevated') + @mock.patch.object(objects.Instance, 'save') + @mock.patch.object(compute_utils, 'notify_about_instance_usage') + @mock.patch.object(objects.BlockDeviceMapping, 'destroy') + @mock.patch.object(objects.Instance, 'destroy') + def _test_delete_volume_backed_instance( + self, vm_state, mock_instance_destroy, bdm_destroy, + notify_about_instance_usage, mock_save, mock_elevated, + bdm_get_by_instance_uuid, mock_lookup, _mock_del_booting): + volume_id = uuidutils.generate_uuid() + conn_info = {'connector': {'host': 'orig-host'}} + bdms = [objects.BlockDeviceMapping( + **fake_block_device.FakeDbBlockDeviceDict( + {'id': 42, 'volume_id': volume_id, + 'source_type': 'volume', 'destination_type': 'volume', + 'delete_on_termination': False, + 'connection_info': jsonutils.dumps(conn_info)}))] + + bdm_get_by_instance_uuid.return_value = bdms + mock_elevated.return_value = self.context + + params = {'host': None, 'vm_state': vm_state} + inst = self._create_instance_obj(params=params) + mock_lookup.return_value = None, inst + connector = conn_info['connector'] + + with 
mock.patch.object(self.compute_api.network_api, + 'deallocate_for_instance') as mock_deallocate, \ + mock.patch.object(self.compute_api.volume_api, + 'terminate_connection') as mock_terminate_conn, \ + mock.patch.object(self.compute_api.volume_api, + 'detach') as mock_detach: + self.compute_api.delete(self.context, inst) + + mock_deallocate.assert_called_once_with(self.context, inst) + mock_detach.assert_called_once_with(self.context, volume_id, + inst.uuid) + mock_terminate_conn.assert_called_once_with(self.context, + volume_id, connector) + bdm_destroy.assert_called_once_with() + + def test_delete_volume_backed_instance_in_error(self): + self._test_delete_volume_backed_instance(vm_states.ERROR) + + def test_delete_volume_backed_instance_in_shelved_offloaded(self): + self._test_delete_volume_backed_instance(vm_states.SHELVED_OFFLOADED) + class ComputeAPIAPICellUnitTestCase(_ComputeAPIUnitTestMixIn, test.NoDBTestCase): diff -Nru nova-16.1.0/nova/tests/unit/compute/test_compute_mgr.py nova-16.1.2/nova/tests/unit/compute/test_compute_mgr.py --- nova-16.1.0/nova/tests/unit/compute/test_compute_mgr.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/compute/test_compute_mgr.py 2018-04-25 09:22:33.000000000 +0000 @@ -272,7 +272,10 @@ self.assertFalse(mock_log.error.called) @mock.patch('nova.compute.utils.notify_about_instance_action') - def test_delete_instance_without_info_cache(self, mock_notify): + @mock.patch('nova.compute.manager.ComputeManager.' + '_detach_volume') + def test_delete_instance_without_info_cache(self, mock_detach, + mock_notify): instance = fake_instance.fake_instance_obj( self.context, uuid=uuids.instance, @@ -1952,8 +1955,9 @@ self.assertTrue(uuidutils.is_uuid_like(volume)) return {} - def _assert_swap_volume(self, old_connection_info, new_connection_info, - instance, mountpoint, resize_to): + def _assert_swap_volume(self, context, old_connection_info, + new_connection_info, instance, mountpoint, + resize_to): self.assertEqual(2, resize_to) @mock.patch.object(cinder.API, 'initialize_connection') @@ -2270,7 +2274,7 @@ instance, uuids.new_attachment_id) # Assert the expected calls. # The new connection_info has the new_volume_id as the serial. - new_cinfo = mock_driver_swap.call_args[0][1] + new_cinfo = mock_driver_swap.call_args[0][2] self.assertIn('serial', new_cinfo) self.assertEqual(uuids.new_volume_id, new_cinfo['serial']) get_bdm.assert_called_once_with( @@ -3533,7 +3537,9 @@ self.assertRaises(test.TestingException, do_test) set_error.assert_called_once_with(self.context, instance) - def test_cleanup_volumes(self): + @mock.patch('nova.compute.manager.ComputeManager.' + '_detach_volume') + def test_cleanup_volumes(self, mock_detach): instance = fake_instance.fake_instance_obj(self.context) bdm_do_not_delete_dict = fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 'fake-id1', 'source_type': 'image', @@ -3546,11 +3552,17 @@ with mock.patch.object(self.compute.volume_api, 'delete') as volume_delete: - self.compute._cleanup_volumes(self.context, instance.uuid, bdms) + self.compute._cleanup_volumes(self.context, instance, bdms) + calls = [mock.call(self.context, bdm, instance, + destroy_bdm=bdm.delete_on_termination) + for bdm in bdms] + self.assertEqual(calls, mock_detach.call_args_list) volume_delete.assert_called_once_with(self.context, bdms[1].volume_id) - def test_cleanup_volumes_exception_do_not_raise(self): + @mock.patch('nova.compute.manager.ComputeManager.' 
+ '_detach_volume') + def test_cleanup_volumes_exception_do_not_raise(self, mock_detach): instance = fake_instance.fake_instance_obj(self.context) bdm_dict1 = fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 'fake-id1', 'source_type': 'image', @@ -3564,12 +3576,17 @@ with mock.patch.object(self.compute.volume_api, 'delete', side_effect=[test.TestingException(), None]) as volume_delete: - self.compute._cleanup_volumes(self.context, instance.uuid, bdms, + self.compute._cleanup_volumes(self.context, instance, bdms, raise_exc=False) calls = [mock.call(self.context, bdm.volume_id) for bdm in bdms] self.assertEqual(calls, volume_delete.call_args_list) + calls = [mock.call(self.context, bdm, instance, + destroy_bdm=True) for bdm in bdms] + self.assertEqual(calls, mock_detach.call_args_list) - def test_cleanup_volumes_exception_raise(self): + @mock.patch('nova.compute.manager.ComputeManager.' + '_detach_volume') + def test_cleanup_volumes_exception_raise(self, mock_detach): instance = fake_instance.fake_instance_obj(self.context) bdm_dict1 = fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 'fake-id1', 'source_type': 'image', @@ -3584,10 +3601,31 @@ 'delete', side_effect=[test.TestingException(), None]) as volume_delete: self.assertRaises(test.TestingException, - self.compute._cleanup_volumes, self.context, instance.uuid, + self.compute._cleanup_volumes, self.context, instance, bdms) calls = [mock.call(self.context, bdm.volume_id) for bdm in bdms] self.assertEqual(calls, volume_delete.call_args_list) + calls = [mock.call(self.context, bdm, instance, + destroy_bdm=bdm.delete_on_termination) + for bdm in bdms] + self.assertEqual(calls, mock_detach.call_args_list) + + @mock.patch('nova.compute.manager.ComputeManager._detach_volume', + side_effect=exception.CinderConnectionFailed(reason='idk')) + def test_cleanup_volumes_detach_fails_raise_exc(self, mock_detach): + instance = fake_instance.fake_instance_obj(self.context) + bdms = block_device_obj.block_device_make_list( + self.context, + [fake_block_device.FakeDbBlockDeviceDict( + {'volume_id': uuids.volume_id, + 'source_type': 'volume', + 'destination_type': 'volume', + 'delete_on_termination': False})]) + self.assertRaises(exception.CinderConnectionFailed, + self.compute._cleanup_volumes, self.context, + instance, bdms) + mock_detach.assert_called_once_with( + self.context, bdms[0], instance, destroy_bdm=False) def test_stop_instance_task_state_none_power_state_shutdown(self): # Tests that stop_instance doesn't puke when the instance power_state @@ -4258,7 +4296,7 @@ mock_clean_net.assert_called_once_with(self.context, self.instance, self.requested_networks) mock_clean_vol.assert_called_once_with(self.context, - self.instance.uuid, self.block_device_mapping, raise_exc=False) + self.instance, self.block_device_mapping, raise_exc=False) mock_add.assert_called_once_with(self.context, self.instance, mock.ANY, mock.ANY) mock_nil.assert_called_once_with(self.instance) @@ -4278,6 +4316,7 @@ def test_rescheduled_exception(self, mock_hooks, mock_build_run, mock_build, mock_set, mock_nil, mock_save, mock_start, mock_finish): + self.flags(use_neutron=False) self._do_build_instance_update(mock_save, reschedule_update=True) mock_build_run.side_effect = exception.RescheduledException(reason='', instance_uuid=self.instance.uuid) @@ -4361,6 +4400,7 @@ mock_macs_for_instance, mock_event_finish, mock_event_start, mock_ins_save, mock_build_ins, mock_build_and_run): + self.flags(use_neutron=False) instance = fake_instance.fake_instance_obj(self.context, 
vm_state=vm_states.ACTIVE, system_metadata={'network_allocated': 'True'}, @@ -4397,6 +4437,53 @@ self.security_groups, self.block_device_mapping) @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') + @mock.patch.object(manager.ComputeManager, '_cleanup_allocated_networks') + @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') + @mock.patch.object(objects.Instance, 'save') + @mock.patch.object(objects.InstanceActionEvent, 'event_start') + @mock.patch.object(objects.InstanceActionEvent, + 'event_finish_with_failure') + @mock.patch.object(virt_driver.ComputeDriver, 'macs_for_instance') + def test_rescheduled_exception_with_network_allocated_with_neutron(self, + mock_macs_for_instance, mock_event_finish, mock_event_start, + mock_ins_save, mock_build_ins, mock_cleanup_network, + mock_build_and_run): + """Tests that we always cleanup allocated networks for the instance + when using neutron and before we reschedule off the failed host. + """ + instance = fake_instance.fake_instance_obj(self.context, + vm_state=vm_states.ACTIVE, + system_metadata={'network_allocated': 'True'}, + expected_attrs=['metadata', 'system_metadata', 'info_cache']) + mock_ins_save.return_value = instance + mock_macs_for_instance.return_value = [] + mock_build_and_run.side_effect = exception.RescheduledException( + reason='', instance_uuid=self.instance.uuid) + + self.compute._do_build_and_run_instance(self.context, instance, + self.image, request_spec={}, + filter_properties=self.filter_properties, + injected_files=self.injected_files, + admin_password=self.admin_pass, + requested_networks=self.requested_networks, + security_groups=self.security_groups, + block_device_mapping=self.block_device_mapping, node=self.node, + limits=self.limits) + + mock_build_and_run.assert_called_once_with(self.context, + instance, + self.image, self.injected_files, self.admin_pass, + self.requested_networks, self.security_groups, + self.block_device_mapping, self.node, self.limits, + self.filter_properties) + mock_cleanup_network.assert_called_once_with( + self.context, instance, self.requested_networks) + mock_build_ins.assert_called_once_with(self.context, + [instance], self.image, self.filter_properties, + self.admin_pass, self.injected_files, self.requested_networks, + self.security_groups, self.block_device_mapping) + + @mock.patch.object(manager.ComputeManager, '_build_and_run_instance') @mock.patch.object(conductor_api.ComputeTaskAPI, 'build_instances') @mock.patch.object(manager.ComputeManager, '_cleanup_allocated_networks') @mock.patch.object(objects.Instance, 'save') @@ -4491,7 +4578,7 @@ mock_clean_net.assert_called_once_with(self.context, self.instance, self.requested_networks) mock_clean_vol.assert_called_once_with(self.context, - self.instance.uuid, self.block_device_mapping, + self.instance, self.block_device_mapping, raise_exc=False) mock_add.assert_called_once_with(self.context, self.instance, mock.ANY, mock.ANY, fault_message=mock.ANY) @@ -4515,6 +4602,7 @@ mock_build_run, mock_build, mock_deallocate, mock_nil, mock_clean_net, mock_save, mock_start, mock_finish): + self.flags(use_neutron=False) self._do_build_instance_update(mock_save, reschedule_update=True) mock_build_run.side_effect = exception.RescheduledException(reason='', instance_uuid=self.instance.uuid) @@ -4636,7 +4724,7 @@ self._assert_build_instance_update(mock_save) if cleanup_volumes: mock_clean_vol.assert_called_once_with(self.context, - self.instance.uuid, self.block_device_mapping, + self.instance, self.block_device_mapping, 
raise_exc=False) if nil_out_host_and_node: mock_nil.assert_called_once_with(self.instance) @@ -4955,6 +5043,7 @@ def test_reschedule_on_resources_unavailable(self, mock_claim, mock_build, mock_nil, mock_save, mock_start, mock_finish, mock_notify): + self.flags(use_neutron=False) reason = 'resource unavailable' exc = exception.ComputeResourcesUnavailable(reason=reason) mock_claim.side_effect = exc @@ -5540,7 +5629,8 @@ vm_state=vm_states.ACTIVE, expected_attrs=['metadata', 'system_metadata', 'info_cache']) self.migration = objects.Migration(context=self.context.elevated(), - new_instance_type_id=7) + new_instance_type_id=7, + uuid=mock.sentinel.uuid) self.migration.status = 'migrating' self.useFixture(fixtures.SpawnIsSynchronousFixture()) self.useFixture(fixtures.EventReporterStub()) @@ -5554,9 +5644,11 @@ mock.patch.object(self.instance, 'save'), mock.patch.object(self.migration, 'save'), mock.patch.object(self.migration, 'obj_as_admin', - return_value=mock.MagicMock()) + return_value=mock.MagicMock()), + mock.patch('nova.compute.resource_tracker.ResourceTracker.' + 'delete_allocation_for_failed_resize') ) as (meth, fault_create, instance_update, instance_save, - migration_save, migration_obj_as_admin): + migration_save, migration_obj_as_admin, delete_alloc): fault_create.return_value = ( test_instance_fault.fake_faults['fake-uuid'][0]) self.assertRaises( @@ -5568,6 +5660,8 @@ self.assertEqual("error", self.migration.status) migration_save.assert_called_once_with() migration_obj_as_admin.assert_called_once_with() + delete_alloc.assert_called_once_with( + self.instance, 'fake-mini', self.instance.new_flavor) def test_resize_instance_failure(self): self.migration.dest_host = None @@ -5592,10 +5686,12 @@ return_value=None), mock.patch.object(objects.Flavor, 'get_by_id', - return_value=None) + return_value=None), + mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
+ 'delete_allocation_for_failed_resize') ) as (meth, fault_create, instance_update, migration_save, migration_obj_as_admin, nw_info, save_inst, - notify, vol_block_info, bdm, flavor): + notify, vol_block_info, bdm, flavor, delete_alloc): fault_create.return_value = ( test_instance_fault.fake_faults['fake-uuid'][0]) self.assertRaises( @@ -5608,6 +5704,8 @@ migration_save.mock_calls) self.assertEqual([mock.call(), mock.call()], migration_obj_as_admin.mock_calls) + delete_alloc.assert_called_once_with( + self.instance, 'fake-mini', 'type') def _test_revert_resize_instance_destroy_disks(self, is_shared=False): diff -Nru nova-16.1.0/nova/tests/unit/compute/test_compute.py nova-16.1.2/nova/tests/unit/compute/test_compute.py --- nova-16.1.0/nova/tests/unit/compute/test_compute.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/compute/test_compute.py 2018-04-25 09:22:33.000000000 +0000 @@ -3526,6 +3526,74 @@ rotation=1) self.assertEqual(2, mock_delete.call_count) + @mock.patch('nova.image.api.API.get_all') + def test_rotate_backups_with_image_delete_failed(self, + mock_get_all_images): + instance = self._create_fake_instance_obj() + instance_uuid = instance['uuid'] + fake_images = [{ + 'id': uuids.image_id_1, + 'created_at': timeutils.parse_strtime('2017-01-04T00:00:00.00'), + 'name': 'fake_name_1', + 'status': 'active', + 'properties': {'kernel_id': uuids.kernel_id_1, + 'ramdisk_id': uuids.ramdisk_id_1, + 'image_type': 'backup', + 'backup_type': 'daily', + 'instance_uuid': instance_uuid}, + }, + { + 'id': uuids.image_id_2, + 'created_at': timeutils.parse_strtime('2017-01-03T00:00:00.00'), + 'name': 'fake_name_2', + 'status': 'active', + 'properties': {'kernel_id': uuids.kernel_id_2, + 'ramdisk_id': uuids.ramdisk_id_2, + 'image_type': 'backup', + 'backup_type': 'daily', + 'instance_uuid': instance_uuid}, + }, + { + 'id': uuids.image_id_3, + 'created_at': timeutils.parse_strtime('2017-01-02T00:00:00.00'), + 'name': 'fake_name_3', + 'status': 'active', + 'properties': {'kernel_id': uuids.kernel_id_3, + 'ramdisk_id': uuids.ramdisk_id_3, + 'image_type': 'backup', + 'backup_type': 'daily', + 'instance_uuid': instance_uuid}, + }, + { + 'id': uuids.image_id_4, + 'created_at': timeutils.parse_strtime('2017-01-01T00:00:00.00'), + 'name': 'fake_name_4', + 'status': 'active', + 'properties': {'kernel_id': uuids.kernel_id_4, + 'ramdisk_id': uuids.ramdisk_id_4, + 'image_type': 'backup', + 'backup_type': 'daily', + 'instance_uuid': instance_uuid}, + }] + + mock_get_all_images.return_value = fake_images + + def _check_image_id(context, image_id): + self.assertIn(image_id, [uuids.image_id_2, uuids.image_id_3, + uuids.image_id_4]) + if image_id == uuids.image_id_3: + raise Exception('fake %s delete exception' % image_id) + if image_id == uuids.image_id_4: + raise exception.ImageDeleteConflict(reason='image is in use') + + with mock.patch.object(nova.image.api.API, 'delete', + side_effect=_check_image_id) as mock_delete: + # Fake images 4,3,2 should be rotated in sequence + self.compute._rotate_backups(self.context, instance=instance, + backup_type='daily', + rotation=1) + self.assertEqual(3, mock_delete.call_count) + def test_console_output(self): # Make sure we can get console output from instance. instance = self._create_fake_instance_obj() @@ -4402,8 +4470,15 @@ func = getattr(self.compute, operation) - self.assertRaises(test.TestingException, - func, self.context, instance=instance, **kwargs) + with mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
+ 'delete_allocation_for_failed_resize') as delete_alloc: + self.assertRaises(test.TestingException, + func, self.context, instance=instance, **kwargs) + if operation == 'resize_instance': + delete_alloc.assert_called_once_with( + instance, 'fakenode1', kwargs['instance_type']) + else: + delete_alloc.assert_not_called() # self.context.elevated() is called in tearDown() self.stub_out('nova.context.RequestContext.elevated', orig_elevated) self.stub_out('nova.compute.manager.ComputeManager.' @@ -4419,6 +4494,7 @@ # ensure that task_state is reverted after a failed operation. migration = objects.Migration(context=self.context.elevated()) migration.instance_uuid = 'b48316c5-71e8-45e4-9884-6c78055b9b13' + migration.uuid = mock.sentinel.uuid migration.new_instance_type_id = '1' instance_type = objects.Flavor() @@ -5108,7 +5184,9 @@ clean_shutdown=True) self.compute.terminate_instance(self.context, instance, [], []) - def test_resize_instance_driver_error(self): + @mock.patch('nova.compute.resource_tracker.ResourceTracker.' + 'delete_allocation_for_failed_resize') + def test_resize_instance_driver_error(self, delete_alloc): # Ensure instance status set to Error on resize error. def throw_up(*args, **kwargs): @@ -5148,7 +5226,9 @@ self.assertEqual(instance.vm_state, vm_states.ERROR) self.compute.terminate_instance(self.context, instance, [], []) - def test_resize_instance_driver_rollback(self): + @mock.patch('nova.compute.resource_tracker.ResourceTracker.' + 'delete_allocation_for_failed_resize') + def test_resize_instance_driver_rollback(self, delete_alloc): # Ensure instance status set to Running after rollback. def throw_up(*args, **kwargs): @@ -5771,7 +5851,9 @@ flavor_type = flavors.get_flavor_by_flavor_id(1) self.assertEqual(flavor_type['name'], 'm1.tiny') - def test_resize_instance_handles_migration_error(self): + @mock.patch('nova.compute.resource_tracker.ResourceTracker.' + 'delete_allocation_for_failed_resize') + def test_resize_instance_handles_migration_error(self, delete_alloc): # Ensure vm_state is ERROR when error occurs. 
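The resize-failure tests in this area all gain a mocked delete_allocation_for_failed_resize: a failed resize is now expected to both free the allocation claimed for the resize and put the instance into ERROR. A minimal standalone sketch of that contract, using hypothetical names rather than the ComputeManager internals:

from unittest import mock


def resize_with_rollback(instance, migration, do_resize, delete_allocation):
    # A failed resize must free the allocation held for the migration and
    # flip the instance to ERROR before the error propagates to the caller.
    try:
        do_resize(instance, migration)
    except Exception:
        delete_allocation(instance, migration)
        instance['vm_state'] = 'error'
        raise


instance = {'vm_state': 'active'}
migration = object()
delete_allocation = mock.Mock()
failing_resize = mock.Mock(side_effect=RuntimeError('resize blew up'))
try:
    resize_with_rollback(instance, migration, failing_resize, delete_allocation)
except RuntimeError:
    pass
assert instance['vm_state'] == 'error'
delete_allocation.assert_called_once_with(instance, migration)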
def raise_migration_failure(*args): raise test.TestingException() @@ -6639,7 +6721,7 @@ mock_shutdown.assert_has_calls([ mock.call(ctxt, inst1, bdms, notify=False), mock.call(ctxt, inst2, bdms, notify=False)]) - mock_cleanup.assert_called_once_with(ctxt, inst2['uuid'], bdms) + mock_cleanup.assert_called_once_with(ctxt, inst2, bdms) mock_get_uuid.assert_has_calls([ mock.call(ctxt, inst1.uuid, use_slave=True), mock.call(ctxt, inst2.uuid, use_slave=True)]) diff -Nru nova-16.1.0/nova/tests/unit/compute/test_resource_tracker.py nova-16.1.2/nova/tests/unit/compute/test_resource_tracker.py --- nova-16.1.0/nova/tests/unit/compute/test_resource_tracker.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/compute/test_resource_tracker.py 2018-04-25 09:22:33.000000000 +0000 @@ -436,6 +436,7 @@ vd.get_inventory.side_effect = NotImplementedError vd.get_host_ip_addr.return_value = _NODENAME vd.estimate_instance_overhead.side_effect = estimate_overhead + vd.rebalances_nodes = False with test.nested( mock.patch('nova.scheduler.client.SchedulerClient', @@ -1012,6 +1013,39 @@ self.assertFalse(create_mock.called) self.assertTrue(update_mock.called) + @mock.patch('nova.objects.ComputeNodeList.get_by_hypervisor') + @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', + return_value=objects.PciDeviceList()) + @mock.patch('nova.objects.ComputeNode.create') + @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') + @mock.patch('nova.compute.resource_tracker.ResourceTracker.' + '_update') + def test_compute_node_rebalanced(self, update_mock, get_mock, create_mock, + pci_mock, get_by_hypervisor_mock): + self._setup_rt() + self.driver_mock.rebalances_nodes = True + cn = copy.deepcopy(_COMPUTE_NODE_FIXTURES[0]) + cn.host = "old-host" + + def fake_get_all(_ctx, nodename): + return [cn] + + get_mock.side_effect = exc.NotFound + get_by_hypervisor_mock.side_effect = fake_get_all + resources = copy.deepcopy(_VIRT_DRIVER_AVAIL_RESOURCES) + + self.rt._init_compute_node(mock.sentinel.ctx, resources) + + get_mock.assert_called_once_with(mock.sentinel.ctx, _HOSTNAME, + _NODENAME) + get_by_hypervisor_mock.assert_called_once_with(mock.sentinel.ctx, + _NODENAME) + create_mock.assert_not_called() + update_mock.assert_called_once_with(mock.sentinel.ctx, cn) + + self.assertEqual(_HOSTNAME, self.rt.compute_nodes[_NODENAME].host) + + @mock.patch('nova.objects.ComputeNodeList.get_by_hypervisor') @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', return_value=objects.PciDeviceList(objects=[])) @mock.patch('nova.objects.ComputeNode.create') @@ -1019,10 +1053,55 @@ @mock.patch('nova.compute.resource_tracker.ResourceTracker.' '_update') def test_compute_node_created_on_empty(self, update_mock, get_mock, - create_mock, pci_tracker_mock): + create_mock, pci_tracker_mock, + get_by_hypervisor_mock): + get_by_hypervisor_mock.return_value = [] + self._test_compute_node_created(update_mock, get_mock, create_mock, + pci_tracker_mock, + get_by_hypervisor_mock) + + @mock.patch('nova.objects.ComputeNodeList.get_by_hypervisor') + @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', + return_value=objects.PciDeviceList(objects=[])) + @mock.patch('nova.objects.ComputeNode.create') + @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') + @mock.patch('nova.compute.resource_tracker.ResourceTracker.' 
+ '_update') + def test_compute_node_created_on_empty_rebalance(self, update_mock, + get_mock, + create_mock, + pci_tracker_mock, + get_by_hypervisor_mock): + get_by_hypervisor_mock.return_value = [] + self._test_compute_node_created(update_mock, get_mock, create_mock, + pci_tracker_mock, + get_by_hypervisor_mock, + rebalances_nodes=True) + + @mock.patch('nova.objects.ComputeNodeList.get_by_hypervisor') + @mock.patch('nova.objects.PciDeviceList.get_by_compute_node', + return_value=objects.PciDeviceList(objects=[])) + @mock.patch('nova.objects.ComputeNode.create') + @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') + @mock.patch('nova.compute.resource_tracker.ResourceTracker.' + '_update') + def test_compute_node_created_too_many(self, update_mock, get_mock, + create_mock, pci_tracker_mock, + get_by_hypervisor_mock): + get_by_hypervisor_mock.return_value = ["fake_node_1", "fake_node_2"] + self._test_compute_node_created(update_mock, get_mock, create_mock, + pci_tracker_mock, + get_by_hypervisor_mock, + rebalances_nodes=True) + + def _test_compute_node_created(self, update_mock, get_mock, + create_mock, pci_tracker_mock, + get_by_hypervisor_mock, + rebalances_nodes=False): self.flags(cpu_allocation_ratio=1.0, ram_allocation_ratio=1.0, disk_allocation_ratio=1.0) self._setup_rt() + self.driver_mock.rebalances_nodes = rebalances_nodes get_mock.side_effect = exc.NotFound @@ -1088,6 +1167,11 @@ cn = self.rt.compute_nodes[_NODENAME] get_mock.assert_called_once_with(mock.sentinel.ctx, _HOSTNAME, _NODENAME) + if rebalances_nodes: + get_by_hypervisor_mock.assert_called_once_with( + mock.sentinel.ctx, _NODENAME) + else: + get_by_hypervisor_mock.assert_not_called() create_mock.assert_called_once_with() self.assertTrue(obj_base.obj_equal_prims(expected_compute, cn)) pci_tracker_mock.assert_called_once_with(mock.sentinel.ctx, diff -Nru nova-16.1.0/nova/tests/unit/compute/test_shelve.py nova-16.1.2/nova/tests/unit/compute/test_shelve.py --- nova-16.1.0/nova/tests/unit/compute/test_shelve.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/compute/test_shelve.py 2018-04-25 09:22:33.000000000 +0000 @@ -430,6 +430,91 @@ block_device_info='fake_bdm') mock_get_power_state.assert_called_once_with(self.context, instance) + @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid') + @mock.patch('nova.compute.utils.notify_about_instance_action') + @mock.patch.object(nova.compute.resource_tracker.ResourceTracker, + 'instance_claim') + @mock.patch.object(neutron_api.API, 'setup_instance_network_on_host') + @mock.patch.object(nova.virt.fake.SmallFakeDriver, 'spawn', + side_effect=test.TestingException('oops!')) + @mock.patch.object(nova.compute.manager.ComputeManager, + '_prep_block_device', return_value='fake_bdm') + @mock.patch.object(nova.compute.manager.ComputeManager, + '_notify_about_instance_usage') + @mock.patch('nova.utils.get_image_from_system_metadata') + @mock.patch.object(nova.compute.manager.ComputeManager, + '_terminate_volume_connections') + def test_unshelve_spawn_fails_cleanup_volume_connections( + self, mock_terminate_volume_connections, mock_image_meta, + mock_notify_instance_usage, mock_prep_block_device, mock_spawn, + mock_setup_network, mock_instance_claim, + mock_notify_instance_action, mock_get_bdms): + """Tests error handling when a instance fails to unshelve and makes + sure that volume connections are cleaned up from the host + and that the host/node values are unset on the instance. 
+ """ + mock_bdms = mock.Mock() + mock_get_bdms.return_value = mock_bdms + instance = self._create_fake_instance_obj() + node = test_compute.NODENAME + limits = {} + filter_properties = {'limits': limits} + instance.task_state = task_states.UNSHELVING + instance.save() + image_meta = {'properties': {'base_image_ref': uuids.image_id}} + mock_image_meta.return_value = image_meta + + tracking = {'last_state': instance.task_state} + + def fake_claim(context, instance, node, limits): + instance.host = self.compute.host + instance.node = node + requests = objects.InstancePCIRequests(requests=[]) + return claims.Claim(context, instance, node, + self.rt, _fake_resources(), + requests, limits=limits) + mock_instance_claim.side_effect = fake_claim + + def check_save(expected_task_state=None): + if tracking['last_state'] == task_states.UNSHELVING: + # This is before we've failed. + self.assertEqual(task_states.SPAWNING, instance.task_state) + tracking['last_state'] = instance.task_state + elif tracking['last_state'] == task_states.SPAWNING: + # This is after we've failed. + self.assertIsNone(instance.host) + self.assertIsNone(instance.node) + self.assertIsNone(instance.task_state) + tracking['last_state'] = instance.task_state + else: + self.fail('Unexpected save!') + + with mock.patch.object(instance, 'save') as mock_save: + mock_save.side_effect = check_save + self.assertRaises(test.TestingException, + self.compute.unshelve_instance, + self.context, instance, image=None, + filter_properties=filter_properties, node=node) + + mock_notify_instance_action.assert_called_once_with( + self.context, instance, 'fake-mini', action='unshelve', + phase='start') + mock_notify_instance_usage.assert_called_once_with( + self.context, instance, 'unshelve.start') + mock_prep_block_device.assert_called_once_with( + self.context, instance, mock_bdms) + mock_setup_network.assert_called_once_with(self.context, instance, + self.compute.host) + mock_instance_claim.assert_called_once_with(self.context, instance, + test_compute.NODENAME, + limits) + mock_spawn.assert_called_once_with( + self.context, instance, test.MatchType(objects.ImageMeta), + injected_files=[], admin_password=None, + network_info=[], block_device_info='fake_bdm') + mock_terminate_volume_connections.assert_called_once_with( + self.context, instance, mock_bdms) + @mock.patch.object(objects.InstanceList, 'get_by_filters') def test_shelved_poll_none_offloaded(self, mock_get_by_filters): # Test instances are not offloaded when shelved_offload_time is -1 diff -Nru nova-16.1.0/nova/tests/unit/conductor/test_conductor.py nova-16.1.2/nova/tests/unit/conductor/test_conductor.py --- nova-16.1.0/nova/tests/unit/conductor/test_conductor.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/conductor/test_conductor.py 2018-04-25 09:22:33.000000000 +0000 @@ -1802,12 +1802,15 @@ select_dest, build_and_run): def _fake_bury(ctxt, request_spec, exc, - build_requests=None, instances=None): + build_requests=None, instances=None, + block_device_mapping=None): self.assertIn('not mapped to any cell', str(exc)) self.assertEqual(1, len(build_requests)) self.assertEqual(1, len(instances)) self.assertEqual(build_requests[0].instance_uuid, instances[0].uuid) + self.assertEqual(self.params['block_device_mapping'], + block_device_mapping) bury.side_effect = _fake_bury select_dest.return_value = [{'host': 'missing-host', @@ -1954,6 +1957,27 @@ self.assertEqual(expected, inst_states) + @mock.patch.object(objects.CellMapping, 'get_by_uuid') + 
@mock.patch.object(conductor_manager.ComputeTaskManager, + '_create_block_device_mapping') + def test_bury_in_cell0_with_block_device_mapping(self, mock_create_bdm, + mock_get_cell): + mock_get_cell.return_value = self.cell_mappings['cell0'] + + inst_br = fake_build_request.fake_req_obj(self.ctxt) + del inst_br.instance.id + inst_br.create() + inst = inst_br.get_new_instance(self.ctxt) + + self.conductor._bury_in_cell0( + self.ctxt, self.params['request_specs'][0], Exception('Foo'), + build_requests=[inst_br], instances=[inst], + block_device_mapping=self.params['block_device_mapping']) + + mock_create_bdm.assert_called_once_with( + self.cell_mappings['cell0'], inst.flavor, inst.uuid, + self.params['block_device_mapping']) + def test_reset(self): with mock.patch('nova.compute.rpcapi.ComputeAPI') as mock_rpc: old_rpcapi = self.conductor_manager.compute_rpcapi diff -Nru nova-16.1.0/nova/tests/unit/image/test_glance.py nova-16.1.2/nova/tests/unit/image/test_glance.py --- nova-16.1.0/nova/tests/unit/image/test_glance.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/image/test_glance.py 2018-04-25 09:22:33.000000000 +0000 @@ -1595,6 +1595,16 @@ self.assertRaises(exception.ImageNotFound, service.delete, ctx, mock.sentinel.image_id) + def test_delete_client_conflict_failure_v2(self): + client = mock.MagicMock() + fake_details = 'Image %s is in use' % mock.sentinel.image_id + client.call.side_effect = glanceclient.exc.HTTPConflict( + details=fake_details) + ctx = mock.sentinel.ctx + service = glance.GlanceImageServiceV2(client) + self.assertRaises(exception.ImageDeleteConflict, service.delete, ctx, + mock.sentinel.image_id) + class TestGlanceApiServers(test.NoDBTestCase): diff -Nru nova-16.1.0/nova/tests/unit/network/test_network_info.py nova-16.1.2/nova/tests/unit/network/test_network_info.py --- nova-16.1.0/nova/tests/unit/network/test_network_info.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/network/test_network_info.py 2018-04-25 09:22:25.000000000 +0000 @@ -423,6 +423,11 @@ ] * 2 self.assertEqual(fixed_ips, ips) + def test_vif_get_fixed_ips_network_is_none(self): + vif = model.VIF() + fixed_ips = vif.fixed_ips() + self.assertEqual([], fixed_ips) + def test_vif_get_floating_ips(self): vif = fake_network_cache_model.new_vif() vif['network']['subnets'][0]['ips'][0].add_floating_ip('192.168.1.1') diff -Nru nova-16.1.0/nova/tests/unit/network/test_neutronv2.py nova-16.1.2/nova/tests/unit/network/test_neutronv2.py --- nova-16.1.0/nova/tests/unit/network/test_neutronv2.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/network/test_neutronv2.py 2018-04-25 09:22:33.000000000 +0000 @@ -184,9 +184,10 @@ auth_token='token', is_admin=False) client = neutronapi.get_client(my_context) - self.assertRaises( + exc = self.assertRaises( exception.Forbidden, client.create_port) + self.assertIsInstance(exc.format_message(), six.text_type) def test_withtoken_context_is_admin(self): self.flags(url='http://anyhost/', group='neutron') diff -Nru nova-16.1.0/nova/tests/unit/objects/test_host_mapping.py nova-16.1.2/nova/tests/unit/objects/test_host_mapping.py --- nova-16.1.0/nova/tests/unit/objects/test_host_mapping.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/objects/test_host_mapping.py 2018-04-25 09:22:33.000000000 +0000 @@ -211,3 +211,73 @@ self.assertTrue(mock_hm_create.called) self.assertEqual(['d'], [hm.host for hm in hms]) + + @mock.patch('nova.objects.CellMappingList.get_all') + 
@mock.patch('nova.objects.HostMapping.get_by_host') + @mock.patch('nova.objects.HostMapping.create') + @mock.patch('nova.objects.ServiceList.get_by_binary') + def test_discover_services(self, mock_srv, mock_hm_create, + mock_hm_get, mock_cm): + mock_cm.return_value = [ + objects.CellMapping(uuid=uuids.cell1), + objects.CellMapping(uuid=uuids.cell2), + ] + mock_srv.side_effect = [ + [objects.Service(host='host1'), + objects.Service(host='host2')], + [objects.Service(host='host3')], + ] + + def fake_get_host_mapping(ctxt, host): + if host == 'host2': + return + else: + raise exception.HostMappingNotFound(name=host) + + mock_hm_get.side_effect = fake_get_host_mapping + + ctxt = context.get_admin_context() + mappings = host_mapping.discover_hosts(ctxt, by_service=True) + self.assertEqual(2, len(mappings)) + self.assertEqual(['host1', 'host3'], + sorted([m.host for m in mappings])) + + @mock.patch('nova.objects.CellMapping.get_by_uuid') + @mock.patch('nova.objects.HostMapping.get_by_host') + @mock.patch('nova.objects.HostMapping.create') + @mock.patch('nova.objects.ServiceList.get_by_binary') + def test_discover_services_one_cell(self, mock_srv, mock_hm_create, + mock_hm_get, mock_cm): + mock_cm.return_value = objects.CellMapping(uuid=uuids.cell1) + mock_srv.return_value = [ + objects.Service(host='host1'), + objects.Service(host='host2'), + ] + + def fake_get_host_mapping(ctxt, host): + if host == 'host2': + return + else: + raise exception.HostMappingNotFound(name=host) + + mock_hm_get.side_effect = fake_get_host_mapping + + lines = [] + + def fake_status(msg): + lines.append(msg) + + ctxt = context.get_admin_context() + mappings = host_mapping.discover_hosts(ctxt, cell_uuid=uuids.cell1, + status_fn=fake_status, + by_service=True) + self.assertEqual(1, len(mappings)) + self.assertEqual(['host1'], + sorted([m.host for m in mappings])) + + expected = """\ +Getting computes from cell: %(cell)s +Creating host mapping for service host1 +Found 1 unmapped computes in cell: %(cell)s""" % {'cell': uuids.cell1} + + self.assertEqual(expected, '\n'.join(lines)) diff -Nru nova-16.1.0/nova/tests/unit/objects/test_instance.py nova-16.1.2/nova/tests/unit/objects/test_instance.py --- nova-16.1.0/nova/tests/unit/objects/test_instance.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/objects/test_instance.py 2018-04-25 09:22:33.000000000 +0000 @@ -222,6 +222,22 @@ deleted=True) self.assertEqual(0, len(instance.tags)) + def test_lazy_load_generic_on_deleted_instance(self): + # For generic fields, we try to load the deleted record from the + # database. 
+ instance = objects.Instance(self.context, uuid=uuids.instance, + user_id=self.context.user_id, + project_id=self.context.project_id) + instance.create() + instance.destroy() + # Re-create our local object to make sure it doesn't have sysmeta + # filled in by create() + instance = objects.Instance(self.context, uuid=uuids.instance, + user_id=self.context.user_id, + project_id=self.context.project_id) + self.assertNotIn('system_metadata', instance) + self.assertEqual(0, len(instance.system_metadata)) + def test_lazy_load_tags(self): instance = objects.Instance(self.context, uuid=uuids.instance, user_id=self.context.user_id, diff -Nru nova-16.1.0/nova/tests/unit/objects/test_request_spec.py nova-16.1.2/nova/tests/unit/objects/test_request_spec.py --- nova-16.1.0/nova/tests/unit/objects/test_request_spec.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/objects/test_request_spec.py 2018-04-25 09:22:33.000000000 +0000 @@ -535,13 +535,14 @@ # object fields for field in ['image', 'numa_topology', 'pci_requests', 'flavor', - 'retry', 'limits']: + 'limits']: self.assertEqual( getattr(req_obj, field).obj_to_primitive(), getattr(serialized_obj, field).obj_to_primitive()) self.assertIsNone(serialized_obj.instance_group.members) self.assertIsNone(serialized_obj.instance_group.hosts) + self.assertIsNone(serialized_obj.retry) def test_create(self): req_obj = fake_request_spec.fake_spec_obj(remove_id=True) diff -Nru nova-16.1.0/nova/tests/unit/objects/test_resource_provider.py nova-16.1.2/nova/tests/unit/objects/test_resource_provider.py --- nova-16.1.0/nova/tests/unit/objects/test_resource_provider.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/objects/test_resource_provider.py 2018-04-25 09:22:33.000000000 +0000 @@ -513,9 +513,13 @@ consumer_id=uuids.fake_instance, used=8) alloc_list = objects.AllocationList(self.context, objects=[obj]) - self.assertNotIn("id", obj) alloc_list.create_all() - self.assertIn("id", obj) + + rp_al = resource_provider.AllocationList + saved_allocations = rp_al.get_all_by_resource_provider_uuid( + self.context, rp.uuid) + self.assertEqual(1, len(saved_allocations)) + self.assertEqual(obj.used, saved_allocations[0].used) def test_create_with_id_fails(self): rp = objects.ResourceProvider(context=self.context, diff -Nru nova-16.1.0/nova/tests/unit/scheduler/client/test_report.py nova-16.1.2/nova/tests/unit/scheduler/client/test_report.py --- nova-16.1.0/nova/tests/unit/scheduler/client/test_report.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/scheduler/client/test_report.py 2018-04-25 09:22:33.000000000 +0000 @@ -1019,16 +1019,19 @@ '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_resource_provider') - def test_ensure_resource_provider_create_none(self, get_rp_mock, + def test_ensure_resource_provider_create_fail(self, get_rp_mock, get_agg_mock, create_rp_mock): # No resource provider exists in the client's cache, and - # _create_provider returns None, indicating there was an error with the + # _create_provider raises, indicating there was an error with the # create call. Ensure we don't populate the resource provider cache # with a None value. 
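This test now expects _ensure_resource_provider to let a creation failure escape instead of quietly caching None. A minimal sketch of that fail-fast caching pattern, with illustrative names standing in for the SchedulerReportClient machinery:

from unittest import mock


class ProviderCreationFailed(Exception):
    """Stand-in for the real ResourceProviderCreationFailed exception."""


class ProviderCache:
    def __init__(self, create):
        self._create = create
        self._providers = {}

    def ensure(self, uuid):
        # Fail fast: a create error propagates and nothing (in particular
        # no None placeholder) is left behind in the cache.
        if uuid not in self._providers:
            self._providers[uuid] = self._create(uuid)
        return self._providers[uuid]


failing_create = mock.Mock(side_effect=ProviderCreationFailed('no placement'))
cache = ProviderCache(failing_create)
try:
    cache.ensure('rp-uuid')
except ProviderCreationFailed:
    pass
assert 'rp-uuid' not in cache._providers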
get_rp_mock.return_value = None - create_rp_mock.return_value = None + create_rp_mock.side_effect = exception.ResourceProviderCreationFailed( + name=uuids.compute_node) - self.client._ensure_resource_provider(uuids.compute_node) + self.assertRaises( + exception.ResourceProviderCreationFailed, + self.client._ensure_resource_provider, uuids.compute_node) get_rp_mock.assert_called_once_with(uuids.compute_node) create_rp_mock.assert_called_once_with(uuids.compute_node, @@ -1165,7 +1168,9 @@ 'openstack-request-id': uuids.request_id} uuid = uuids.compute_node - result = self.client._get_resource_provider(uuid) + self.assertRaises( + exception.ResourceProviderRetrievalFailed, + self.client._get_resource_provider, uuid) expected_url = '/resource_providers/' + uuid self.ks_sess_mock.get.assert_called_once_with(expected_url, @@ -1177,7 +1182,6 @@ self.assertTrue(logging_mock.called) self.assertEqual(uuids.request_id, logging_mock.call_args[0][1]['placement_req_id']) - self.assertIsNone(result) def test_create_resource_provider(self): # Ensure _create_resource_provider() returns a dict of resource @@ -1219,10 +1223,10 @@ # record. uuid = uuids.compute_node name = 'computehost' - resp_mock = mock.Mock(status_code=409) - self.ks_sess_mock.post.return_value = resp_mock - self.ks_sess_mock.post.return_value.headers = { - 'openstack-request-id': uuids.request_id} + self.ks_sess_mock.post.return_value = mock.Mock( + status_code=409, + headers={'openstack-request-id': uuids.request_id}, + text='not a name conflict') get_rp_mock.return_value = mock.sentinel.get_rp @@ -1244,6 +1248,18 @@ self.assertEqual(uuids.request_id, logging_mock.call_args[0][1]['placement_req_id']) + def test_create_resource_provider_name_conflict(self): + # When the API call to create the resource provider fails 409 with a + # name conflict, we raise an exception. 
+ self.ks_sess_mock.post.return_value = mock.Mock( + status_code=409, + text='Conflicting resource provider name: foo already ' + 'exists.') + + self.assertRaises( + exception.ResourceProviderCreationFailed, + self.client._create_resource_provider, uuids.compute_node, 'foo') + @mock.patch.object(report.LOG, 'error') def test_create_resource_provider_error(self, logging_mock): # Ensure _create_resource_provider() sets the error flag when trying to @@ -1256,7 +1272,9 @@ self.ks_sess_mock.post.return_value.headers = { 'x-openstack-request-id': uuids.request_id} - result = self.client._create_resource_provider(uuid, name) + self.assertRaises( + exception.ResourceProviderCreationFailed, + self.client._create_resource_provider, uuid, name) expected_payload = { 'uuid': uuid, @@ -1274,7 +1292,6 @@ self.assertTrue(logging_mock.called) self.assertEqual(uuids.request_id, logging_mock.call_args[0][1]['placement_req_id']) - self.assertFalse(result) class TestAggregates(SchedulerReportClientTestCase): diff -Nru nova-16.1.0/nova/tests/unit/test_nova_manage.py nova-16.1.2/nova/tests/unit/test_nova_manage.py --- nova-16.1.0/nova/tests/unit/test_nova_manage.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/test_nova_manage.py 2018-04-25 09:22:33.000000000 +0000 @@ -1434,6 +1434,15 @@ # Check the return when strict=False self.assertIsNone(self.commands.discover_hosts()) + @mock.patch('nova.objects.host_mapping.discover_hosts') + def test_discover_hosts_by_service(self, mock_discover_hosts): + mock_discover_hosts.return_value = ['fake'] + ret = self.commands.discover_hosts(by_service=True, strict=True) + self.assertEqual(0, ret) + mock_discover_hosts.assert_called_once_with(mock.ANY, None, + mock.ANY, + True) + def test_validate_transport_url_in_conf(self): from_conf = 'fake://user:pass@host:port/' self.flags(transport_url=from_conf) diff -Nru nova-16.1.0/nova/tests/unit/virt/disk/vfs/test_guestfs.py nova-16.1.2/nova/tests/unit/virt/disk/vfs/test_guestfs.py --- nova-16.1.0/nova/tests/unit/virt/disk/vfs/test_guestfs.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/virt/disk/vfs/test_guestfs.py 2018-04-25 09:22:25.000000000 +0000 @@ -335,7 +335,8 @@ m.launch.side_effect = Exception vfs = vfsimpl.VFSGuestFS(self.qcowfile) mock_access.return_value = False - with mock.patch('eventlet.tpool.Proxy', return_value=m): + self.flags(debug=False, group='guestfs') + with mock.patch('eventlet.tpool.Proxy', return_value=m) as tpool_mock: self.assertRaises(exception.LibguestfsCannotReadKernel, vfs.inspect_capabilities) m.add_drive.assert_called_once_with('/dev/null') @@ -343,3 +344,17 @@ mock_access.assert_called_once_with('/boot/vmlinuz-kernel_name', mock.ANY) mock_uname.assert_called_once_with() + self.assertEqual(1, tpool_mock.call_count) + + def test_appliance_setup_inspect_capabilties_debug_mode(self): + """Asserts that we do not use an eventlet thread pool when guestfs + debug logging is enabled. + """ + # We can't actually mock guestfs.GuestFS because it's an optional + # native package import. All we really care about here is that + # eventlet isn't used. 
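Both guestfs tests here hinge on the same rule: only wrap the libguestfs handle in the eventlet thread pool proxy when [guestfs]/debug is off, since the debug log callbacks do not mix well with native threads. A tiny standalone sketch of that decision, with a fake handle and an injected proxy standing in for eventlet.tpool.Proxy:

from unittest import mock


def make_handle(debug_enabled, raw_factory, proxy):
    # With guestfs debug logging enabled the handle stays unproxied so its
    # log callbacks run in the calling thread; otherwise it goes through the
    # thread pool proxy (eventlet.tpool.Proxy in the real driver).
    handle = raw_factory()
    return handle if debug_enabled else proxy(handle)


proxy = mock.Mock(side_effect=lambda h: ('proxied', h))
assert make_handle(False, object, proxy)[0] == 'proxied'
assert proxy.call_count == 1
assert not isinstance(make_handle(True, object, proxy), tuple)
assert proxy.call_count == 1  # debug mode never touched the proxy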
+ self.flags(debug=True, group='guestfs') + vfs = vfsimpl.VFSGuestFS(self.qcowfile) + with mock.patch('eventlet.tpool.Proxy', + new_callable=mock.NonCallableMock): + vfs.inspect_capabilities() diff -Nru nova-16.1.0/nova/tests/unit/virt/ironic/test_driver.py nova-16.1.2/nova/tests/unit/virt/ironic/test_driver.py --- nova-16.1.0/nova/tests/unit/virt/ironic/test_driver.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/virt/ironic/test_driver.py 2018-04-25 09:22:33.000000000 +0000 @@ -734,10 +734,13 @@ self.assertEqual(sorted(expected_uuids), sorted(available_nodes)) @mock.patch.object(ironic_driver.IronicDriver, + '_node_resources_used', return_value=False) + @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') - def test_get_inventory_no_rc(self, mock_nfc, mock_nr, mock_res_unavail): + def test_get_inventory_no_rc(self, mock_nfc, mock_nr, mock_res_unavail, + mock_res_used): """Ensure that when node.resource_class is missing, that we return the legacy VCPU, MEMORY_MB and DISK_GB resources for inventory. """ @@ -781,14 +784,18 @@ } mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) + mock_res_used.assert_called_once_with(mock_nfc.return_value) mock_res_unavail.assert_called_once_with(mock_nfc.return_value) self.assertEqual(expected, result) @mock.patch.object(ironic_driver.IronicDriver, + '_node_resources_used', return_value=False) + @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') - def test_get_inventory_with_rc(self, mock_nfc, mock_nr, mock_res_unavail): + def test_get_inventory_with_rc(self, mock_nfc, mock_nr, mock_res_unavail, + mock_res_used): """Ensure that when node.resource_class is present, that we return the legacy VCPU, MEMORY_MB and DISK_GB resources for inventory in addition to the custom resource class inventory record. @@ -841,14 +848,18 @@ } mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) + mock_res_used.assert_called_once_with(mock_nfc.return_value) mock_res_unavail.assert_called_once_with(mock_nfc.return_value) self.assertEqual(expected, result) @mock.patch.object(ironic_driver.IronicDriver, + '_node_resources_used', return_value=False) + @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') - def test_get_inventory_only_rc(self, mock_nfc, mock_nr, mock_res_unavail): + def test_get_inventory_only_rc(self, mock_nfc, mock_nr, mock_res_unavail, + mock_res_used): """Ensure that when node.resource_class is present, that we return the legacy VCPU, MEMORY_MB and DISK_GB resources for inventory in addition to the custom resource class inventory record. 
@@ -877,15 +888,18 @@ } mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) + mock_res_used.assert_called_once_with(mock_nfc.return_value) mock_res_unavail.assert_called_once_with(mock_nfc.return_value) self.assertEqual(expected, result) @mock.patch.object(ironic_driver.IronicDriver, + '_node_resources_used', return_value=True) + @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=False) @mock.patch.object(ironic_driver.IronicDriver, '_node_resource') @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') def test_get_inventory_with_rc_occupied(self, mock_nfc, mock_nr, - mock_res_unavail): + mock_res_unavail, mock_res_used): """Ensure that when a node is used, we report the inventory matching the consumed resources. """ @@ -937,18 +951,23 @@ } mock_nfc.assert_called_once_with(mock.sentinel.nodename) mock_nr.assert_called_once_with(mock_nfc.return_value) - mock_res_unavail.assert_called_once_with(mock_nfc.return_value) + mock_res_used.assert_called_once_with(mock_nfc.return_value) + self.assertFalse(mock_res_unavail.called) self.assertEqual(expected, result) @mock.patch.object(ironic_driver.IronicDriver, + '_node_resources_used', return_value=False) + @mock.patch.object(ironic_driver.IronicDriver, '_node_resources_unavailable', return_value=True) @mock.patch.object(ironic_driver.IronicDriver, '_node_from_cache') - def test_get_inventory_disabled_node(self, mock_nfc, mock_res_unavail): + def test_get_inventory_disabled_node(self, mock_nfc, mock_res_unavail, + mock_res_used): """Ensure that when a node is disabled, that get_inventory() returns an empty dict. """ result = self.driver.get_inventory(mock.sentinel.nodename) mock_nfc.assert_called_once_with(mock.sentinel.nodename) + mock_res_used.assert_called_once_with(mock_nfc.return_value) mock_res_unavail.assert_called_once_with(mock_nfc.return_value) self.assertEqual({}, result) diff -Nru nova-16.1.0/nova/tests/unit/virt/libvirt/test_config.py nova-16.1.2/nova/tests/unit/virt/libvirt/test_config.py --- nova-16.1.0/nova/tests/unit/virt/libvirt/test_config.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/virt/libvirt/test_config.py 2018-04-25 09:22:33.000000000 +0000 @@ -269,6 +269,15 @@ """) + def test_config_simple_pcid(self): + obj = config.LibvirtConfigGuestCPUFeature("pcid") + obj.policy = "require" + + xml = obj.to_xml() + self.assertXmlEqual(xml, """ + + """) + class LibvirtConfigGuestCPUNUMATest(LibvirtConfigBaseTest): diff -Nru nova-16.1.0/nova/tests/unit/virt/libvirt/test_driver.py nova-16.1.2/nova/tests/unit/virt/libvirt/test_driver.py --- nova-16.1.0/nova/tests/unit/virt/libvirt/test_driver.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/virt/libvirt/test_driver.py 2018-04-25 09:22:33.000000000 +0000 @@ -1288,6 +1288,39 @@ mock_guest.set_user_password.assert_called_once_with("root", "123") + @mock.patch('nova.objects.Instance.save') + @mock.patch('oslo_serialization.base64.encode_as_text') + @mock.patch('nova.api.metadata.password.convert_password') + @mock.patch('nova.crypto.ssh_encrypt_text') + @mock.patch('nova.utils.get_image_from_system_metadata') + @mock.patch.object(host.Host, + 'has_min_version', return_value=True) + @mock.patch('nova.virt.libvirt.host.Host.get_guest') + def test_set_admin_password_saves_sysmeta(self, mock_get_guest, + ver, mock_image, mock_encrypt, + mock_convert, mock_encode, + mock_save): + self.flags(virt_type='kvm', group='libvirt') + instance = 
objects.Instance(**self.test_instance) + # Password will only be saved in sysmeta if the key_data is present + instance.key_data = 'ssh-rsa ABCFEFG' + mock_image.return_value = {"properties": { + "hw_qemu_guest_agent": "yes"}} + mock_guest = mock.Mock(spec=libvirt_guest.Guest) + mock_get_guest.return_value = mock_guest + mock_convert.return_value = {'password_0': 'converted-password'} + + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) + drvr.set_admin_password(instance, "123") + + mock_guest.set_user_password.assert_called_once_with("root", "123") + mock_encrypt.assert_called_once_with(instance.key_data, '123') + mock_encode.assert_called_once_with(mock_encrypt.return_value) + mock_convert.assert_called_once_with(None, mock_encode.return_value) + self.assertEqual('converted-password', + instance.system_metadata['password_0']) + mock_save.assert_called_once_with() + @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.host.Host.get_guest') @@ -1390,8 +1423,11 @@ mock_get_guest.return_value = mock_guest drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) - self.assertRaises(exception.NovaException, - drvr.set_admin_password, instance, "123") + with mock.patch.object( + drvr, '_save_instance_password_if_sshkey_present') as save_p: + self.assertRaises(exception.NovaException, + drvr.set_admin_password, instance, "123") + save_p.assert_not_called() @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch.object(host.Host, @@ -2977,8 +3013,7 @@ id=1, cpuset=set([2, 3]), memory=1024, pagesize=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, - cpu_pinning={2: 7, 3: 8}, - cpuset_reserved=set([]))]) + cpu_pinning={2: 7, 3: 8})]) instance_ref = objects.Instance(**self.test_instance) instance_ref.numa_topology = instance_topology @@ -3604,6 +3639,158 @@ self.assertEqual(cfg.devices[4].model, 'virtio-scsi') mock_save.assert_called_with() + def test_get_guest_config_one_scsi_volume_with_configdrive(self): + """Tests that the unit attribute is only incremented for block devices + that have a scsi bus. Unit numbering should begin at 0 since we are not + booting from volume. + """ + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) + + image_meta = objects.ImageMeta.from_dict({ + "disk_format": "raw", + "properties": {"hw_scsi_model": "virtio-scsi", + "hw_disk_bus": "scsi"}}) + instance_ref = objects.Instance(**self.test_instance) + instance_ref.config_drive = 'True' + conn_info = {'driver_volume_type': 'fake'} + bdms = block_device_obj.block_device_make_list_from_dicts( + self.context, [ + fake_block_device.FakeDbBlockDeviceDict( + {'id': 1, + 'source_type': 'volume', 'destination_type': 'volume', + 'device_name': '/dev/sdc', 'disk_bus': 'scsi'}), + ] + ) + bd_info = { + 'block_device_mapping': driver_block_device.convert_volumes(bdms)} + bd_info['block_device_mapping'][0]['connection_info'] = conn_info + + disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, + instance_ref, + image_meta, + bd_info) + with mock.patch.object( + driver_block_device.DriverVolumeBlockDevice, 'save'): + cfg = drvr._get_guest_config(instance_ref, [], image_meta, + disk_info, [], bd_info) + + # The device order is determined by the order that devices are + # appended in _get_guest_storage_config in the driver. + + # The first device will be the instance's local disk (since we're + # not booting from volume). It should begin unit numbering at 0. 
+ self.assertIsInstance(cfg.devices[0], + vconfig.LibvirtConfigGuestDisk) + self.assertIn('disk', cfg.devices[0].source_path) + self.assertEqual('sda', cfg.devices[0].target_dev) + self.assertEqual('scsi', cfg.devices[0].target_bus) + self.assertEqual(0, cfg.devices[0].device_addr.unit) + + # The second device will be the ephemeral disk + # (the flavor in self.test_instance has ephemeral_gb > 0). + # It should have the next unit number of 1. + self.assertIsInstance(cfg.devices[1], + vconfig.LibvirtConfigGuestDisk) + self.assertIn('disk.local', cfg.devices[1].source_path) + self.assertEqual('sdb', cfg.devices[1].target_dev) + self.assertEqual('scsi', cfg.devices[1].target_bus) + self.assertEqual(1, cfg.devices[1].device_addr.unit) + + # This is the config drive. It should not have unit number set. + self.assertIsInstance(cfg.devices[2], + vconfig.LibvirtConfigGuestDisk) + self.assertIn('disk.config', cfg.devices[2].source_path) + self.assertEqual('hda', cfg.devices[2].target_dev) + self.assertEqual('ide', cfg.devices[2].target_bus) + self.assertIsNone(cfg.devices[2].device_addr) + + # And this is the attached volume. + self.assertIsInstance(cfg.devices[3], + vconfig.LibvirtConfigGuestDisk) + self.assertEqual('sdc', cfg.devices[3].target_dev) + self.assertEqual('scsi', cfg.devices[3].target_bus) + self.assertEqual(2, cfg.devices[3].device_addr.unit) + + def test_get_guest_config_boot_from_volume_with_configdrive(self): + """Tests that the unit attribute is only incremented for block devices + that have a scsi bus and that the bootable volume in a boot-from-volume + scenario always has the unit set to 0. + """ + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) + + image_meta = objects.ImageMeta.from_dict({ + "disk_format": "raw", + "properties": {"hw_scsi_model": "virtio-scsi", + "hw_disk_bus": "scsi"}}) + instance_ref = objects.Instance(**self.test_instance) + instance_ref.config_drive = 'True' + conn_info = {'driver_volume_type': 'fake'} + bdms = block_device_obj.block_device_make_list_from_dicts( + self.context, [ + # This is the boot volume (boot_index = 0). + fake_block_device.FakeDbBlockDeviceDict( + {'id': 1, + 'source_type': 'volume', 'destination_type': 'volume', + 'device_name': '/dev/sda', 'boot_index': 0}), + # This is just another attached volume. + fake_block_device.FakeDbBlockDeviceDict( + {'id': 2, + 'source_type': 'volume', 'destination_type': 'volume', + 'device_name': '/dev/sdc', 'disk_bus': 'scsi'}), + ] + ) + bd_info = { + 'block_device_mapping': driver_block_device.convert_volumes(bdms)} + bd_info['block_device_mapping'][0]['connection_info'] = conn_info + bd_info['block_device_mapping'][1]['connection_info'] = conn_info + + disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, + instance_ref, + image_meta, + bd_info) + with mock.patch.object( + driver_block_device.DriverVolumeBlockDevice, 'save'): + cfg = drvr._get_guest_config(instance_ref, [], image_meta, + disk_info, [], bd_info) + + # The device order is determined by the order that devices are + # appended in _get_guest_storage_config in the driver. + + # The first device will be the ephemeral disk + # (the flavor in self.test_instance has ephemeral_gb > 0). + # It should begin unit numbering at 1 because 0 is reserved for the + # boot volume for boot-from-volume. 
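Both configdrive tests encode the same unit-numbering rule for SCSI devices. A compact standalone model of that rule; the device dicts and helper name are illustrative, not the driver's actual code paths:

def assign_scsi_units(devices, boot_from_volume=False):
    # Only devices on the scsi bus get a unit number; unit 0 is reserved for
    # the boot volume when booting from volume, otherwise numbering starts
    # at 0 with the first scsi device.
    next_unit = 1 if boot_from_volume else 0
    out = []
    for dev in devices:
        if dev.get('bus') != 'scsi':
            out.append(dict(dev, unit=None))
        elif boot_from_volume and dev.get('boot_index') == 0:
            out.append(dict(dev, unit=0))
        else:
            out.append(dict(dev, unit=next_unit))
            next_unit += 1
    return out


# Mirrors the order asserted above: local disk, ephemeral, config drive, volume.
local_boot = assign_scsi_units([
    {'name': 'disk', 'bus': 'scsi'},
    {'name': 'disk.local', 'bus': 'scsi'},
    {'name': 'disk.config', 'bus': 'ide'},
    {'name': 'volume', 'bus': 'scsi'},
])
assert [d['unit'] for d in local_boot] == [0, 1, None, 2]

# Boot-from-volume order: ephemeral, config drive, boot volume, second volume.
bfv = assign_scsi_units([
    {'name': 'disk.local', 'bus': 'scsi'},
    {'name': 'disk.config', 'bus': 'ide'},
    {'name': 'boot volume', 'bus': 'scsi', 'boot_index': 0},
    {'name': 'volume', 'bus': 'scsi'},
], boot_from_volume=True)
assert [d['unit'] for d in bfv] == [1, None, 0, 2]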
+ self.assertIsInstance(cfg.devices[0], + vconfig.LibvirtConfigGuestDisk) + self.assertIn('disk.local', cfg.devices[0].source_path) + self.assertEqual('sdb', cfg.devices[0].target_dev) + self.assertEqual('scsi', cfg.devices[0].target_bus) + self.assertEqual(1, cfg.devices[0].device_addr.unit) + + # The second device will be the config drive. It should not have a + # unit number set. + self.assertIsInstance(cfg.devices[1], + vconfig.LibvirtConfigGuestDisk) + self.assertIn('disk.config', cfg.devices[1].source_path) + self.assertEqual('hda', cfg.devices[1].target_dev) + self.assertEqual('ide', cfg.devices[1].target_bus) + self.assertIsNone(cfg.devices[1].device_addr) + + # The third device will be the boot volume. It should have a + # unit number of 0. + self.assertIsInstance(cfg.devices[2], + vconfig.LibvirtConfigGuestDisk) + self.assertEqual('sda', cfg.devices[2].target_dev) + self.assertEqual('scsi', cfg.devices[2].target_bus) + self.assertEqual(0, cfg.devices[2].device_addr.unit) + + # The fourth device will be the other attached volume. + self.assertIsInstance(cfg.devices[3], + vconfig.LibvirtConfigGuestDisk) + self.assertEqual('sdc', cfg.devices[3].target_dev) + self.assertEqual('scsi', cfg.devices[3].target_bus) + self.assertEqual(2, cfg.devices[3].device_addr.unit) + def test_get_guest_config_with_vnc(self): self.flags(enabled=True, vncserver_listen='10.0.0.1', @@ -4127,8 +4314,9 @@ @mock.patch.object(dmcrypt, 'delete_volume') @mock.patch.object(conn._host, 'get_domain', return_value=dom) - def detach_encrypted_volumes(block_device_info, mock_get_domain, - mock_delete_volume): + @mock.patch.object(libvirt_driver.disk_api, 'get_allocated_disk_size') + def detach_encrypted_volumes(block_device_info, mock_get_alloc_size, + mock_get_domain, mock_delete_volume): conn._detach_encrypted_volumes(instance, block_device_info) mock_get_domain.assert_called_once_with(instance) @@ -5891,6 +6079,83 @@ self.assertEqual(conf.cpu.cores, 1) self.assertEqual(conf.cpu.threads, 1) + @mock.patch.object(libvirt_driver.LOG, 'warning') + def test_get_guest_cpu_config_custom_with_extra_flags(self, + mock_warn): + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) + instance_ref = objects.Instance(**self.test_instance) + image_meta = objects.ImageMeta.from_dict(self.test_image_meta) + + self.flags(cpu_mode="custom", + cpu_model="IvyBridge", + cpu_model_extra_flags="pcid", + group='libvirt') + disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, + instance_ref, + image_meta) + conf = drvr._get_guest_config(instance_ref, + _fake_network_info(self, 1), + image_meta, disk_info) + self.assertIsInstance(conf.cpu, + vconfig.LibvirtConfigGuestCPU) + self.assertEqual(conf.cpu.mode, "custom") + self.assertEqual(conf.cpu.model, "IvyBridge") + self.assertIn(conf.cpu.features.pop().name, "pcid") + self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus) + self.assertEqual(conf.cpu.cores, 1) + self.assertEqual(conf.cpu.threads, 1) + self.assertFalse(mock_warn.called) + + @mock.patch.object(libvirt_driver.LOG, 'warning') + def test_get_guest_cpu_config_host_model_with_extra_flags(self, + mock_warn): + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) + instance_ref = objects.Instance(**self.test_instance) + image_meta = objects.ImageMeta.from_dict(self.test_image_meta) + + self.flags(cpu_mode="host-model", + cpu_model_extra_flags="pcid", + group='libvirt') + disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, + instance_ref, + image_meta) + conf = drvr._get_guest_config(instance_ref, 
+ _fake_network_info(self, 1), + image_meta, disk_info) + self.assertIsInstance(conf.cpu, + vconfig.LibvirtConfigGuestCPU) + self.assertEqual(conf.cpu.mode, "host-model") + self.assertEqual(len(conf.cpu.features), 0) + self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus) + self.assertEqual(conf.cpu.cores, 1) + self.assertEqual(conf.cpu.threads, 1) + self.assertTrue(mock_warn.called) + + @mock.patch.object(libvirt_driver.LOG, 'warning') + def test_get_guest_cpu_config_host_passthrough_with_extra_flags(self, + mock_warn): + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) + instance_ref = objects.Instance(**self.test_instance) + image_meta = objects.ImageMeta.from_dict(self.test_image_meta) + + self.flags(cpu_mode="host-passthrough", + cpu_model_extra_flags="pcid", + group='libvirt') + disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, + instance_ref, + image_meta) + conf = drvr._get_guest_config(instance_ref, + _fake_network_info(self, 1), + image_meta, disk_info) + self.assertIsInstance(conf.cpu, + vconfig.LibvirtConfigGuestCPU) + self.assertEqual(conf.cpu.mode, "host-passthrough") + self.assertEqual(len(conf.cpu.features), 0) + self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus) + self.assertEqual(conf.cpu.cores, 1) + self.assertEqual(conf.cpu.threads, 1) + self.assertTrue(mock_warn.called) + def test_get_guest_cpu_topology(self): instance_ref = objects.Instance(**self.test_instance) instance_ref.flavor.vcpus = 8 @@ -6707,8 +6972,10 @@ mock_disconnect_volume.assert_called_with( connection_info, 'vdc', instance) + @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') @mock.patch('nova.virt.libvirt.host.Host.get_domain') - def test_detach_volume_disk_not_found(self, mock_get_domain): + def test_detach_volume_disk_not_found(self, mock_get_domain, + mock_disconnect_volume): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) mock_xml_without_disk = """ @@ -6724,10 +6991,115 @@ mock_dom.info.return_value = [power_state.RUNNING, 512, 512, 2, 1234, 5678] mock_get_domain.return_value = mock_dom - self.assertRaises(exception.DiskNotFound, drvr.detach_volume, - connection_info, instance, '/dev/vdc') + + drvr.detach_volume(connection_info, instance, '/dev/vdc') mock_get_domain.assert_called_once_with(instance) + mock_disconnect_volume.assert_called_once_with( + connection_info, 'vdc', instance) + + @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') + @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') + @mock.patch('nova.virt.libvirt.host.Host.get_domain') + def test_detach_volume_disk_not_found_encryption(self, mock_get_domain, + mock_disconnect_volume, + mock_get_encryptor): + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) + instance = objects.Instance(**self.test_instance) + mock_xml_without_disk = """ + + +""" + mock_dom = mock.MagicMock(return_value=mock_xml_without_disk) + encryption = {"provider": "NoOpEncryptor"} + mock_encryptor = mock.MagicMock(spec=encryptors.nop.NoOpEncryptor) + mock_get_encryptor.return_value = mock_encryptor + + connection_info = {"driver_volume_type": "fake", + "data": {"device_path": "/fake", + "access_mode": "rw"}} + + mock_dom.info.return_value = [power_state.RUNNING, 512, 512, 2, 1234, + 5678] + mock_get_domain.return_value = mock_dom + + drvr.detach_volume(connection_info, instance, '/dev/vdc', + encryption) + mock_get_encryptor.assert_called_once_with(connection_info, encryption) + 
mock_encryptor.detach_volume.assert_called_once_with(**encryption) + mock_disconnect_volume.assert_called_once_with( + connection_info, 'vdc', instance) + + @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') + @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') + @mock.patch('nova.virt.libvirt.host.Host.get_domain') + def test_detach_volume_disk_not_found_encryption_err(self, mock_get_domain, + mock_disconnect_volume, + mock_get_encryptor): + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) + instance = objects.Instance(**self.test_instance) + mock_xml_without_disk = """ + + +""" + mock_dom = mock.MagicMock(return_value=mock_xml_without_disk) + encryption = {"provider": "NoOpEncryptor"} + mock_encryptor = mock.MagicMock(spec=encryptors.nop.NoOpEncryptor) + mock_encryptor.detach_volume = mock.MagicMock( + side_effect=processutils.ProcessExecutionError(exit_code=4) + ) + mock_get_encryptor.return_value = mock_encryptor + + connection_info = {"driver_volume_type": "fake", + "data": {"device_path": "/fake", + "access_mode": "rw"}} + + mock_dom.info.return_value = [power_state.RUNNING, 512, 512, 2, 1234, + 5678] + mock_get_domain.return_value = mock_dom + + drvr.detach_volume(connection_info, instance, '/dev/vdc', + encryption) + mock_get_encryptor.assert_called_once_with(connection_info, encryption) + mock_encryptor.detach_volume.assert_called_once_with(**encryption) + mock_disconnect_volume.assert_called_once_with( + connection_info, 'vdc', instance) + + @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') + @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') + @mock.patch('nova.virt.libvirt.host.Host.get_domain') + def test_detach_volume_disk_not_found_encryption_err_reraise( + self, mock_get_domain, mock_disconnect_volume, + mock_get_encryptor): + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) + instance = objects.Instance(**self.test_instance) + mock_xml_without_disk = """ + + +""" + mock_dom = mock.MagicMock(return_value=mock_xml_without_disk) + encryption = {"provider": "NoOpEncryptor"} + mock_encryptor = mock.MagicMock(spec=encryptors.nop.NoOpEncryptor) + # Any nonzero exit code other than 4 would work here. 
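These two tests pin down how the disk-not-found detach path treats encryptor failures. A minimal sketch of that rule; ProcessExecutionError here is a local stub rather than the oslo.concurrency class, and detach_encryptor is an illustrative name:

from unittest import mock


class ProcessExecutionError(Exception):
    # Local stand-in for oslo's ProcessExecutionError, carrying exit_code.
    def __init__(self, exit_code):
        super(ProcessExecutionError, self).__init__('exit code %d' % exit_code)
        self.exit_code = exit_code


def detach_encryptor(encryptor, **encryption):
    # Exit status 4 is treated as "the device is already gone" and swallowed;
    # any other failure propagates so the detach is not silently skipped.
    try:
        encryptor.detach_volume(**encryption)
    except ProcessExecutionError as exc:
        if exc.exit_code != 4:
            raise


gone = mock.Mock(detach_volume=mock.Mock(
    side_effect=ProcessExecutionError(exit_code=4)))
detach_encryptor(gone, provider='NoOpEncryptor')          # tolerated

broken = mock.Mock(detach_volume=mock.Mock(
    side_effect=ProcessExecutionError(exit_code=1)))
try:
    detach_encryptor(broken, provider='NoOpEncryptor')
    raise AssertionError('exit code 1 should have been re-raised')
except ProcessExecutionError:
    pass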
+ mock_encryptor.detach_volume = mock.MagicMock( + side_effect=processutils.ProcessExecutionError(exit_code=1) + ) + mock_get_encryptor.return_value = mock_encryptor + + connection_info = {"driver_volume_type": "fake", + "data": {"device_path": "/fake", + "access_mode": "rw"}} + + mock_dom.info.return_value = [power_state.RUNNING, 512, 512, 2, 1234, + 5678] + mock_get_domain.return_value = mock_dom + + self.assertRaises(processutils.ProcessExecutionError, + drvr.detach_volume, connection_info, instance, + '/dev/vdc', encryption) + mock_get_encryptor.assert_called_once_with(connection_info, encryption) + mock_encryptor.detach_volume.assert_called_once_with(**encryption) + mock_disconnect_volume.assert_not_called() @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') @@ -8046,6 +8418,8 @@ is_shared_instance_path=False) with test.nested( mock.patch.object(os.path, 'getsize', mock_getsize), + mock.patch.object(libvirt_driver.disk_api, + 'get_allocated_disk_size', mock_getsize), mock.patch.object(host.Host, 'get_domain', mock_lookup)): self.assertFalse(drvr._is_shared_block_storage( instance, data, @@ -10513,9 +10887,14 @@ fake_libvirt_utils.disk_sizes['/test/disk.local'] = 20 * units.Gi fake_libvirt_utils.disk_backing_files['/test/disk.local'] = 'file' - self.mox.StubOutWithMock(os.path, "getsize") - os.path.getsize('/test/disk').AndReturn((10737418240)) - os.path.getsize('/test/disk.local').AndReturn((3328599655)) + self.mox.StubOutWithMock(libvirt_driver.disk_api, + 'get_allocated_disk_size') + path = '/test/disk' + size = 10737418240 + libvirt_driver.disk_api.get_allocated_disk_size(path).AndReturn((size)) + path = '/test/disk.local' + size = 3328599655 + libvirt_driver.disk_api.get_allocated_disk_size(path).AndReturn((size)) ret = ("image: /test/disk.local\n" "file format: qcow2\n" @@ -10623,9 +11002,14 @@ fake_libvirt_utils.disk_sizes['/test/disk.local'] = 20 * units.Gi fake_libvirt_utils.disk_backing_files['/test/disk.local'] = 'file' - self.mox.StubOutWithMock(os.path, "getsize") - os.path.getsize('/test/disk').AndReturn((10737418240)) - os.path.getsize('/test/disk.local').AndReturn((3328599655)) + self.mox.StubOutWithMock(libvirt_driver.disk_api, + 'get_allocated_disk_size') + path = '/test/disk' + size = 10737418240 + libvirt_driver.disk_api.get_allocated_disk_size(path).AndReturn((size)) + path = '/test/disk.local' + size = 3328599655 + libvirt_driver.disk_api.get_allocated_disk_size(path).AndReturn((size)) ret = ("image: /test/disk.local\n" "file format: qcow2\n" @@ -10689,8 +11073,11 @@ fake_libvirt_utils.disk_sizes['/test/disk'] = 10 * units.Gi - self.mox.StubOutWithMock(os.path, "getsize") - os.path.getsize('/test/disk').AndReturn((10737418240)) + self.mox.StubOutWithMock(libvirt_driver.disk_api, + "get_allocated_disk_size") + path = '/test/disk' + size = 10737418240 + libvirt_driver.disk_api.get_allocated_disk_size(path).AndReturn((size)) self.mox.ReplayAll() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) @@ -10699,8 +11086,8 @@ info = jsonutils.loads(info) self.assertEqual(1, len(info)) self.assertEqual(info[0]['type'], 'raw') - self.assertEqual(info[0]['path'], '/test/disk') - self.assertEqual(info[0]['disk_size'], 10737418240) + self.assertEqual(info[0]['path'], path) + self.assertEqual(info[0]['disk_size'], size) self.assertEqual(info[0]['backing_file'], "") self.assertEqual(info[0]['over_committed_disk_size'], 0) @@ -11169,6 +11556,14 @@ ] 
self.assertEqual(wantFiles, gotFiles) + def test_injection_info_is_sanitized(self): + info = get_injection_info( + network_info=mock.sentinel.network_info, + files=mock.sentinel.files, + admin_pass='verybadpass') + self.assertNotIn('verybadpass', str(info)) + self.assertNotIn('verybadpass', repr(info)) + @mock.patch( 'nova.virt.libvirt.driver.LibvirtDriver._build_device_metadata') @mock.patch('nova.api.metadata.base.InstanceMetadata') @@ -15155,6 +15550,26 @@ def test_cleanup_encryption_volume_detach_failed(self): self._test_cleanup_encryption_process_execution_error(not_found=False) + @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_volume_encryption') + def test_swap_volume_native_luks_blocked(self, mock_get_encryption): + drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) + + # dest volume is encrypted + mock_get_encryption.side_effect = [{}, {'provider': 'luks'}] + self.assertRaises(NotImplementedError, drvr.swap_volume, self.context, + {}, {}, None, None, None) + + # src volume is encrypted + mock_get_encryption.side_effect = [{'provider': 'luks'}, {}] + self.assertRaises(NotImplementedError, drvr.swap_volume, self.context, + {}, {}, None, None, None) + + # both volumes are encrypted + mock_get_encryption.side_effect = [{'provider': 'luks'}, + {'provider': 'luks'}] + self.assertRaises(NotImplementedError, drvr.swap_volume, self.context, + {}, {}, None, None, None) + @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete', return_value=True) def _test_swap_volume(self, mock_is_job_complete, source_type, @@ -15284,8 +15699,8 @@ conf = mock.MagicMock(source_path='/fake-new-volume') get_volume_config.return_value = conf - conn.swap_volume(old_connection_info, new_connection_info, instance, - '/dev/vdb', 1) + conn.swap_volume(self.context, old_connection_info, + new_connection_info, instance, '/dev/vdb', 1) get_guest.assert_called_once_with(instance) connect_volume.assert_called_once_with(new_connection_info, disk_info, @@ -15304,6 +15719,7 @@ def test_swap_volume_driver_source_is_snapshot(self): self._test_swap_volume_driver(source_type='snapshot') + @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_volume_encryption') @mock.patch('nova.virt.libvirt.guest.BlockDevice.rebase') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._connect_volume') @@ -15313,7 +15729,8 @@ @mock.patch('nova.virt.libvirt.host.Host.write_instance_config') def test_swap_volume_disconnect_new_volume_on_rebase_error(self, write_config, get_guest, get_disk, get_volume_config, - connect_volume, disconnect_volume, rebase): + connect_volume, disconnect_volume, rebase, + get_volume_encryption): """Assert that disconnect_volume is called for the new volume if an error is encountered while rebasing """ @@ -15321,12 +15738,13 @@ instance = objects.Instance(**self.test_instance) guest = libvirt_guest.Guest(mock.MagicMock()) get_guest.return_value = guest + get_volume_encryption.return_value = {} exc = fakelibvirt.make_libvirtError(fakelibvirt.libvirtError, 'internal error', error_code=fakelibvirt.VIR_ERR_INTERNAL_ERROR) rebase.side_effect = exc self.assertRaises(exception.VolumeRebaseFailed, conn.swap_volume, - mock.sentinel.old_connection_info, + self.context, mock.sentinel.old_connection_info, mock.sentinel.new_connection_info, instance, '/dev/vdb', 0) connect_volume.assert_called_once_with( @@ -15335,6 +15753,7 @@ disconnect_volume.assert_called_once_with( mock.sentinel.new_connection_info, 'vdb', instance) + 
@mock.patch.object(libvirt_driver.LibvirtDriver, '_get_volume_encryption') @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') @mock.patch('nova.virt.libvirt.guest.BlockDevice.abort_job') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') @@ -15345,7 +15764,8 @@ @mock.patch('nova.virt.libvirt.host.Host.write_instance_config') def test_swap_volume_disconnect_new_volume_on_pivot_error(self, write_config, get_guest, get_disk, get_volume_config, - connect_volume, disconnect_volume, abort_job, is_job_complete): + connect_volume, disconnect_volume, abort_job, is_job_complete, + get_volume_encryption): """Assert that disconnect_volume is called for the new volume if an error is encountered while pivoting to the new volume """ @@ -15353,13 +15773,14 @@ instance = objects.Instance(**self.test_instance) guest = libvirt_guest.Guest(mock.MagicMock()) get_guest.return_value = guest + get_volume_encryption.return_value = {} exc = fakelibvirt.make_libvirtError(fakelibvirt.libvirtError, 'internal error', error_code=fakelibvirt.VIR_ERR_INTERNAL_ERROR) is_job_complete.return_value = True abort_job.side_effect = [None, exc] self.assertRaises(exception.VolumeRebaseFailed, conn.swap_volume, - mock.sentinel.old_connection_info, + self.context, mock.sentinel.old_connection_info, mock.sentinel.new_connection_info, instance, '/dev/vdb', 0) connect_volume.assert_called_once_with( diff -Nru nova-16.1.0/nova/tests/unit/virt/test_block_device.py nova-16.1.2/nova/tests/unit/virt/test_block_device.py --- nova-16.1.0/nova/tests/unit/virt/test_block_device.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/virt/test_block_device.py 2018-04-25 09:22:34.000000000 +0000 @@ -222,6 +222,24 @@ self.blank_bdm = fake_block_device.fake_bdm_object( self.context, self.blank_bdm_dict) + @mock.patch('nova.virt.block_device.LOG') + @mock.patch('os_brick.encryptors') + def test_driver_detach_passes_failed(self, enc, log): + virt = mock.MagicMock() + virt.detach_volume.side_effect = exception.DeviceDetachFailed( + device='sda', reason='because testing') + driver_bdm = self.driver_classes['volume'](self.volume_bdm) + inst = mock.MagicMock(), + vol_api = mock.MagicMock() + + # Make sure we pass through DeviceDetachFailed, + # but don't log it as an exception, just a warning + self.assertRaises(exception.DeviceDetachFailed, + driver_bdm.driver_detach, + self.context, inst, vol_api, virt) + self.assertFalse(log.exception.called) + self.assertTrue(log.warning.called) + def test_no_device_raises(self): for name, cls in self.driver_classes.items(): bdm = fake_block_device.fake_bdm_object( @@ -1107,3 +1125,28 @@ # can't assert_not_called if the method isn't in the spec. 
self.assertFalse(hasattr(test_eph, 'refresh_connection_info')) self.assertFalse(hasattr(test_swap, 'refresh_connection_info')) + + +class TestGetVolumeId(test.NoDBTestCase): + + def test_get_volume_id_none_found(self): + self.assertIsNone(driver_block_device.get_volume_id(None)) + self.assertIsNone(driver_block_device.get_volume_id({})) + self.assertIsNone(driver_block_device.get_volume_id({'data': {}})) + + def test_get_volume_id_found_volume_id_no_serial(self): + self.assertEqual(uuids.volume_id, + driver_block_device.get_volume_id( + {'data': {'volume_id': uuids.volume_id}})) + + def test_get_volume_id_found_no_volume_id_serial(self): + self.assertEqual(uuids.serial, + driver_block_device.get_volume_id( + {'serial': uuids.serial})) + + def test_get_volume_id_found_both(self): + # volume_id is taken over serial + self.assertEqual(uuids.volume_id, + driver_block_device.get_volume_id( + {'serial': uuids.serial, + 'data': {'volume_id': uuids.volume_id}})) diff -Nru nova-16.1.0/nova/tests/unit/virt/test_virt_drivers.py nova-16.1.2/nova/tests/unit/virt/test_virt_drivers.py --- nova-16.1.0/nova/tests/unit/virt/test_virt_drivers.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/tests/unit/virt/test_virt_drivers.py 2018-04-25 09:22:34.000000000 +0000 @@ -492,7 +492,7 @@ instance_ref, '/dev/sda')) self.assertIsNone( - self.connection.swap_volume({'driver_volume_type': 'fake', + self.connection.swap_volume(None, {'driver_volume_type': 'fake', 'data': {}}, {'driver_volume_type': 'fake', 'data': {}}, diff -Nru nova-16.1.0/nova/virt/block_device.py nova-16.1.2/nova/virt/block_device.py --- nova-16.1.0/nova/virt/block_device.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/virt/block_device.py 2018-04-25 09:22:34.000000000 +0000 @@ -281,6 +281,10 @@ '%(mp)s : %(err)s', {'volume_id': volume_id, 'mp': mp, 'err': err}, instance=instance) + except exception.DeviceDetachFailed as err: + with excutils.save_and_reraise_exception(): + LOG.warning('Guest refused to detach volume %(vol)s', + {'vol': volume_id}, instance=instance) except Exception: with excutils.save_and_reraise_exception(): LOG.exception('Failed to detach volume ' @@ -683,3 +687,13 @@ return (bdm.source_type in ('image', 'volume', 'snapshot', 'blank') and bdm.destination_type == 'volume' and is_implemented(bdm)) + + +def get_volume_id(connection_info): + if connection_info: + # Check for volume_id in 'data' and if not there, fallback to + # the 'serial' that the DriverVolumeBlockDevice adds during attach. + volume_id = connection_info.get('data', {}).get('volume_id') + if not volume_id: + volume_id = connection_info.get('serial') + return volume_id diff -Nru nova-16.1.0/nova/virt/disk/api.py nova-16.1.2/nova/virt/disk/api.py --- nova-16.1.0/nova/virt/disk/api.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/virt/disk/api.py 2018-04-25 09:22:34.000000000 +0000 @@ -146,6 +146,16 @@ return images.qemu_img_info(path).virtual_size +def get_allocated_disk_size(path): + """Get the allocated size of a disk image + + :param path: Path to the disk image + :returns: Size (in bytes) of the given disk image as allocated on the + filesystem + """ + return images.qemu_img_info(path).disk_size + + def extend(image, size): """Increase image to size. 
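The new get_allocated_disk_size() helper above complements the existing virtual-size lookup: a sparse, file-based qcow2 disk can report a large virtual size while occupying far less space on the host filesystem, whereas a preallocated file reports roughly the same value for both. A minimal sketch of the difference, assuming nova is importable, qemu-img is installed, and a disk image exists at the illustrative path below:

    from nova.virt.disk import api as disk_api

    # Illustrative path only; any raw/qcow2 image readable by qemu-img works.
    path = '/var/lib/nova/instances/example-instance/disk'

    # Logical size presented to the guest (qemu-img "virtual size").
    virtual_size = disk_api.get_disk_size(path)

    # Bytes actually allocated on the backing filesystem (qemu-img "disk size"),
    # which is what the libvirt driver now reports for file-based disks.
    allocated_size = disk_api.get_allocated_disk_size(path)

    print('virtual=%d bytes, allocated=%d bytes' % (virtual_size, allocated_size))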
diff -Nru nova-16.1.0/nova/virt/disk/vfs/guestfs.py nova-16.1.2/nova/virt/disk/vfs/guestfs.py --- nova-16.1.0/nova/virt/disk/vfs/guestfs.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/virt/disk/vfs/guestfs.py 2018-04-25 09:22:34.000000000 +0000 @@ -75,7 +75,14 @@ def inspect_capabilities(self): """Determines whether guestfs is well configured.""" try: - g = tpool.Proxy(guestfs.GuestFS()) + # If guestfs debug is enabled, we can't launch in a thread because + # the debug logging callback can make eventlet try to switch + # threads and then the launch hangs, causing eternal sadness. + if CONF.guestfs.debug: + LOG.debug('Inspecting guestfs capabilities non-threaded.') + g = guestfs.GuestFS() + else: + g = tpool.Proxy(guestfs.GuestFS()) g.add_drive("/dev/null") # sic g.launch() except Exception as e: diff -Nru nova-16.1.0/nova/virt/driver.py nova-16.1.2/nova/virt/driver.py --- nova-16.1.0/nova/virt/driver.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/virt/driver.py 2018-04-25 09:22:34.000000000 +0000 @@ -134,6 +134,11 @@ requires_allocation_refresh = False + # Indicates if this driver will rebalance nodes among compute service + # hosts. This is really here for ironic and should not be used by any + # other driver. + rebalances_nodes = False + def __init__(self, virtapi): self.virtapi = virtapi self._compute_event_callback = None @@ -458,10 +463,11 @@ """Detach the disk attached to the instance.""" raise NotImplementedError() - def swap_volume(self, old_connection_info, new_connection_info, + def swap_volume(self, context, old_connection_info, new_connection_info, instance, mountpoint, resize_to): """Replace the volume attached to the given `instance`. + :param context: The request context. :param dict old_connection_info: The volume for this connection gets detached from the given `instance`. diff -Nru nova-16.1.0/nova/virt/fake.py nova-16.1.2/nova/virt/fake.py --- nova-16.1.0/nova/virt/fake.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/virt/fake.py 2018-04-25 09:22:34.000000000 +0000 @@ -160,9 +160,12 @@ self._mounts = {} self._interfaces = {} self.active_migrations = {} + self._nodes = self._init_nodes() + + def _init_nodes(self): if not _FAKE_NODES: set_nodes([CONF.host]) - self._nodes = copy.copy(_FAKE_NODES) + return copy.copy(_FAKE_NODES) def init_host(self, host): return @@ -305,7 +308,7 @@ except KeyError: pass - def swap_volume(self, old_connection_info, new_connection_info, + def swap_volume(self, context, old_connection_info, new_connection_info, instance, mountpoint, resize_to): """Replace the disk attached to the instance.""" instance_name = instance.name diff -Nru nova-16.1.0/nova/virt/ironic/driver.py nova-16.1.2/nova/virt/ironic/driver.py --- nova-16.1.0/nova/virt/ironic/driver.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/virt/ironic/driver.py 2018-04-25 09:22:34.000000000 +0000 @@ -141,6 +141,9 @@ # migration has been completed. requires_allocation_refresh = True + # This driver is capable of rebalancing nodes between computes. + rebalances_nodes = True + def __init__(self, virtapi, read_only=False): super(IronicDriver, self).__init__(virtapi) global ironic @@ -754,7 +757,8 @@ # and DISK_GB resource classes in early Queens when Ironic nodes will # *always* return the custom resource class that represents the # baremetal node class in an atomic, singular unit. 
- if self._node_resources_unavailable(node): + if (not self._node_resources_used(node) and + self._node_resources_unavailable(node)): # TODO(dtantsur): report resources as reserved instead of reporting # an empty inventory LOG.debug('Node %(node)s is not ready for a deployment, ' diff -Nru nova-16.1.0/nova/virt/libvirt/driver.py nova-16.1.2/nova/virt/libvirt/driver.py --- nova-16.1.0/nova/virt/libvirt/driver.py 2018-02-15 23:54:53.000000000 +0000 +++ nova-16.1.2/nova/virt/libvirt/driver.py 2018-04-25 09:22:34.000000000 +0000 @@ -49,6 +49,7 @@ from os_brick.initiator import connector from oslo_concurrency import processutils from oslo_log import log as logging +from oslo_serialization import base64 from oslo_serialization import jsonutils from oslo_service import loopingcall from oslo_utils import encodeutils @@ -62,6 +63,7 @@ from six.moves import range from nova.api.metadata import base as instance_metadata +from nova.api.metadata import password from nova import block_device from nova.compute import power_state from nova.compute import task_states @@ -70,6 +72,7 @@ from nova.console import serial as serial_console from nova.console import type as ctype from nova import context as nova_context +from nova import crypto from nova import exception from nova.i18n import _ from nova import image @@ -141,8 +144,14 @@ GuestNumaConfig = collections.namedtuple( 'GuestNumaConfig', ['cpuset', 'cputune', 'numaconfig', 'numatune']) -InjectionInfo = collections.namedtuple( - 'InjectionInfo', ['network_info', 'files', 'admin_pass']) + +class InjectionInfo(collections.namedtuple( + 'InjectionInfo', ['network_info', 'files', 'admin_pass'])): + __slots__ = () + + def __repr__(self): + return ('InjectionInfo(network_info=%r, files=%r, ' + 'admin_pass=)') % (self.network_info, self.files) libvirt_volume_drivers = [ 'iscsi=nova.virt.libvirt.volume.iscsi.LibvirtISCSIVolumeDriver', @@ -1209,6 +1218,16 @@ connection_info=connection_info, **encryption) + def _get_volume_encryption(self, context, connection_info): + """Get the encryption metadata dict if it is not provided + """ + encryption = {} + volume_id = driver_block_device.get_volume_id(connection_info) + if volume_id: + encryption = encryptors.get_encryption_metadata(context, + self._volume_api, volume_id, connection_info) + return encryption + def _check_discard_for_attach_volume(self, conf, instance): """Perform some checks for volumes configured for discard support. @@ -1344,9 +1363,19 @@ finally: self._host.write_instance_config(xml) - def swap_volume(self, old_connection_info, + def swap_volume(self, context, old_connection_info, new_connection_info, instance, mountpoint, resize_to): + # NOTE(lyarwood): Bug #1739593 uncovered a nasty data corruption + # issue that was fixed in Queens by Ica323b87fa85a454fca9d46ada3677f18. + # Given the size of the bugfix it was agreed not to backport the change + # to earlier stable branches and to instead block swap volume attempts. + if (self._get_volume_encryption(context, old_connection_info) or + self._get_volume_encryption(context, new_connection_info)): + raise NotImplementedError(_("Swap volume is not supported when " + "using encrypted volumes. 
For more details see " + "https://bugs.launchpad.net/nova/+bug/1739593.")) + guest = self._host.get_guest(instance) disk_dev = mountpoint.rpartition("/")[2] @@ -1413,11 +1442,6 @@ live=live) wait_for_detach() - if encryption: - encryptor = self._get_volume_encryptor(connection_info, - encryption) - encryptor.detach_volume(**encryption) - except exception.InstanceNotFound: # NOTE(zhaoqin): If the instance does not exist, _lookup_by_name() # will throw InstanceNotFound exception. Need to @@ -1425,7 +1449,11 @@ LOG.warning("During detach_volume, instance disappeared.", instance=instance) except exception.DeviceNotFound: - raise exception.DiskNotFound(location=disk_dev) + # We should still try to disconnect logical device from + # host, an error might have happened during a previous + # call. + LOG.info("Device %s not found in instance.", + disk_dev, instance=instance) except libvirt.libvirtError as ex: # NOTE(vish): This is called to cleanup volumes after live # migration, so we should still disconnect even if @@ -1438,6 +1466,26 @@ else: raise + try: + if encryption: + encryptor = self._get_volume_encryptor(connection_info, + encryption) + encryptor.detach_volume(**encryption) + except processutils.ProcessExecutionError as e: + # cryptsetup returns 4 when attempting to destroy a non-existent + # dm-crypt device. We assume here that the caller hasn't specified + # the wrong device, and that it doesn't exist because it has + # already been destroyed. + if e.exit_code == 4: + LOG.debug("Ignoring exit code 4, volume already destroyed") + else: + with excutils.save_and_reraise_exception(): + LOG.warning("Could not disconnect encrypted volume " + "%(volume)s. If dm-crypt device is still " + "active it will have to be destroyed manually " + "for cleanup to succeed.", + {'volume': disk_dev}) + self._disconnect_volume(connection_info, disk_dev, instance) def extend_volume(self, connection_info, instance): @@ -1825,6 +1873,17 @@ else: raise exception.SetAdminPasswdNotSupported() + # TODO(melwitt): Combine this with the similar xenapi code at some point. + def _save_instance_password_if_sshkey_present(self, instance, new_pass): + sshkey = instance.key_data if 'key_data' in instance else None + if sshkey and sshkey.startswith("ssh-rsa"): + enc = crypto.ssh_encrypt_text(sshkey, new_pass) + # NOTE(melwitt): The convert_password method doesn't actually do + # anything with the context argument, so we can pass None. + instance.system_metadata.update( + password.convert_password(None, base64.encode_as_text(enc))) + instance.save() + def set_admin_password(self, instance, new_pass): self._can_set_admin_password(instance.image_meta) @@ -1844,6 +1903,10 @@ '"%(user)s": [Error Code %(error_code)s] %(ex)s') % {'user': user, 'error_code': error_code, 'ex': err_msg}) raise exception.InternalError(msg) + else: + # Save the password in sysmeta so it may be retrieved from the + # metadata service. 
+ self._save_instance_password_if_sshkey_present(instance, new_pass) def _can_quiesce(self, instance, image_meta): if CONF.libvirt.virt_type not in ('kvm', 'qemu'): @@ -3619,6 +3682,7 @@ def _get_guest_cpu_model_config(self): mode = CONF.libvirt.cpu_mode model = CONF.libvirt.cpu_model + extra_flags = CONF.libvirt.cpu_model_extra_flags if (CONF.libvirt.virt_type == "kvm" or CONF.libvirt.virt_type == "qemu"): @@ -3661,14 +3725,49 @@ msg = _("A CPU model name should not be set when a " "host CPU model is requested") raise exception.Invalid(msg) - - LOG.debug("CPU mode '%(mode)s' model '%(model)s' was chosen", - {'mode': mode, 'model': (model or "")}) + # FIXME (kchamart): We're intentionally restricting the choices + # (in the conf/libvirt.py) for 'extra_flags` to just 'PCID', to + # address the immediate guest performance degradation caused by + # "Meltdown" CVE fixes on certain Intel CPU models. In a future + # patch, we will: + # (a) Remove the restriction of choices for 'extra_flags', + # allowing to add / remove additional CPU flags, as it will + # make way for other useful features. + # (b) Remove the below check for "host-model", as it is a + # valid configuration to supply additional CPU flags to it. + # (c) Revisit and fix the warnings / exception handling for + # different combinations of CPU modes and 'extra_flags'. + elif ((mode == "host-model" or mode == "host-passthrough") and + extra_flags): + extra_flags = [] + LOG.warning("Setting extra CPU flags is only valid in " + "combination with a custom CPU model. Refer " + "to the 'nova.conf' documentation for " + "'[libvirt]/cpu_model_extra_flags'") + + LOG.debug("CPU mode '%(mode)s' model '%(model)s' was chosen, " + "with extra flags: '%(extra_flags)s'", + {'mode': mode, + 'model': (model or ""), + 'extra_flags': (extra_flags or "")}) cpu = vconfig.LibvirtConfigGuestCPU() cpu.mode = mode cpu.model = model + # NOTE (kchamart): Currently there's no existing way to ask if a + # given CPU model + CPU flags combination is supported by KVM & + # a specific QEMU binary. However, libvirt runs the 'CPUID' + # command upfront -- before even a Nova instance (a QEMU + # process) is launched -- to construct CPU models and check + # their validity; so we are good there. In the long-term, + # upstream libvirt intends to add an additional new API that can + # do fine-grained validation of a certain CPU model + CPU flags + # against a specific QEMU binary (the libvirt RFE bug for that: + # https://bugzilla.redhat.com/show_bug.cgi?id=1559832). + for flag in extra_flags: + cpu.add_feature(vconfig.LibvirtConfigGuestCPUFeature(flag)) + return cpu def _get_guest_cpu_config(self, flavor, image_meta, @@ -3706,7 +3805,7 @@ LOG.debug('Config drive not found in RBD, falling back to the ' 'instance directory', instance=instance) disk_info = disk_mapping[name] - if 'unit' in disk_mapping: + if 'unit' in disk_mapping and disk_info['bus'] == 'scsi': disk_unit = disk_mapping['unit'] disk_mapping['unit'] += 1 # Increments for the next disk added conf = disk.libvirt_info(disk_info['bus'], @@ -3745,7 +3844,13 @@ # use disk_mapping as container to keep reference of the # unit added and be able to increment it for each disk # added. + # + # NOTE(jaypipes,melwitt): If this is a boot-from-volume instance, + # we need to start the disk mapping unit at 1 since we set the + # bootable volume's unit to 0 for the bootable volume. 
disk_mapping['unit'] = 0 + if self._is_booted_from_volume(block_device_info): + disk_mapping['unit'] = 1 def _get_ephemeral_devices(): eph_devices = [] @@ -3835,8 +3940,14 @@ info = disk_mapping[vol_dev] self._connect_volume(connection_info, info, instance) if scsi_controller and scsi_controller.model == 'virtio-scsi': - info['unit'] = disk_mapping['unit'] - disk_mapping['unit'] += 1 + # Check if this is the bootable volume when in a + # boot-from-volume instance, and if so, ensure the unit + # attribute is 0. + if vol.get('boot_index') == 0: + info['unit'] = 0 + else: + info['unit'] = disk_mapping['unit'] + disk_mapping['unit'] += 1 cfg = self._get_volume_config(connection_info, info) devices.append(cfg) vol['connection_info'] = connection_info @@ -4222,8 +4333,9 @@ else: pin_cpuset.cpuset = host_cell.cpuset if emulator_threads_isolated: - emupcpus.extend( - object_numa_cell.cpuset_reserved) + if object_numa_cell.cpuset_reserved: + emupcpus.extend( + object_numa_cell.cpuset_reserved) elif not wants_realtime or cpu not in vcpus_rt: # - If realtime IS NOT enabled, the # emulator threads are allowed to float @@ -7273,7 +7385,7 @@ fp = os.path.join(dirpath, f) dk_size += os.path.getsize(fp) else: - dk_size = int(os.path.getsize(path)) + dk_size = disk_api.get_allocated_disk_size(path) elif disk_type == 'block' and block_device_info: dk_size = lvm.get_volume_size(path) else: diff -Nru nova-16.1.0/nova.egg-info/pbr.json nova-16.1.2/nova.egg-info/pbr.json --- nova-16.1.0/nova.egg-info/pbr.json 2018-02-15 23:58:24.000000000 +0000 +++ nova-16.1.2/nova.egg-info/pbr.json 2018-04-25 09:25:38.000000000 +0000 @@ -1 +1 @@ -{"git_version": "806eda3", "is_release": true} \ No newline at end of file +{"git_version": "0959362", "is_release": true} \ No newline at end of file diff -Nru nova-16.1.0/nova.egg-info/PKG-INFO nova-16.1.2/nova.egg-info/PKG-INFO --- nova-16.1.0/nova.egg-info/PKG-INFO 2018-02-15 23:58:24.000000000 +0000 +++ nova-16.1.2/nova.egg-info/PKG-INFO 2018-04-25 09:25:38.000000000 +0000 @@ -1,12 +1,11 @@ -Metadata-Version: 1.1 +Metadata-Version: 2.1 Name: nova -Version: 16.1.0 +Version: 16.1.2 Summary: Cloud computing fabric controller Home-page: https://docs.openstack.org/nova/latest/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN -Description-Content-Type: UNKNOWN Description: ======================== Team and repository tags ======================== @@ -94,3 +93,4 @@ Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 +Provides-Extra: osprofiler diff -Nru nova-16.1.0/nova.egg-info/SOURCES.txt nova-16.1.2/nova.egg-info/SOURCES.txt --- nova-16.1.0/nova.egg-info/SOURCES.txt 2018-02-15 23:58:26.000000000 +0000 +++ nova-16.1.2/nova.egg-info/SOURCES.txt 2018-04-25 09:25:41.000000000 +0000 @@ -2287,6 +2287,7 @@ nova/tests/functional/notification_sample_tests/test_service_update.py nova/tests/functional/regressions/README.rst nova/tests/functional/regressions/__init__.py +nova/tests/functional/regressions/test_bug_1404867.py nova/tests/functional/regressions/test_bug_1522536.py nova/tests/functional/regressions/test_bug_1541691.py nova/tests/functional/regressions/test_bug_1548980.py @@ -2307,8 +2308,11 @@ nova/tests/functional/regressions/test_bug_1702454.py nova/tests/functional/regressions/test_bug_1713783.py nova/tests/functional/regressions/test_bug_1718455.py +nova/tests/functional/regressions/test_bug_1718512.py 
nova/tests/functional/regressions/test_bug_1719730.py nova/tests/functional/regressions/test_bug_1732947.py +nova/tests/functional/regressions/test_bug_1746483.py +nova/tests/functional/regressions/test_bug_1746509.py nova/tests/functional/wsgi/__init__.py nova/tests/functional/wsgi/test_flavor_manage.py nova/tests/functional/wsgi/test_interfaces.py @@ -2504,6 +2508,7 @@ nova/tests/unit/api/openstack/compute/legacy_v2/extensions/__init__.py nova/tests/unit/api/openstack/compute/legacy_v2/extensions/foxinsocks.py nova/tests/unit/api/openstack/placement/__init__.py +nova/tests/unit/api/openstack/placement/test_deploy.py nova/tests/unit/api/openstack/placement/test_handler.py nova/tests/unit/api/openstack/placement/test_microversion.py nova/tests/unit/api/openstack/placement/test_requestlog.py @@ -3162,6 +3167,7 @@ releasenotes/notes/bug-1732976-doubled-allocations-rebuild-23e4d3b06eb4f43f.yaml releasenotes/notes/bug-1733886-os-quota-sets-force-2.36-5866924621ecc857.yaml releasenotes/notes/bug-1738094-request_specs.spec-migration-22d3421ea1536a37.yaml +releasenotes/notes/bug-1739593-cve-2017-18191-25fe48d336d8cf13.yaml releasenotes/notes/bug-1744325-rebuild-error-status-9e2da03f3f81fd6e.yaml releasenotes/notes/bug-hyperv-1629040-e1eb35a7b31d9af8.yaml releasenotes/notes/bug-volume-attach-policy-1635358-671ce4d4ee8c211b.yaml @@ -3231,6 +3237,7 @@ releasenotes/notes/deprecates-proxy-apis-5e11d7c4ae5227d2.yaml releasenotes/notes/disable_ec2_api_by_default-0ec0946433fc7119.yaml releasenotes/notes/disco_volume_libvirt_driver-916428b8bd852732.yaml +releasenotes/notes/discover-hosts-by-service-06ee20365b895127.yaml releasenotes/notes/discover-hosts-periodic-is-more-efficient-6c55b606a7831750.yaml releasenotes/notes/disk-weight-scheduler-98647f9c6317d21d.yaml releasenotes/notes/disk_ratio_to_rt-b6224ab8c0272d86.yaml @@ -3275,6 +3282,7 @@ releasenotes/notes/keypairs-moved-to-api-9cde30acac6f76b6.yaml releasenotes/notes/known-issue-on-api-1efca45440136f3e.yaml releasenotes/notes/libvirt-change-default-value-of-live-migration-tunnelled-4248cf76df605fdf.yaml +releasenotes/notes/libvirt-cpu-model-extra-flags-a23085f58bd22d27.yaml releasenotes/notes/libvirt-deprecate-migration-flags-config-4ba1e2d6c9ef09ff.yaml releasenotes/notes/libvirt-firewall-ignore-use_ipv6-c555f95799f991fd.yaml releasenotes/notes/libvirt-ignore-allow_same_net_traffic-fd88bb2801b81561.yaml @@ -3456,6 +3464,7 @@ releasenotes/notes/sync_power_state_pool_size-81d2d142bffa055b.yaml releasenotes/notes/trim-default-sched-filters-e70de3bb4c7b1a1b.yaml releasenotes/notes/unsettable-keymap-settings-fa831c02e4158507.yaml +releasenotes/notes/update-swap-decorator-7622a265df55feaa.yaml releasenotes/notes/upgrade_rootwrap_compute_filters-428ca239f2e4e63d.yaml releasenotes/notes/user-settable-server-description-89dcfc75677e31bc.yaml releasenotes/notes/v21enable-8454d6eca3ec604f.yaml diff -Nru nova-16.1.0/PKG-INFO nova-16.1.2/PKG-INFO --- nova-16.1.0/PKG-INFO 2018-02-15 23:58:26.000000000 +0000 +++ nova-16.1.2/PKG-INFO 2018-04-25 09:25:41.000000000 +0000 @@ -1,12 +1,11 @@ -Metadata-Version: 1.1 +Metadata-Version: 2.1 Name: nova -Version: 16.1.0 +Version: 16.1.2 Summary: Cloud computing fabric controller Home-page: https://docs.openstack.org/nova/latest/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN -Description-Content-Type: UNKNOWN Description: ======================== Team and repository tags ======================== @@ -94,3 +93,4 @@ Classifier: Programming Language :: Python :: 2.7 Classifier: Programming 
Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
+Provides-Extra: osprofiler
diff -Nru nova-16.1.0/releasenotes/notes/bug-1739593-cve-2017-18191-25fe48d336d8cf13.yaml nova-16.1.2/releasenotes/notes/bug-1739593-cve-2017-18191-25fe48d336d8cf13.yaml
--- nova-16.1.0/releasenotes/notes/bug-1739593-cve-2017-18191-25fe48d336d8cf13.yaml 1970-01-01 00:00:00.000000000 +0000
+++ nova-16.1.2/releasenotes/notes/bug-1739593-cve-2017-18191-25fe48d336d8cf13.yaml 2018-04-25 09:22:34.000000000 +0000
@@ -0,0 +1,9 @@
+---
+prelude: >
+    This release includes fixes for security vulnerabilities.
+security:
+  - |
+    [CVE-2017-18191] Swapping encrypted volumes can lead to data loss and a
+    possible compute host DOS attack.
+
+    * `Bug 1739593 <https://bugs.launchpad.net/nova/+bug/1739593>`_
diff -Nru nova-16.1.0/releasenotes/notes/discover-hosts-by-service-06ee20365b895127.yaml nova-16.1.2/releasenotes/notes/discover-hosts-by-service-06ee20365b895127.yaml
--- nova-16.1.0/releasenotes/notes/discover-hosts-by-service-06ee20365b895127.yaml 1970-01-01 00:00:00.000000000 +0000
+++ nova-16.1.2/releasenotes/notes/discover-hosts-by-service-06ee20365b895127.yaml 2018-04-25 09:22:34.000000000 +0000
@@ -0,0 +1,8 @@
+---
+fixes:
+  - |
+    The nova-manage discover_hosts command now has a ``--by-service`` option which
+    allows discovering hosts in a cell purely by the presence of a nova-compute
+    binary. At this point, there is no need to use this unless you're using ironic,
+    as it is less efficient. However, if you are using ironic, this allows discovery
+    and mapping of hosts even when no ironic nodes are present.
\ No newline at end of file
diff -Nru nova-16.1.0/releasenotes/notes/libvirt-cpu-model-extra-flags-a23085f58bd22d27.yaml nova-16.1.2/releasenotes/notes/libvirt-cpu-model-extra-flags-a23085f58bd22d27.yaml
--- nova-16.1.0/releasenotes/notes/libvirt-cpu-model-extra-flags-a23085f58bd22d27.yaml 1970-01-01 00:00:00.000000000 +0000
+++ nova-16.1.2/releasenotes/notes/libvirt-cpu-model-extra-flags-a23085f58bd22d27.yaml 2018-04-25 09:22:34.000000000 +0000
@@ -0,0 +1,21 @@
+---
+fixes:
+  - |
+    The libvirt driver now allows specifying individual CPU feature
+    flags for guests, via a new configuration attribute
+    ``[libvirt]/cpu_model_extra_flags`` -- only with ``custom`` as the
+    ``[libvirt]/cpu_model``. Refer to its documentation in
+    ``nova.conf`` for usage details.
+
+    One of the motivations for this is to alleviate the performance
+    degradation (caused by applying the "Meltdown" CVE fixes) for
+    guests running with certain Intel-based virtual CPU models. This
+    guest performance impact is reduced by exposing the CPU feature
+    flag 'PCID' ("Process-Context ID") to the *guest* CPU, assuming
+    that it is available in the physical hardware itself.
+
+    Note that besides ``custom``, Nova's libvirt driver has two other
+    CPU modes: ``host-model`` (which is the default), and
+    ``host-passthrough``. Refer to the
+    ``[libvirt]/cpu_model_extra_flags`` documentation for what to do
+    when you are using either of those CPU modes in the context of 'PCID'.
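As a concrete sketch of the release note above, the option is used together with a custom CPU model in nova.conf. The CPU model name below is only an example and must be one the host actually supports; in this release the extra flags are restricted to PCID:

    [libvirt]
    cpu_mode = custom
    cpu_model = IvyBridge
    cpu_model_extra_flags = pcid

As the driver change earlier in this patch shows, combining extra flags with host-model or host-passthrough is not valid: the flags are dropped and a warning is logged.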
diff -Nru nova-16.1.0/releasenotes/notes/update-swap-decorator-7622a265df55feaa.yaml nova-16.1.2/releasenotes/notes/update-swap-decorator-7622a265df55feaa.yaml
--- nova-16.1.0/releasenotes/notes/update-swap-decorator-7622a265df55feaa.yaml 1970-01-01 00:00:00.000000000 +0000
+++ nova-16.1.2/releasenotes/notes/update-swap-decorator-7622a265df55feaa.yaml 2018-04-25 09:22:25.000000000 +0000
@@ -0,0 +1,6 @@
+---
+fixes:
+  - The swap_volume action is now blocked when the instance is in the
+    SUSPENDED, STOPPED or SOFT_DELETED state. A conflict (409) error is
+    returned, where previously the action failed silently.
+
diff -Nru nova-16.1.0/tox.ini nova-16.1.2/tox.ini
--- nova-16.1.0/tox.ini 2018-02-15 23:54:53.000000000 +0000
+++ nova-16.1.2/tox.ini 2018-04-25 09:22:34.000000000 +0000
@@ -26,13 +26,11 @@
 [testenv:py27]
 commands =
   {[testenv]commands}
-  env TEST_OSPROFILER=1 ostestr --regex 'nova.tests.unit.test_profiler'
   ostestr '{posargs}'

 [testenv:py35]
 commands =
   {[testenv]commands}
-  env TEST_OSPROFILER=1 ostestr --regex 'nova.tests.unit.test_profiler'
   bash tools/pretty_tox3.sh '{posargs}'

 [testenv:pep8]