nova-compute 23.2.2-0ubuntu1~cloud2 unable to detach volumes

Bug #2019460 reported by Zakhar Kirpichenko
Affects                   Status        Importance  Assigned to  Milestone
OpenStack Compute (nova)  Invalid       Undecided   Unassigned
Ubuntu Cloud Archive      Invalid       Undecided   Unassigned
  Victoria                Fix Released  Critical    Unassigned
  Wallaby                 Fix Released  Critical    Unassigned
nova (Ubuntu)             Invalid       Undecided   Unassigned
  Focal                   Fix Released  Critical    Unassigned

Bug Description

The following packages were updated on Wallaby compute nodes to fix https://security.openstack.org/ossa/OSSA-2023-003.html:

python3-nova:amd64 (3:23.2.2-0ubuntu1~cloud1, 3:23.2.2-0ubuntu1~cloud2),
python3-os-brick:amd64 (4.3.3-0ubuntu1~cloud0, 4.3.3-0ubuntu1~cloud1),
nova-compute-libvirt:amd64 (3:23.2.2-0ubuntu1~cloud1, 3:23.2.2-0ubuntu1~cloud2),
nova-common:amd64 (3:23.2.2-0ubuntu1~cloud1, 3:23.2.2-0ubuntu1~cloud2),
os-brick-common:amd64 (4.3.3-0ubuntu1~cloud0, 4.3.3-0ubuntu1~cloud1),
nova-compute-kvm:amd64 (3:23.2.2-0ubuntu1~cloud1, 3:23.2.2-0ubuntu1~cloud2),
nova-compute:amd64 (3:23.2.2-0ubuntu1~cloud1, 3:23.2.2-0ubuntu1~cloud2)

nova-compute is now unable to detach volumes from instances:

2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server [req-470d3e0e-e59c-40c5-9597-6649c08add16 046191f8ebfd4695b3387a5ead3a9a55 85945271df8b4a6f9d37c37e4e52958d - default default] Exception during message handling: TypeError: disconnect_volume() got an unexpected keyword argument 'force'
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 71, in wrapped
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server _emit_versioned_exception_notification(
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server self.force_reraise()
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server raise self.value
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 63, in wrapped
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/utils.py", line 1434, in decorated_function
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 211, in decorated_function
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server compute_utils.add_instance_fault_from_exc(context,
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server self.force_reraise()
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server raise self.value
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 200, in decorated_function
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 7195, in detach_volume
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server do_detach_volume(context, volume_id, instance, attachment_id)
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py", line 360, in inner
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server return f(*args, **kwargs)
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 7192, in do_detach_volume
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server self._detach_volume(context, bdm, instance,
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 7143, in _detach_volume
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server driver_bdm.detach(context, instance, self.volume_api, self.driver,
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/block_device.py", line 476, in detach
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server self._do_detach(context, instance, volume_api, virt_driver,
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/block_device.py", line 408, in _do_detach
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server self.driver_detach(context, instance, volume_api, virt_driver)
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/block_device.py", line 347, in driver_detach
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server volume_api.roll_detaching(context, volume_id)
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server self.force_reraise()
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server raise self.value
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/block_device.py", line 328, in driver_detach
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server virt_driver.detach_volume(context, connection_info, instance, mp,
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 2592, in detach_volume
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server self._disconnect_volume(context, connection_info, instance,
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1862, in _disconnect_volume
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server vol_driver.disconnect_volume(
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server TypeError: disconnect_volume() got an unexpected keyword argument 'force'
2023-05-13 05:53:00.128 3219193 ERROR oslo_messaging.rpc.server

Looks like the volume driver doesn't know about the "force" keyword argument that is being passed to it. This breaks basic volume detach functionality.
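The failure mode can be sketched in a few lines of plain Python. This is a simplified illustration, not the actual nova/os-brick code; the class and function names are hypothetical. An older driver whose `disconnect_volume()` lacks the `force` keyword raises `TypeError` the moment a newer caller passes it, and the backported fix is to add the keyword with a default:

```python
class OldVolumeDriver:
    """Pre-Xena-style driver: no 'force' keyword (simplified sketch)."""

    def disconnect_volume(self, connection_info, instance):
        return "disconnected"


class PatchedVolumeDriver:
    """Fixed driver: accepts 'force' with a default, so older callers
    that never pass it keep working unchanged."""

    def disconnect_volume(self, connection_info, instance, force=False):
        return "force-disconnected" if force else "disconnected"


def detach(driver):
    # Newer detach paths pass force=True (part of the CVE-2023-2088
    # hardening); an old driver signature rejects the call outright.
    return driver.disconnect_volume({}, "instance-1", force=True)


try:
    detach(OldVolumeDriver())
except TypeError as exc:
    print(exc)  # ... got an unexpected keyword argument 'force'

print(detach(PatchedVolumeDriver()))  # force-disconnected
```

Because the keyword has a default value, the patched signature is backward compatible: callers that predate the security fix and never pass `force` are unaffected.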


Revision history for this message
Corey Bryant (corey.bryant) wrote :

It looks like some volume drivers that existed prior to Xena are missing the force parameter. I'll get a fix going for this ASAP.

Changed in cloud-archive:
status: New → Invalid
Changed in nova (Ubuntu Focal):
status: New → Triaged
importance: Undecided → Critical
Changed in nova (Ubuntu):
status: New → Invalid
Changed in nova:
status: New → Invalid
Corey Bryant (corey.bryant) wrote : Please test proposed package

Hello Zakhar, or anyone else affected,

Accepted nova into victoria-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:victoria-proposed
  sudo apt-get update

Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-victoria-needed to verification-victoria-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-victoria-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-victoria-needed
Corey Bryant (corey.bryant) wrote :

Hello Zakhar, or anyone else affected,

Accepted nova into wallaby-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.

Please help us by testing this new package. To enable the -proposed repository:

  sudo add-apt-repository cloud-archive:wallaby-proposed
  sudo apt-get update

Your feedback will aid us in getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-wallaby-needed to verification-wallaby-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-wallaby-failed. In either case, details of your testing will help us make a better decision.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!

tags: added: verification-wallaby-needed
no longer affects: cloud-archive/ussuri
Corey Bryant (corey.bryant) wrote : Update Released

The verification of the Stable Release Update for nova has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Corey Bryant (corey.bryant) wrote :

This bug was fixed in the package nova - 3:23.2.2-0ubuntu1~cloud3
---------------

 nova (3:23.2.2-0ubuntu1~cloud3) focal-wallaby; urgency=medium
 .
   * SECURITY REGRESSION: Regression with volume drivers (LP: #2019460)
     - debian/patches/CVE-2023-2088.patch: Updated to add missing force
       parameter to various volume drivers.

Corey Bryant (corey.bryant) wrote :

The verification of the Stable Release Update for nova has completed successfully and the package has now been released to -updates. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Corey Bryant (corey.bryant) wrote :

This bug was fixed in the package nova - 2:22.4.0-0ubuntu1~cloud3
---------------

 nova (2:22.4.0-0ubuntu1~cloud3) focal-victoria; urgency=medium
 .
   * SECURITY REGRESSION: Regression with volume drivers (LP: #2019460)
     - debian/patches/CVE-2023-2088.patch: Updated to add missing force
       parameter to various volume drivers.

Launchpad Janitor (janitor) wrote :

This bug was fixed in the package nova - 2:21.2.4-0ubuntu2.4

---------------
nova (2:21.2.4-0ubuntu2.4) focal-security; urgency=medium

  * SECURITY REGRESSION: Regression with volume drivers (LP: #2019460)
    - debian/patches/CVE-2023-2088.patch: Updated to add missing force
      parameter to various volume drivers.

 -- Corey Bryant <email address hidden> Sat, 13 May 2023 09:56:20 -0400

Changed in nova (Ubuntu Focal):
status: Triaged → Fix Released
Zakhar Kirpichenko (kzakhar) wrote :

I apologize for the late response, Corey! Many thanks for your feedback and the fix. I'm going to test it in a staging environment; unfortunately, this will take some time due to other commitments.

I saw an update for the Nova drivers here: https://bugs.launchpad.net/nova/wallaby/+bug/2004555/comments/224 — is this fix similar and/or compatible?

Corey Bryant (corey.bryant) wrote :

Thanks for the pointer. I think we need to pick up these additional patches that weren't part of the embargo notice.

Zakhar Kirpichenko (kzakhar) wrote :

Thanks, I'll hold our testing for now then.

Zakhar Kirpichenko (kzakhar) wrote :

I understand that the changes leading to this bug have been rolled back: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/2020111

Chuan Li (lccn) wrote :

I also encountered an issue where a volume could not be detached. At first I thought I had reproduced this bug, but it turned out not to be the case.

The symptom: I launch a VM that boots from a Ceph volume, with the volume set to be deleted when the VM is deleted. When I delete the instance, the volume remains in the "in-use" state.

nova-compute.log

ERROR nova.volume.cinder Error: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 406) (Request-ID: req-759ad2ed-22f9-4286-81ba-2b543d089b41) Code: 406: cinderclient.exceptions.NotAcceptable: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 406) (Request-ID: req-759ad2ed-22f9-4286-81ba-2b543d089b41)

WARNING nova.compute.manager [instance: 04544489-dfd2-4c0c-b8c8-a07acbee2b58] Ignoring unknown cinder exception for volume d0a58ecc-e63a-49e4-8785-fb34d113e0f2: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 406) (Request-ID: req-759ad2ed-22f9-4286-81ba-2b543d089b41): cinderclient.exceptions.NotAcceptable: The server could not comply with the request since it is either malformed or otherwise incorrect. (HTTP 406) (Request-ID: req-759ad2ed-22f9-4286-81ba-2b543d089b41)

apache log

"DELETE /v3/c1524133a19945fc9f59708819277bc9/attachments/aa8bc30a-b22b-409c-8472-351b73fdd1ea HTTP/1.1" 406 5462 "-" "python-cinderclient"

Regarding this issue, I conducted more tests.

Test 1:

Deployed an environment with Focal + Victoria.
cinder package version on the cinder-ceph units: 2:17.4.0-0ubuntu1~cloud4
nova package version on the nova-compute units: 2:22.4.0-0ubuntu1~cloud4

Result: I can NOT reproduce the issue.

Test 2:

Deployed an environment with Focal + Victoria.
Two nova-compute units with different nova package versions: one with 2:22.4.0-0ubuntu1~cloud4, the other with 2:22.4.0-0ubuntu1~cloud3.
cinder package version on the cinder-ceph units: 2:17.4.0-0ubuntu1~cloud3
Launched a VM on each nova-compute node.

Result: I can reproduce the issue for both VMs.

Test 3:

Deployed an environment with Focal + Victoria.
Two nova-compute units with different nova package versions: one with 2:22.4.0-0ubuntu1~cloud4, the other with 2:22.4.0-0ubuntu1~cloud3.
cinder package version on the cinder-ceph units: 2:17.4.0-0ubuntu1~cloud4
Launched a VM on each nova-compute node.

Result: I can NOT reproduce the issue for either VM.

Thus, my conclusion is that the issue is caused by cinder version 2:17.4.0-0ubuntu1~cloud3 and has nothing to do with the nova version.
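When chasing a mixed-version problem like the one above, it helps to record exactly which builds each unit carries before and after testing. A minimal diagnostic sketch (the package names are taken from this report; `dpkg-query` is the standard Debian/Ubuntu tool for this):

```shell
# Print the installed version of each package involved in this bug,
# or note that the package is absent on this unit.
for pkg in nova-compute python3-nova cinder-volume python3-os-brick; do
  dpkg-query -W -f='${Package} ${Version}\n' "$pkg" 2>/dev/null \
    || echo "$pkg not installed"
done
```

Run on every nova-compute and cinder unit, this produces one line per package, which makes it easy to spot a unit still carrying the `~cloud3` builds.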
