Below is the list of tempest tests failing on the full-tempest-api CS9 job [1]:
```
{3} tempest.api.compute.volumes.test_attach_volume.AttachVolumeShelveTestJSON.test_attach_volume_shelved_or_offload_server [706.429983s] ... FAILED
Captured traceback:
~~~~~~~~~~~~~~~~~~~
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/tempest/common/waiters.py", line 380, in wait_for_volume_attachment_remove_from_server
    raise lib_exc.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: Volume c3a22d9b-f4ca-476a-a0d4-7c0c771c1f29 failed to detach from server e4fb37d5-9395-40da-b576-3dd132244fea within the required time (300 s) from the compute API perspective

Captured traceback-1:
~~~~~~~~~~~~~~~~~~~~~
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/tempest/common/waiters.py", line 312, in wait_for_volume_resource_status
    raise lib_exc.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: volume c3a22d9b-f4ca-476a-a0d4-7c0c771c1f29 failed to reach available status (current in-use) within the required time (300 s).
```
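For context, tempest's waiters simply poll the API until the resource reaches the target status and raise a `TimeoutException` once the deadline passes. The pattern is roughly the following (an illustrative sketch only, not tempest's actual code; `check_status` and the 300 s default are placeholders mirroring the failure above):

```python
import time


def wait_for_status(check_status, target, timeout=300, interval=1.0):
    """Poll check_status() until it returns `target`.

    Illustrative sketch of the polling pattern used by tempest's
    waiters (e.g. wait_for_volume_resource_status); not the real
    tempest implementation.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = check_status()
        if status == target:
            return status
        if time.monotonic() >= deadline:
            # This is the point where tempest raises
            # lib_exc.TimeoutException in the tracebacks above.
            raise TimeoutError(
                f"resource failed to reach {target} status "
                f"(current {status}) within the required time "
                f"({timeout} s)")
        time.sleep(interval)
```

In the failures above, the volume stays `in-use` because nova never completes the libvirt detach, so the waiter polls for the full 300 s and then raises.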
Below is the list of other tests failing with the same error:
* tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume
* tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server_with_volume_attached
* tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume
Looking at the nova-compute logs [2] for instance e4fb37d5-9395-40da-b576-3dd132244fea:
```
2022-01-10 14:34:44.109 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR nova.virt.libvirt.driver [req-e9c9d538-c1d3-435c-8cde-ef2c7181d542 87afcc7329474565ba1a94e10afb94f9 41e8c8794a2d4deaac0b3c374fe282e8 - default default] Waiting for libvirt event about the detach of device vdc with device alias virtio-disk2 from instance e4fb37d5-9395-40da-b576-3dd132244fea is timed out.
2022-01-10 14:34:44.113 ERROR /var/log/containers/nova/nova-compute.log.1: 2 ERROR nova.virt.libvirt.driver [req-e9c9d538-c1d3-435c-8cde-ef2c7181d542 87afcc7329474565ba1a94e10afb94f9 41e8c8794a2d4deaac0b3c374fe282e8 - default default] Run out of retry while detaching device vdc with device alias virtio-disk2 from instance e4fb37d5-9395-40da-b576-3dd132244fea from the live domain config. Device is still attached to the guest.
```
We are currently moving these tests to the skiplist until we investigate.
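For reference, a tempest exclude list is typically a plain-text file of test-name regexes (passed to `tempest run` via its exclude-list option; the exact mechanism used by this CI job may differ). The entries for the four tests above would look like:

```
tempest.api.compute.volumes.test_attach_volume.AttachVolumeShelveTestJSON.test_attach_volume_shelved_or_offload_server
tempest.api.compute.servers.test_server_rescue_negative.ServerRescueNegativeTestJSON.test_rescued_vm_detach_volume
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_rebuild_server_with_volume_attached
tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume
```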
Note that these tests are passing on CS8: https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-8-standalone-full-tempest-api-master
Logs:
[1]. https://logserver.rdoproject.org/openstack-periodic-integration-main-cs9/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-master/1dc7580/logs/undercloud/var/log/tempest/tempest_run.log.txt.gz
[2]. https://logserver.rdoproject.org/openstack-periodic-integration-main-cs9/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-9-standalone-full-tempest-api-master/1dc7580/logs/undercloud/var/log/extra/errors.txt.gz
Not sure if this info will help, but the qemu package versions differ between the passing and failing jobs:
* passing job: qemu-guest-agent-6.1.0-8.el9.x86_64, qemu-img-6.1.0-8.el9.x86_64
* failed job: qemu-guest-agent-6.2.0-1.el9.x86_64, qemu-img-6.2.0-1.el9.x86_64
* CentOS-8 passing job: qemu-img-6.0.0-33.el8s.x86_64