Version: 2012.1~e4~20120217.12709-0ubuntu1
I attached a volume to an instance via iSCSI, shut down the instance, and then attempted to detach the volume. The detach fails with the traceback below in nova-compute.log, and the volume remains "in-use". "euca-detach-volume --force" fails the same way.
VOLUME vol-00000009 10 nova in-use (pjdc_project, zucchini, i-00001a05[ankaa], /dev/vdc) 2012-03-13T20:49:28Z
2012-03-14 03:04:08,486 DEBUG nova.rpc.common [-] received {u'_context_roles': [u'cloudadmin', u'netadmin', u'projectmanager', u'admin'], u'_context_request_id': u'req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3', u'_context_read_deleted': u'no', u'args': {u'instance_uuid': u'f7620968-686d-4a3d-a1b3-2d0881e1656d', u'volume_id': 9}, u'_context_auth_token': None, u'_context_strategy': u'noauth', u'_context_is_admin': True, u'_context_project_id': u'pjdc_project', u'_context_timestamp': u'2012-03-14T03:03:59.303517', u'_context_user_id': u'pjdc', u'method': u'detach_volume', u'_context_remote_address': u'XXX.XXX.XXX.XXX'} from (pid=8590) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:144
2012-03-14 03:04:08,487 DEBUG nova.rpc.common [req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3 pjdc pjdc_project] unpacked context: {'request_id': u'req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3', 'user_id': u'pjdc', 'roles': [u'cloudadmin', u'netadmin', u'projectmanager', u'admin'], 'timestamp': '2012-03-14T03:03:59.303517', 'is_admin': True, 'auth_token': None, 'project_id': u'pjdc_project', 'remote_address': u'XXX.XXX.XXX.XXX', 'read_deleted': u'no', 'strategy': u'noauth'} from (pid=8590) unpack_context /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:186
2012-03-14 03:04:08,515 INFO nova.compute.manager [req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3 pjdc pjdc_project] check_instance_lock: decorating: |<function detach_volume at 0x1f591b8>|
2012-03-14 03:04:08,516 INFO nova.compute.manager [req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3 pjdc pjdc_project] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0x1c6b1d0>| |<nova.rpc.amqp.RpcContext object at 0x4987a50>| |f7620968-686d-4a3d-a1b3-2d0881e1656d|
2012-03-14 03:04:08,516 DEBUG nova.compute.manager [req-da8a3819-70b2-4d0b-a9fb-0dfefa85f9f3 pjdc pjdc_project] instance f7620968-686d-4a3d-a1b3-2d0881e1656d: getting locked state from (pid=8590) get_lock /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1508
2012-03-14 03:04:08,668 ERROR nova.rpc.common [-] Exception during message handling
(nova.rpc.common): TRACE: Traceback (most recent call last):
(nova.rpc.common): TRACE: File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 250, in _process_data
(nova.rpc.common): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.rpc.common): TRACE: File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 112, in wrapped
(nova.rpc.common): TRACE: return f(*args, **kw)
(nova.rpc.common): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 139, in decorated_function
(nova.rpc.common): TRACE: locked = self.get_lock(context, instance_uuid)
(nova.rpc.common): TRACE: File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 112, in wrapped
(nova.rpc.common): TRACE: return f(*args, **kw)
(nova.rpc.common): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 168, in decorated_function
(nova.rpc.common): TRACE: return function(self, context, instance_uuid, *args, **kwargs)
(nova.rpc.common): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1509, in get_lock
(nova.rpc.common): TRACE: instance_ref = self.db.instance_get_by_uuid(context, instance_uuid)
(nova.rpc.common): TRACE: File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 586, in instance_get_by_uuid
(nova.rpc.common): TRACE: return IMPL.instance_get_by_uuid(context, uuid)
(nova.rpc.common): TRACE: File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 119, in wrapper
(nova.rpc.common): TRACE: return f(*args, **kwargs)
(nova.rpc.common): TRACE: File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 1452, in instance_get_by_uuid
(nova.rpc.common): TRACE: raise exception.InstanceNotFound(instance_id=uuid)
(nova.rpc.common): TRACE: InstanceNotFound: Instance f7620968-686d-4a3d-a1b3-2d0881e1656d could not be found.
(nova.rpc.common): TRACE:
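The mechanism behind the traceback can be illustrated with a minimal, self-contained sketch (this is an illustration of the pattern, not Nova's actual code): the compute manager's lock-check decorator must load the instance record before the decorated method runs, and the default DB lookup filters out soft-deleted rows (the context carries read_deleted=u'no', as visible in the log above). Once the instance row is marked deleted, the lookup raises InstanceNotFound and detach_volume never executes, so the volume's status is never flipped back from "in-use":

```python
# Hypothetical sketch of the failure mode; names mirror Nova's
# (check_instance_lock, instance_get_by_uuid) but the bodies are
# simplified stand-ins, not the real implementation.

class InstanceNotFound(Exception):
    pass

# Simulated instances table: the instance row is soft-deleted after
# terminate, but the volume attachment still references it.
INSTANCES = {
    'f7620968-686d-4a3d-a1b3-2d0881e1656d': {'deleted': True, 'locked': False},
}

def instance_get_by_uuid(uuid, read_deleted='no'):
    """Mimics the default DB API behaviour: soft-deleted rows are
    invisible unless the context allows reading deleted records."""
    row = INSTANCES.get(uuid)
    if row is None or (row['deleted'] and read_deleted == 'no'):
        raise InstanceNotFound('Instance %s could not be found.' % uuid)
    return row

def check_instance_lock(fn):
    """Like the compute manager's lock check: it looks up the instance
    *before* calling the decorated method, so a deleted instance makes
    every decorated operation fail up front."""
    def wrapper(instance_uuid, *args, **kwargs):
        instance = instance_get_by_uuid(instance_uuid)  # raises here
        if instance['locked']:
            raise RuntimeError('instance %s is locked' % instance_uuid)
        return fn(instance_uuid, *args, **kwargs)
    return wrapper

@check_instance_lock
def detach_volume(instance_uuid, volume_id):
    # Never reached for a soft-deleted instance, so the volume status
    # is never updated and it stays "in-use".
    return 'detached volume %s' % volume_id

try:
    detach_volume('f7620968-686d-4a3d-a1b3-2d0881e1656d', 9)
except InstanceNotFound as e:
    print(e)  # the request dies in the decorator, as in the log above
```

This also explains why --force makes no difference: the decorator runs before any force logic in the method body is reached.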
Steps to reproduce:
1. create volume
2. boot an instance (tested with cirros)
3. attach volume
4. in instance: 'sudo poweroff'
5. after the kvm machine has stopped on the compute node, terminate the instance; this produces the following traceback in nova-compute.log:
(nova.rpc.amqp): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 719, in _delete_instance
(nova.rpc.amqp): TRACE: self._shutdown_instance(context, instance, 'Terminating')
(nova.rpc.amqp): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 681, in _shutdown_instance
(nova.rpc.amqp): TRACE: raise exception.Invalid(_msg % instance_uuid)
(nova.rpc.amqp): TRACE: Invalid: trying to destroy already destroyed instance: b929bf81-65ee-46e0-8c07-5aae49a0213c
6. the instance is destroyed, but the volume is still 'in-use'
7. detach the volume; this produces the traceback in the original bug report