Nova backtrace in log when attaching a volume at an already existing drive

Bug #914974 reported by David Kranz
This bug affects 1 person
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: High
Assigned to: Brian Waldon
Milestone: 2012.1

Bug Description

I was using devstack on trunk. I attached a volume at /dev/sdb to a VM using the nova client, but that device already existed. The nova command returned without error but obviously did not do anything, and the following showed up in the nova logs:

Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,164 ERROR nova.compute.manager [aa780ef5-4591-4d5f-b113-9ae63568647b demo 2] instance 3d37d21e-8fba-46a4-a0e1-d137f1d17597: attach failed /dev/vdb, removing
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 1529, in attach_volume
(nova.compute.manager): TRACE:     mountpoint)
(nova.compute.manager): TRACE:   File "/opt/stack/nova/nova/exception.py", line 130, in wrapped
(nova.compute.manager): TRACE:     return f(*args, **kw)
(nova.compute.manager): TRACE:   File "/opt/stack/nova/nova/virt/libvirt/connection.py", line 428, in attach_volume
(nova.compute.manager): TRACE:     virt_dom.attachDevice(xml)
(nova.compute.manager): TRACE:   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 298, in attachDevice
(nova.compute.manager): TRACE:     if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
(nova.compute.manager): TRACE: libvirtError: operation failed: target vdb already exists
(nova.compute.manager): TRACE:
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,194 DEBUG nova.rpc [-] Making asynchronous call on volume.xg06 ... from (pid=8603) multicall /opt/stack/nova/nova/rpc/impl_kombu.py:759
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,195 DEBUG nova.rpc [-] MSG_ID is 6735676270c241c4859384769641ca56 from (pid=8603) multicall /opt/stack/nova/nova/rpc/impl_kombu.py:762
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,202 DEBUG nova.rpc [-] received {u'_context_roles': [u'Member', u'sysadmin', u'netadmin'], u'_msg_id': u'6735676270c241c4859384769641ca56', u'_context_read_deleted': u'no', u'_context_request_id': u'aa780ef5-4591-4d5f-b113-9ae63568647b', u'args': {u'volume_id': 1, u'address': u'172.18.0.146'}, u'_context_auth_token': u'07302338-e85e-49cf-a50c-c230a30ad637', u'_context_strategy': u'keystone', u'_context_is_admin': True, u'_context_project_id': u'2', u'_context_timestamp': u'2012-01-11T19:14:42.636546', u'_context_user_id': u'demo', u'method': u'terminate_connection', u'_context_remote_address': u'172.18.0.146'} from (pid=8709) __call__ /opt/stack/nova/nova/rpc/impl_kombu.py:629
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,202 DEBUG nova.rpc [-] unpacked context: {'user_id': u'demo', 'roles': [u'Member', u'sysadmin', u'netadmin'], 'timestamp': u'2012-01-11T19:14:42.636546', 'auth_token': u'07302338-e85e-49cf-a50c-c230a30ad637', 'msg_id': u'6735676270c241c4859384769641ca56', 'remote_address': u'172.18.0.146', 'strategy': u'keystone', 'is_admin': True, 'request_id': u'aa780ef5-4591-4d5f-b113-9ae63568647b', 'project_id': u'2', 'read_deleted': u'no'} from (pid=8709) _unpack_context /opt/stack/nova/nova/rpc/impl_kombu.py:675
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,245 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/rpc/impl_kombu.py", line 649, in _process_data
(nova.rpc): TRACE:     rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/exception.py", line 130, in wrapped
(nova.rpc): TRACE:     return f(*args, **kw)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 126, in decorated_function
(nova.rpc): TRACE:     function(self, context, instance_uuid, *args, **kwargs)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 149, in decorated_function
(nova.rpc): TRACE:     self.add_instance_fault_from_exc(context, instance_uuid, e)
(nova.rpc): TRACE:   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
(nova.rpc): TRACE:     self.gen.next()
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 144, in decorated_function
(nova.rpc): TRACE:     return function(self, context, instance_uuid, *args, **kwargs)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 1536, in attach_volume
(nova.rpc): TRACE:     address)
(nova.rpc): TRACE:   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
(nova.rpc): TRACE:     self.gen.next()
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 1529, in attach_volume
(nova.rpc): TRACE:     mountpoint)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/exception.py", line 130, in wrapped
(nova.rpc): TRACE:     return f(*args, **kw)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/virt/libvirt/connection.py", line 428, in attach_volume
(nova.rpc): TRACE:     virt_dom.attachDevice(xml)
(nova.rpc): TRACE:   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 298, in attachDevice
(nova.rpc): TRACE:     if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
(n ...
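
For context on the traceback: nova's libvirt driver builds a <disk> XML snippet for the requested guest device and hands it to libvirt's virDomainAttachDevice, which refuses it because the guest already has a disk with that target name. A minimal standalone sketch of that failing call using the libvirt Python bindings; the connection URI, domain name, and source device below are illustrative assumptions, not values taken from this report:

    # Sketch of the libvirt call that fails above. URI, domain name,
    # and source device are hypothetical.
    import libvirt

    DISK_XML = """
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sdc'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    try:
        # libvirt raises "operation failed: target vdb already exists"
        # when the guest already has a disk with <target dev='vdb'/>.
        dom.attachDevice(DISK_XML)
    except libvirt.libvirtError as e:
        print('attach failed: %s' % e)

Note that any target dev already present in the domain definition triggers the same failure, regardless of which host device backs it.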

Revision history for this message
David Kranz (david-kranz) wrote :

I was also not able to attach at /dev/vdc, even though that device did not already exist. There were no errors while devstack was setting up the nova volume service.

Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,190 AUDIT nova.api.openstack.v2.contrib.volumes [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] Attach volume 1 to instance 3d37d21e-8fba-46a4-a0e1-d137f1d17597 at /dev/vdc
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,265 DEBUG nova.rpc [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] Making asynchronous cast on compute.xg06... from (pid=8466) cast /opt/stack/nova/nova/rpc/impl_kombu.py:784
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,268 DEBUG nova.rpc [-] received {u'_context_roles': [u'Member', u'sysadmin', u'netadmin'], u'_context_request_id': u'229ab0dc-3085-4f4c-bb80-d8104c96bef8', u'_context_read_deleted': u'no', u'args': {u'instance_uuid': u'3d37d21e-8fba-46a4-a0e1-d137f1d17597', u'mountpoint': u'/dev/vdc', u'volume_id': 1}, u'_context_auth_token': u'07302338-e85e-49cf-a50c-c230a30ad637', u'_context_strategy': u'keystone', u'_context_is_admin': False, u'_context_project_id': u'2', u'_context_timestamp': u'2012-01-11T19:41:58.186031', u'_context_user_id': u'demo', u'method': u'attach_volume', u'_context_remote_address': u'172.18.0.146'} from (pid=8603) __call__ /opt/stack/nova/nova/rpc/impl_kombu.py:629
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,268 INFO nova.api.openstack.wsgi [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] http://172.18.0.146:8774/v1.1/2/servers/3d37d21e-8fba-46a4-a0e1-d137f1d17597/os-volume_attachments returned with HTTP 200
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,268 DEBUG nova.rpc [-] unpacked context: {'user_id': u'demo', 'roles': [u'Member', u'sysadmin', u'netadmin'], 'timestamp': u'2012-01-11T19:41:58.186031', 'auth_token': u'07302338-e85e-49cf-a50c-c230a30ad637', 'msg_id': None, 'remote_address': u'172.18.0.146', 'strategy': u'keystone', 'is_admin': False, 'request_id': u'229ab0dc-3085-4f4c-bb80-d8104c96bef8', 'project_id': u'2', 'read_deleted': u'no'} from (pid=8603) _unpack_context /opt/stack/nova/nova/rpc/impl_kombu.py:675
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,269 INFO nova.compute.manager [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] check_instance_lock: decorating: |<function attach_volume at 0x3393ed8>|
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,269 INFO nova.compute.manager [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0x31a96d0>| |<nova.rpc.impl_kombu.RpcContext object at 0x4a51fd0>| |3d37d21e-8fba-46a4-a0e1-d137f1d17597|
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,270 DEBUG nova.compute.manager [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] instance 3d37d21e-8fba-46a4-a0e1-d137f1d17597: getting locked state from (pid=8603) get_lock /opt/stack/nova/nova/compute/manager.py:1426
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,320 INFO nova.compute.manager [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] check_instance_lock: locked: |False|
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,320 INFO nova.compute.manager [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] check_instance_lock: admin: |False|
Jan ...

Revision history for this message
David Kranz (david-kranz) wrote :

After some more experimenting, it seems that after attempting to attach at an already existing virtual disk, the volumes get into a completely broken state. Trying to delete the volume fails, and a listing shows "error_deleting" as the state of the volume. I had to reboot the machine and rerun stack.sh to get back to a working state.

Thierry Carrez (ttx)
Changed in nova:
importance: Undecided → High
status: New → Confirmed
Revision history for this message
Brian Waldon (bcwaldon) wrote :

David, is this a duplicate of https://bugs.launchpad.net/nova/+bug/884635?

Revision history for this message
David Kranz (david-kranz) wrote :

I'm not sure. I think that bug is saying that volume-list claimed a volume was not attached when it actually was. The grammar of that bug report is a little off, so I can't quite tell.

Revision history for this message
Brian Waldon (bcwaldon) wrote :

OK, I can reproduce this. The other bug report has a different traceback, which I believe may already have been fixed.

Revision history for this message
Brian Waldon (bcwaldon) wrote :

The real problem here is that we need to leave the volume in a good state and communicate to the user what happened. Currently, this failure leaves the volume in a state where it cannot be deleted.
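
The merged fix below follows that principle: wrap the hypervisor attach in a try/except, roll the attachment back on failure so the volume stays deletable, and raise a specific exception rather than letting the raw libvirt error escape. A rough self-contained sketch of that cleanup-on-failure pattern; the names here are illustrative stand-ins, not nova's actual API:

    # Sketch of the "clean up, then re-raise" pattern; not the literal patch.
    # driver/volume_api stand in for nova's hypervisor driver and volume API.

    class DeviceIsBusy(Exception):
        """The requested guest device name is already in use."""

    def attach_volume(driver, volume_api, instance, volume, mountpoint):
        # Set up the block device connection, then ask the hypervisor
        # driver to attach it at the requested mountpoint.
        connection_info = volume_api.initialize_connection(volume, mountpoint)
        try:
            driver.attach_volume(connection_info, instance, mountpoint)
        except Exception:
            # Undo the half-finished attach so the volume is not left in
            # a state that blocks later detach/delete operations.
            volume_api.terminate_connection(volume, mountpoint)
            volume_api.detach(volume)
            raise DeviceIsBusy('device %s is already in use' % mountpoint)

Rolling back before re-raising is what keeps the volume out of the stuck "error_deleting" state described earlier.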

Changed in nova:
assignee: nobody → Brian Waldon (bcwaldon)
Brian Waldon (bcwaldon)
Changed in nova:
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/4611

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/4611
Committed: http://github.com/openstack/nova/commit/44067ba758b378c4c2c2ff88b0d1b7a3c27ac812
Submitter: Jenkins
Branch: master

commit 44067ba758b378c4c2c2ff88b0d1b7a3c27ac812
Author: Brian Waldon <email address hidden>
Date: Mon Feb 27 17:37:57 2012 -0800

    Call detach_volume when attach fails

    * Fixes bug 914974
    * Raise exception.DeviceBusy when volume cannot attach

    Change-Id: Ie18377ba6acd6226612c70fa209185cc579c2d85

Changed in nova:
status: In Progress → Fix Committed
Brian Waldon (bcwaldon)
Changed in nova:
milestone: none → essex-4
Thierry Carrez (ttx)
Changed in nova:
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in nova:
milestone: essex-4 → 2012.1