I've tracked down the source of the problem deleting volumes that are
referenced by snapshots using nova.volume.ISCSIDriver.
When the driver detects that the volume is busy, the delete_volume()
method in the VolumeDriver class in nova/volume/driver.py raises a
VolumeIsBusy exception with a keyword argument. However, the VolumeIsBusy
exception is derived from Error in nova/exception.py, and neither
VolumeIsBusy nor Error accepts keyword arguments. As a result, a
TypeError ("unexpected keyword argument") is raised instead, which the
volume manager doesn't expect, so it sets the status of the volume to
'error_deleting' and stops. See the log fragment below.
Deleting volumes works if I change nova/exception.py so that the
VolumeIsBusy exception derives from NovaException instead of Error.
NovaException can handle keyword arguments, so the caller of
VolumeDriver.delete_volume() sees the exception it expects (VolumeIsBusy)
and can handle the situation correctly.
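The failure mode is easy to reproduce outside Nova. Below is a minimal sketch (not Nova's actual code; both base-class bodies are simplified assumptions) contrasting an Error-style base class, whose __init__ takes no keyword arguments, with a NovaException-style base class that interpolates keyword arguments into its message template:

```python
class Error(Exception):
    """Plain base class: accepts only an optional positional message."""
    def __init__(self, message=None):
        super().__init__(message)

class NovaException(Exception):
    """Base class that interpolates keyword args into a message template."""
    message = "An unknown exception occurred."

    def __init__(self, **kwargs):
        super().__init__(self.message % kwargs)

class BusyViaError(Error):
    message = "deleting volume %(volume_name)s that has snapshot"

class BusyViaNovaException(NovaException):
    message = "deleting volume %(volume_name)s that has snapshot"

# The Error-derived class blows up with a TypeError before VolumeIsBusy
# is ever constructed -- this is the "<__init__() got an unexpected
# keyword argument 'volume_name'>" seen in the log below.
try:
    raise BusyViaError(volume_name="volume-000000a4")
except TypeError as exc:
    print("broken:", exc)

# The NovaException-derived class formats the message as intended.
try:
    raise BusyViaNovaException(volume_name="volume-000000a4")
except BusyViaNovaException as exc:
    print("fixed:", exc)  # deleting volume volume-000000a4 that has snapshot
```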
Any opinions on whether this is the 'correct' way to fix this problem?
regards,
Ollie
2011-11-16 11:06:46,211 DEBUG nova.utils [-] Running cmd (subprocess): sudo lvdisplay --noheading -C -o Attr nova-volumes/volume-000000a4 from (pid=6234) execute /usr/lib/python2.7/dist-packages/nova/utils.py:165
2011-11-16 11:06:46,271 ERROR nova.volume.manager [-] driver raised exception <__init__() got an unexpected keyword argument 'volume_name'>
2011-11-16 11:06:46,296 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE: File "/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py", line 620, in _process_data
(nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE: File "/usr/lib/python2.7/dist-packages/nova/volume/manager.py", line 189, in delete_volume
(nova.rpc): TRACE: raise
(nova.rpc): TRACE: TypeError: exceptions must be old-style classes or derived from BaseException, not NoneType
(nova.rpc): TRACE:
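The final TypeError in the trace is a Python 2 artifact: the bare `raise` at manager.py line 189 executes after the original exception context has been consumed, so Python 2 tries to re-raise None. Here is a minimal sketch of the same pattern (hypothetical function, not Nova's code); note that Python 3 reports it as a RuntimeError ("No active exception to re-raise") rather than the TypeError shown above:

```python
def delete_volume():
    try:
        raise ValueError("driver failed")   # stand-in for the driver error
    except Exception:
        pass                                # handler ends; exception context is gone
    raise                                   # bare raise with nothing to re-raise

try:
    delete_volume()
except RuntimeError as exc:
    # Python 2 raised here: TypeError: exceptions must be old-style
    # classes or derived from BaseException, not NoneType
    print(exc)
```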
# diff -c /usr/share/pyshared/nova/exception.py{,.backup}
*** /usr/share/pyshared/nova/exception.py 2011-11-16 11:46:31.000000000 +0000
--- /usr/share/pyshared/nova/exception.py.backup 2011-11-16 11:39:04.000000000 +0000
***************
*** 370,376 ****
message = _("Snapshot %(snapshot_id)s could not be found.")
! class VolumeIsBusy(NovaException):
message = _("deleting volume %(volume_name)s that has snapshot")
--- 370,376 ----
message = _("Snapshot %(snapshot_id)s could not be found.")
! class VolumeIsBusy(Error):
message = _("deleting volume %(volume_name)s that has snapshot")
-----Original Message-----
From: Leahy, Oliver
Sent: 15 November 2011 14:35
To: 'Bug 888649'
Subject: RE: [Bug 888649] [NEW] Snapshots left in undeletable state
On my installation I can delete a volume that is referenced by a
snapshot. I will try to understand why this is so and get back to
you.
Though, as I said in my previous message, the behavior of the LVM-based
driver is different from the behavior I described in my original bug
report.
Regards,
Ollie
-----Original Message-----
From: <email address hidden> [mailto:<email address hidden>] On Behalf Of Isaku Yamahata
Sent: 14 November 2011 19:06
To: Leahy, Oliver
Subject: Re: [Bug 888649] [NEW] Snapshots left in undeletable state
On Thu, Nov 10, 2011 at 05:43:59PM -0000, Ollie Leahy wrote:
> $ euca-create-volume -s 1 -z nova
> VOLUME vol-0000007c 1 creating (bocktest, None, None, None) 2011-11-10T17:36:41Z
> # euca-create-snapshot vol-0000007c
> SNAPSHOT snap-00000018 vol-0000007c creating 2011-11-10T17:37:33Z 0%
> # euca-delete-volume vol-0000007c
> VOLUME vol-0000007c
A volume in LVM that has snapshots shouldn't be deletable; delete_volume()
should fail with a VolumeIsBusy exception.
(At least that was my intention for VolumeDriver::delete_volume() with the
lvdisplay check.)
Can you please track down why it can be deleted?
If the volume in LVM is deleted, all of the derived snapshots are
deleted at the same time, so the following lvremove for the snapshot
would fail. That explains the behavior described below.
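For reference, the driver's busy check is based on the LVM attribute string: the log fragment above shows it running `lvdisplay --noheading -C -o Attr`, and an origin LV that has snapshots reports 'o' as the first attribute character. A sketch of that check (hypothetical helper and example Attr strings, not Nova's actual code or captured output):

```python
def volume_is_busy(lvdisplay_attr):
    """Return True if the lvdisplay Attr output marks the LV as a
    snapshot origin ('o' in the first attribute position)."""
    return lvdisplay_attr.strip().startswith("o")

print(volume_is_busy("  owi-a-\n"))   # origin with snapshots -> True
print(volume_is_busy("  -wi-ao\n"))   # plain LV -> False
```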
Bug description:
If a volume is created using the euca API, a snapshot is created from
the volume, and the volume is then deleted, the snapshot cannot be
deleted afterwards. If the user tries to delete the snapshot, it ends
up in the state 'error_deleting' and remains in the system.
The following sequence of euca commands illustrates the problem
$ euca-create-volume -s 1 -z nova
VOLUME vol-0000007c 1 creating (bocktest, None, None, None) 2011-11-10T17:36:41Z
# euca-create-snapshot vol-0000007c
SNAPSHOT snap-00000018 vol-0000007c creating 2011-11-10T17:37:33Z 0%
# euca-delete-volume vol-0000007c
VOLUME vol-0000007c
# euca-delete-snapshot snap-00000018
SNAPSHOT snap-00000018
# euca-describe-snapshots
SNAPSHOT snap-00000018 vol-0000007c error_deleting 2011-11-10T17:37:33Z 100%
# euca-delete-snapshot snap-00000018
Traceback (most recent call last):
File "/usr/bin/euca-delete-snapshot", line 110, in <module>
main()
File "/usr/bin/euca-delete-snapshot", line 101, in main
return_code = euca_conn.delete_snapshot(snapshot_id)
File "/usr/lib/pymodules/python2.7/boto/ec2/connection.py", line 1112, in delete_snapshot
return self.get_status('DeleteSnapshot', params)
File "/usr/lib/pymodules/python2.7/boto/connection.py", line 648, in get_status
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0"?>
<Response><Errors><Error><Code>ApiError</Code><Message>Snapshot status must be available</Message></Error></Errors><RequestID>286ce49b-3c6d-4dcb-8130-52afe8b9ba94</RequestID></Response>
--
You received this bug notification because you are subscribed to the bug
report.
https://bugs.launchpad.net/bugs/888649

Title:
  Snapshots left in undeletable state

Status in OpenStack Compute (Nova):
  New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/888649/+subscriptions