[Ceph] Failed to delete a volume with its associated snapshot
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | In Progress | Undecided | Rajat Dhasmana |
Bug Description
Description of problem:
I created a volume and a snapshot of it. I then tried to delete the volume together with its snapshot
using the 'cascade' argument, and indeed both the volume and the snapshot were deleted.
But when I repeated this scenario while another volume created from
that snapshot existed, the deletion failed.
The deletion failure happens only with the Ceph driver.
Version-Release number of selected component (if applicable):
python-
openstack-
python-
puppet-
How reproducible:
100%
Steps to Reproduce:
1. Create a volume.
[stack@undercloud-0 ~]$ cinder create 1
+------
| Property | Value |
+------
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-03-
| description | None |
| encrypted | False |
| id | a0d485d5-
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | None |
| os-vol-
| os-vol-
| os-vol-
| os-vol-
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2017-03-
| user_id | 59007ad16aca471
| volume_type | None |
+------
2. Create a snapshot from that volume.
[stack@undercloud-0 ~]$ cinder snapshot-create a0d485d5-
+------
| Property | Value |
+------
| created_at | 2017-03-
| description | None |
| id | ffe07fae-
| metadata | {} |
| name | None |
| size | 1 |
| status | creating |
| updated_at | None |
| volume_id | a0d485d5-
+------
3. Create a volume from that snapshot.
[stack@undercloud-0 ~]$ cinder create --snapshot-id ffe07fae-
+------
| Property | Value |
+------
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-03-
| description | None |
| encrypted | False |
| id | 7156d8fd-
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | None |
| os-vol-
| os-vol-
| os-vol-
| os-vol-
| replication_status | disabled |
| size | 1 |
| snapshot_id | ffe07fae-
| source_volid | None |
| status | available |
| updated_at | 2017-03-
| user_id | 59007ad16aca471
| volume_type | None |
+------
[stack@undercloud-0 ~]$ cinder list
+------
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+------
| 7156d8fd-
| a0d485d5-
+------
[stack@undercloud-0 ~]$ cinder snapshot-list
+------
| ID | Volume ID | Status | Name | Size |
+------
| ffe07fae-
+------
4. Try to delete the volume together with its associated snapshot.
[stack@undercloud-0 ~]$ cinder delete a0d485d5-
Request to delete volume a0d485d5-
[stack@undercloud-0 ~]$ cinder list
+------
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+------
| 7156d8fd-
| a0d485d5-
+------
[stack@undercloud-0 ~]$ cinder snapshot-list
+------
| ID | Volume ID | Status | Name | Size |
+------
| ffe07fae-
+------
Actual results:
Deleting a volume together with its associated snapshot fails when another volume has been created from that snapshot.
Expected results:
Both the volume and the snapshot should be deleted, even when another volume has been created from that snapshot.
Additional info:
from /cinder/volume.log:
2017-03-30 08:50:23.178 7901 ERROR cinder.
2017-03-30 08:50:23.244 7901 ERROR cinder.
tags: | added: ceph drivers |
Changed in cinder: | |
status: | New → Confirmed |
assignee: | nobody → wangxiyuan (wangxiyuan) |
Changed in cinder: | |
assignee: | nobody → Rajat Dhasmana (whoami-rajat) |
status: | Confirmed → In Progress |
The reason is that RBD uses linked clones by default. If you set "rbd_flatten_volume_from_snapshot = True" (it is False by default) in the config file, this bug will not happen.
I'm not sure whether we should fix it now.
Maybe we should force-flatten the dependent volumes when a user wants to delete a snapshot that still has volumes created from it?
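The linked-clone behaviour described above can be sketched with a small toy model (illustrative names and simplified logic, not the actual Cinder RBD driver code): a volume cloned from a snapshot stays a copy-on-write child of that snapshot, so the snapshot cannot be removed while children exist, unless the clone is flattened at creation time (mirroring the effect of rbd_flatten_volume_from_snapshot = True).

```python
class Snapshot:
    def __init__(self, name):
        self.name = name
        self.children = set()   # linked clones backed by this snapshot


class Volume:
    def __init__(self, name, parent=None, flatten=False):
        self.name = name
        # A linked clone keeps a reference to its parent snapshot unless it
        # is flattened (fully copied) when created, as with
        # rbd_flatten_volume_from_snapshot = True.
        self.parent = None if flatten else parent
        if self.parent is not None:
            self.parent.children.add(self)


def delete_snapshot(snap):
    """Refuse to remove a snapshot that still backs linked clones,
    analogous to removing a protected RBD snapshot with children."""
    if snap.children:
        raise RuntimeError(
            "snapshot %s has dependent clones: %s"
            % (snap.name, sorted(c.name for c in snap.children)))


# Bug scenario: volume -> snapshot -> clone from snapshot, then cascade delete.
snap = Snapshot("ffe07fae-...")
clone = Volume("7156d8fd-...", parent=snap)      # linked clone, not flattened
try:
    delete_snapshot(snap)                        # cascade-delete step
    outcome = "deleted"
except RuntimeError as exc:
    outcome = "failed: %s" % exc

# With flattening enabled the clone holds no reference to its snapshot,
# so the same cascade delete succeeds.
snap2 = Snapshot("another-snap")
flat_clone = Volume("flat-clone", parent=snap2, flatten=True)
delete_snapshot(snap2)                           # no dependent clones
```

In this model the proposed fix would amount to flattening (clearing the parent reference of) each child before calling delete_snapshot, rather than requiring the flatten-at-creation config option.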