[Ceph] Failed to delete a volume with its associated snapshot

Bug #1677525 reported by Liron Kuchlani
This bug affects 3 people
Affects: Cinder
Status: In Progress
Importance: Undecided
Assigned to: Rajat Dhasmana

Bug Description

Description of problem:
I created a volume and a snapshot of it, then deleted the volume together with its snapshot
using the 'cascade' argument. As expected, both the volume and the snapshot were deleted.
However, when I repeated the scenario while another volume existed that had been created
from that snapshot, the deletion failed.
The deletion failure happens only with the Ceph driver.

Version-Release number of selected component (if applicable):
python-cinder-9.1.1-3.el7ost.noarch
openstack-cinder-9.1.1-3.el7ost.noarch
python-cinderclient-1.9.0-4.el7ost.noarch
puppet-cinder-9.4.1-3.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:

1. Create a volume.

[stack@undercloud-0 ~]$ cinder create 1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-03-30T08:10:20.000000 |
| description | None |
| encrypted | False |
| id | a0d485d5-f893-43f5-8a40-e1e762482c11 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | None |
| os-vol-host-attr:host | hostgroup@tripleo_ceph#tripleo_ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 7f554817cae44d38a697954b360ed350 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2017-03-30T08:10:20.000000 |
| user_id | 59007ad16aca4714888a74a3b63dde57 |
| volume_type | None |
+--------------------------------+--------------------------------------+

2. Create a snapshot from that volume.

[stack@undercloud-0 ~]$ cinder snapshot-create a0d485d5-f893-43f5-8a40-e1e762482c11
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| created_at | 2017-03-30T08:10:31.767236 |
| description | None |
| id | ffe07fae-ba40-48a5-b14c-3c5c51894ff2 |
| metadata | {} |
| name | None |
| size | 1 |
| status | creating |
| updated_at | None |
| volume_id | a0d485d5-f893-43f5-8a40-e1e762482c11 |
+-------------+--------------------------------------+

3. Create a volume from that snapshot.

[stack@undercloud-0 ~]$ cinder create --snapshot-id ffe07fae-ba40-48a5-b14c-3c5c51894ff2
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2017-03-30T08:10:58.000000 |
| description | None |
| encrypted | False |
| id | 7156d8fd-7434-48d4-a926-b64bef121f27 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | None |
| os-vol-host-attr:host | hostgroup@tripleo_ceph#tripleo_ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 7f554817cae44d38a697954b360ed350 |
| replication_status | disabled |
| size | 1 |
| snapshot_id | ffe07fae-ba40-48a5-b14c-3c5c51894ff2 |
| source_volid | None |
| status | available |
| updated_at | 2017-03-30T08:10:59.000000 |
| user_id | 59007ad16aca4714888a74a3b63dde57 |
| volume_type | None |
+--------------------------------+--------------------------------------+

[stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 7156d8fd-7434-48d4-a926-b64bef121f27 | available | - | 1 | - | false | |
| a0d485d5-f893-43f5-8a40-e1e762482c11 | available | - | 1 | - | false | |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

[stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+------+------+
| ID | Volume ID | Status | Name | Size |
+--------------------------------------+--------------------------------------+-----------+------+------+
| ffe07fae-ba40-48a5-b14c-3c5c51894ff2 | a0d485d5-f893-43f5-8a40-e1e762482c11 | available | - | 1 |
+--------------------------------------+--------------------------------------+-----------+------+------+

4. Try to delete the volume together with its associated snapshot.

[stack@undercloud-0 ~]$ cinder delete a0d485d5-f893-43f5-8a40-e1e762482c11 --cascade
Request to delete volume a0d485d5-f893-43f5-8a40-e1e762482c11 has been accepted.

[stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 7156d8fd-7434-48d4-a926-b64bef121f27 | available | - | 1 | - | false | |
| a0d485d5-f893-43f5-8a40-e1e762482c11 | available | - | 1 | - | false | |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

[stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+------+------+
| ID | Volume ID | Status | Name | Size |
+--------------------------------------+--------------------------------------+-----------+------+------+
| ffe07fae-ba40-48a5-b14c-3c5c51894ff2 | a0d485d5-f893-43f5-8a40-e1e762482c11 | available | - | 1 |
+--------------------------------------+--------------------------------------+-----------+------+------+

Actual results:
The cascade delete fails when another volume has been created from the snapshot; both the original volume and the snapshot remain in 'available' status.

Expected results:
The volume and its snapshot should be deleted even when another volume has been created from that snapshot.

Additional info:

from /cinder/volume.log:
2017-03-30 08:50:23.178 7901 ERROR cinder.volume.manager [req-4e083fa9-ce06-4881-8f0f-df06f9f3f267 59007ad16aca4714888a74a3b63dde57 7f554817cae44d38a697954b360ed350 - default default] Delete snapshot failed, due to snapshot busy.
2017-03-30 08:50:23.244 7901 ERROR cinder.volume.manager [req-4e083fa9-ce06-4881-8f0f-df06f9f3f267 59007ad16aca4714888a74a3b63dde57 7f554817cae44d38a697954b360ed350 - default default] Unable to delete busy volume.
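
The "snapshot busy" error matches RBD's linked-clone behavior: a protected snapshot that still has child clones cannot be removed. A minimal Python sketch of that dependency rule (a simulation for illustration, not Cinder or librbd code; all names are made up):

```python
class SnapshotBusy(Exception):
    """Raised when a snapshot still has dependent clones (mirrors RBD's busy error)."""

class Snapshot:
    def __init__(self, name):
        self.name = name
        self.children = []  # volumes cloned from this snapshot

    def delete(self):
        # RBD refuses to remove a protected snapshot that has child clones.
        if self.children:
            raise SnapshotBusy(
                "snapshot %s has %d clone(s)" % (self.name, len(self.children)))

snap = Snapshot("ffe07fae")
snap.children.append("7156d8fd")  # volume created from the snapshot in step 3

try:
    snap.delete()  # this is what the cascade delete attempts, and it fails
except SnapshotBusy as e:
    print(e)
```

With no volume cloned from the snapshot (`children` empty), `delete()` returns normally, which is why the first cascade delete in the description succeeded.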

Tags: ceph drivers
Revision history for this message
Liron Kuchlani (lkuchlan) wrote :
Eric Harney (eharney)
tags: added: ceph drivers
wangxiyuan (wangxiyuan)
Changed in cinder:
status: New → Confirmed
assignee: nobody → wangxiyuan (wangxiyuan)
Revision history for this message
wangxiyuan (wangxiyuan) wrote :

The reason is that RBD uses linked clones by default. If you set "rbd_flatten_volume_from_snapshot=True" (it is False by default) in the config file, this bug will not happen.
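
For reference, the workaround above would look roughly like this in cinder.conf (a sketch; the backend section name [tripleo_ceph] is assumed from the os-vol-host-attr:host field in the reproduction output):

```ini
[tripleo_ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# Flatten new volumes at creation time instead of keeping a linked clone,
# so the source snapshot has no dependent children and can be deleted.
# Defaults to False.
rbd_flatten_volume_from_snapshot = True
```

Note this only affects volumes created after the change; existing linked clones keep their parent snapshot busy.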

I'm not sure whether we should fix it now.

Maybe we should force-flatten the dependent volumes when a user wants to delete a snapshot that still has volumes created from it?
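
That force-flatten idea could be modeled like this (a Python sketch for illustration, not the actual driver change; all names are made up): flattening copies the clone's data and detaches it from the parent snapshot, after which the snapshot has no children and can be removed.

```python
class Snapshot:
    def __init__(self, name):
        self.name = name
        self.children = []          # clones still linked to this snapshot

    def delete(self):
        if self.children:           # mirrors RBD's "snapshot busy" refusal
            raise RuntimeError("snapshot busy")

class Clone:
    def __init__(self, name, parent):
        self.name = name
        self.parent = parent
        parent.children.append(self)

    def flatten(self):
        # Copy all data from the parent and drop the link (what 'rbd flatten' does).
        self.parent.children.remove(self)
        self.parent = None

def delete_snapshot_force(snap):
    """Flatten every dependent clone first, then delete the snapshot."""
    for clone in list(snap.children):
        clone.flatten()
    snap.delete()

snap = Snapshot("ffe07fae")
vol = Clone("7156d8fd", snap)
delete_snapshot_force(snap)  # succeeds: no children remain
```

The trade-off is that flattening duplicates the clone's data on the backend, so a forced cascade delete could be slow and space-hungry for large volumes.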

Revision history for this message
wangxiyuan (wangxiyuan) wrote :

I think https://review.openstack.org/#/c/432326/ can solve the problem.

Changed in cinder:
assignee: wangxiyuan (wangxiyuan) → nobody
Changed in cinder:
assignee: nobody → Rajat Dhasmana (whoami-rajat)
status: Confirmed → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on cinder (master)

Change abandoned by "Rajat Dhasmana <email address hidden>" on branch: master
Review: https://review.opendev.org/c/openstack/cinder/+/754397
Reason: decided to not go with this logic
