Nova fails to correctly update the quota when deleting lots of VMs, some of which previously failed due to a quota error

Bug #1668267 reported by Satya Sanjibani Routray
This bug affects 3 people
Affects: OpenStack Compute (nova)
Status: In Progress
Importance: High
Assigned to: Pushkar Umaranikar
Milestone: (none)

Bug Description

When a VM goes to the ERROR state after block device mapping fails, Nova still charges the instance/CPU/RAM usage against the quota, but when those VMs are deleted Nova does not release the CPU/RAM/instance quota.

Steps to reproduce:
1. Set the quota limit for volumes to 10 and VMs to 50.
2. Create 20 VMs from Horizon using the boot-from-volume option.
Observation:
10 VMs reach the ACTIVE state and 10 go to ERROR (as expected, since the volume quota is exhausted).

3. Delete all VMs.

Expectation: the used CPU/RAM/instance quota returns to "0".
Actual: the used CPU/RAM/instance quota does not return to "0":
CPU used quota shows 10 (the VMs were created from the M1.SMALL flavor)
Instance used quota shows 10
RAM used quota shows 20480 MB

This is quite unexpected (a rough CLI sketch of these steps follows below).

This seems to be a blocker for the Ocata release.

I am running stable/ocata.
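
For reference, a rough command-line equivalent of the reproduction steps; this is only a sketch of what Horizon does, and the project ID, image ID, and flavor name are placeholders for this environment:

# Step 1: set the quota limits ($PROJECT_ID is a placeholder).
nova quota-update --instances 50 $PROJECT_ID
cinder quota-update --volumes 10 $PROJECT_ID

# Step 2: boot 20 boot-from-volume servers ($IMAGE_ID and m1.small are placeholders).
for i in $(seq 1 20); do
    nova boot --flavor m1.small \
        --block-device source=image,id=$IMAGE_ID,dest=volume,size=1,bootindex=0,shutdown=remove \
        bfv-test-$i
done

# Step 3: delete everything, then check the used quota, which should drop back to 0.
nova delete $(nova list --minimal | awk '/bfv-test/ {print $2}')
nova absolute-limits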

Tags: quotas
Revision history for this message
John Garbutt (johngarbutt) wrote :

That's expected: you have to delete the instance to remove the quota usage.

That is by design, as it's very possible for an instance that is in the ERROR state to still be running and using hypervisor resources, IPs, volumes, etc.

You must delete it to remove the quota usage.

I don't believe that's a recent change. Did you see different behaviour on a previous release?

tags: added: quotas
Changed in nova:
status: New → Incomplete
Revision history for this message
Satya Sanjibani Routray (satroutr) wrote : Re: [Bug 1668267] Re: Nova ignores to update the quota for VMs in error

Please read step 3, where it says to delete all VMs.

Changed in nova:
status: Incomplete → New
Changed in nova:
status: New → Confirmed
Revision history for this message
John Garbutt (johngarbutt) wrote : Re: Nova ignores to update the quota for VMs in error

OK, if you delete them, we should correct the title of the bug.

What state are those deleted instances in now? Are they still in the ERROR state?

Revision history for this message
Satya Sanjibani Routray (satroutr) wrote :

nova list
/usr/lib/python2.7/site-packages/novaclient/client.py:278: UserWarning: The 'tenant_id' argument is deprecated in Ocata and its use may result in errors in future releases. As 'project_id' is provided, the 'tenant_id' argument will be ignored.
  warnings.warn(msg)
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
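
The same stale usage can also be checked from the CLI; a couple of commands along these lines (the project ID is a placeholder) should show the leftover counts, output omitted here:

nova absolute-limits                  # totalInstancesUsed / totalCoresUsed / totalRAMUsed
nova quota-show --tenant $PROJECT_ID  # configured limits for the project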

summary: - Nova ignores to update the quota for VMs in error
+ Nova fails to correctly update the quota when deleting lots of VMs, some
+ of which previously failed due to a quota error
Revision history for this message
Satya Sanjibani Routray (satroutr) wrote :

While deleting the VMs, I selected all 20 VMs in Horizon and clicked delete.

Revision history for this message
Satya Sanjibani Routray (satroutr) wrote :

The VMs were created using the boot-from-volume option in Horizon.

Revision history for this message
Satya Sanjibani Routray (satroutr) wrote :

No VMs are present after deletion, but the quota still shows 10 VMs in use.

Revision history for this message
John Garbutt (johngarbutt) wrote :

Can we retry this with 15 instances instead of 20, just to double-check what's going on?

It would be great if we could get logs from the system where the deletes occurred (the nova-api and nova-compute logs), to make sure there are no obvious exceptions that might give a hint about what is going on here.
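
As a sketch, assuming a typical devstack or packaged layout where the logs live under /var/log/nova (paths will differ per deployment), something like this would surface any exceptions logged around the deletes:

grep -iE 'ERROR|Traceback' /var/log/nova/nova-api.log | tail -n 50
grep -iE 'ERROR|Traceback' /var/log/nova/nova-compute.log | tail -n 50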

Changed in nova:
importance: Undecided → High
Revision history for this message
Satya Sanjibani Routray (satroutr) wrote :

Sure, working on it.

I will be able to give you the details in about an hour.

Changed in nova:
assignee: nobody → Sarafraj Singh (sarafraj-singh)
Changed in nova:
assignee: Sarafraj Singh (sarafraj-singh) → Pushkar Umaranikar (pushkar-umaranikar)
Revision history for this message
Maciej Szankin (mszankin) wrote :

Confirmed on my setup as well.
I set the instances quota to 10 and the volumes quota to 5.
Spawned 7 VMs with volumes attached; 2 of them failed as expected.
Used resources reported 5/10 VMs and 5/5 volumes.
After deleting all instances, the used resources are still the same.

Changed in nova:
status: Confirmed → In Progress
Revision history for this message
Satya Sanjibani Routray (satroutr) wrote :
Revision history for this message
Satya Sanjibani Routray (satroutr) wrote :

Any update on this?

Revision history for this message
Satya Sanjibani Routray (satroutr) wrote :

I tested the fix https://review.openstack.org/#/c/437222 and I am still seeing the same issue.

Revision history for this message
Matt Riedemann (mriedem) wrote :

For master (pike) this is the fix:

https://review.openstack.org/#/c/443403/

For ocata, the fix will be a backport of this:

https://review.openstack.org/#/c/443395/
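
To verify either fix locally, a sketch assuming a devstack-style checkout of nova under /opt/stack/nova and the git-review tool; the change number comes from the master review linked above:

cd /opt/stack/nova
git review -d 443403   # downloads the change for local testing
# Restart the nova services after checking out the change, then re-run the reproduction steps.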
