Reservations not decremented when instances are destroyed
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Compute (nova) | Fix Released | High | Sean Dague | |
Bug Description
Easily reproducible by running (on a fresh devstack install) the following Tempest test:
jpipes@
Pass really long metadata while creating a server ... SKIP: Until Bug 1004007 is fixed
Create a server with name length exceeding 256 characters ... ok
Create a server with an unknown flavor ... ok
Create a server with an unknown image ... ok
Pass invalid network uuid while creating a server ... ok
Pass a non existant keypair while creating a server ... ok
Delete a server that belongs to another tenant ... ok
Delete a non existent server ... ok
Pass a server ID that exceeds length limit to delete server ... ok
Pass an invalid string parameter to delete server ... ok
An access IPv4 address must match a valid address pattern ... ok
An access IPv6 address must match a valid address pattern ... ok
Use an unencoded file when creating a server with personality ... ok
Reboot a deleted server ... ok
Rebuild a deleted server ... ok
Create a server with name parameter empty ... ok
Update name of a non-existent server ... ok
Update name of server exceed the name length limit ... ok
Update name of a server that belongs to another tenant ... ok
Update name of the server to an empty string ... ok
-------
Ran 21 tests in 73.527s
OK (SKIP=1)
During the test run (which creates a *single* server, shared in the test case):
mysql> select * from quota_usages;
+------
| created_at | updated_at | deleted_at | deleted | id | project_id | resource | in_use | reserved | until_refresh |
+------
| 2012-06-05 17:08:44 | 2012-06-05 17:09:16 | NULL | 0 | 1 | 280196a9863341f
| 2012-06-05 17:08:44 | 2012-06-05 17:09:16 | NULL | 0 | 2 | 280196a9863341f
| 2012-06-05 17:08:44 | 2012-06-05 17:09:16 | NULL | 0 | 3 | 280196a9863341f
+------
3 rows in set (0.04 sec)
After the tests complete and the shared server instance is destroyed:
mysql> select * from quota_usages;
+------
| created_at | updated_at | deleted_at | deleted | id | project_id | resource | in_use | reserved | until_refresh |
+------
| 2012-06-05 17:08:44 | 2012-06-05 17:09:58 | NULL | 0 | 1 | 280196a9863341f
| 2012-06-05 17:08:44 | 2012-06-05 17:09:58 | NULL | 0 | 2 | 280196a9863341f
| 2012-06-05 17:08:44 | 2012-06-05 17:09:58 | NULL | 0 | 3 | 280196a9863341f
+------
3 rows in set (0.00 sec)
As you can see, the cores and instances reservations are not being cleaned up... BUT, the RAM is!
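For context, the quota_usages rows above are meant to follow a reserve/commit lifecycle: a reservation bumps `reserved`, and a successful commit folds that delta into `in_use` and zeroes `reserved` (deletes use negative deltas). The sketch below is a minimal illustrative model of that bookkeeping, not nova code; the class and method names are invented for the example.

```python
# Illustrative model (not nova code) of the bookkeeping behind one
# quota_usages row. If commits happen as intended, `reserved` returns
# to 0 after both the boot and the delete.

class QuotaUsage:
    """One resource row: in_use and reserved counters."""
    def __init__(self):
        self.in_use = 0
        self.reserved = 0

    def reserve(self, delta):
        # A reservation holds quota before the operation runs.
        self.reserved += delta

    def commit(self, delta):
        # On success, the reservation converts into real usage.
        self.reserved -= delta
        self.in_use += delta

    def rollback(self, delta):
        # On failure, the reservation is simply released.
        self.reserved -= delta

usage = QuotaUsage()
usage.reserve(1)    # boot request:       reserved=1, in_use=0
usage.commit(1)     # instance created:   reserved=0, in_use=1
usage.reserve(-1)   # delete request:     reserved=-1, in_use=1
usage.commit(-1)    # instance destroyed: reserved=0, in_use=0
print(usage.in_use, usage.reserved)  # → 0 0
```

The symptom in the table above corresponds to the commit step never taking effect, leaving `reserved` stale while `in_use` is corrected by the periodic usage refresh.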
summary:
- Reservations not destroyed when instances are destroyed
+ Reservations not decremented when instances are destroyed
Changed in nova:
importance: Undecided → High
status: New → Confirmed

Changed in nova:
assignee: nobody → Sean Dague (sdague-b)
tags: added: folsom-backport-potential
Actually, the ram reservation is not being cleaned up either, which is not surprising. Thanks for looking into this!
Let's see…it looks like in_use is updated, but reserved isn't. My best guess is that the reservations are not being committed like they're supposed to be, but that the usage has a refresh triggered. We should trace the reservations array through the RPC mechanism; they should ultimately be passed to QUOTAS.commit() immediately after creating the instance record in the database…
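The pattern being traced can be sketched as follows. This is a hypothetical reconstruction of the intended flow, not nova's actual code: `QuotasStub` and `db_instance_create` are stand-ins for nova's QUOTAS driver and the DB layer, written only to show where the commit (or rollback) is supposed to land relative to the instance insert.

```python
# Hypothetical sketch of the flow under suspicion: reservations are
# taken before the DB insert and must be committed immediately after
# it, or rolled back on failure. All names here are stand-ins.

class QuotasStub:
    def __init__(self):
        self.committed = []
        self.rolled_back = []

    def reserve(self, **deltas):
        # Returns an opaque reservation handle, one entry per resource.
        return list(deltas.items())

    def commit(self, reservations):
        self.committed.extend(reservations)

    def rollback(self, reservations):
        self.rolled_back.extend(reservations)

QUOTAS = QuotasStub()

def db_instance_create():
    # Placeholder for the real instance-record insert.
    return {"uuid": "fake-uuid"}

def create_instance():
    reservations = QUOTAS.reserve(instances=1, cores=1, ram=512)
    try:
        instance = db_instance_create()
    except Exception:
        # If the insert fails, the reservation must be released.
        QUOTAS.rollback(reservations)
        raise
    # The hypothesis above: this commit never takes effect, so the
    # `reserved` column in quota_usages is left stale.
    QUOTAS.commit(reservations)
    return instance

create_instance()
print(len(QUOTAS.committed))  # → 3 (instances, cores, ram deltas)
```

If the reservations array is dropped or mangled somewhere along the RPC path, the commit call either never fires or commits an empty list, which would produce exactly the table state shown above.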