Creating a server did not fail when the Quantum quota limit was exceeded

Bug #1192287 reported by terryg2012
This bug affects 5 people
Affects: OpenStack Compute (nova)
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

I am using the released Grizzly bits (2013.1.1) with one controller node, one quantum node, and 4 compute nodes.

I encountered an issue where the Quantum port quota was exceeded, but the nova create-instance request did not return an error status. The scheduler appears to bounce the request across the 4 compute nodes, and 3 of them responded with a quantum quota-exceeded error. Shouldn't the create request fail immediately on the first failure? Or, judging from the log file, shouldn't it fail after 3 attempts?
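In the meantime, a client can fail fast on its own by checking the port quota before calling nova boot. A minimal sketch using the Grizzly-era python-quantumclient (credentials are placeholders and the helper name is mine; show_quota and list_ports are the standard v2.0 client calls):

# Pre-flight check: fail fast when the tenant has no port quota headroom,
# instead of letting the scheduler bounce the build between compute nodes.
# Placeholder credentials; the helper name is mine.
from quantumclient.v2_0 import client as qclient

def port_quota_headroom(tenant_id):
    quantum = qclient.Client(username='admin',
                             password='secret',
                             tenant_name='admin',
                             auth_url='http://controller:5000/v2.0/')
    quota = quantum.show_quota(tenant_id)['quota']['port']
    used = len(quantum.list_ports(tenant_id=tenant_id)['ports'])
    if 0 <= quota <= used:          # a quota of -1 means unlimited
        raise RuntimeError('port quota exhausted: %d/%d used' % (used, quota))
    return quota - used

This only narrows the window, of course; the quota can still be consumed by another request between the check and the boot.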

Portion of my test log showing the instance creation attempts:

[Date]      [Time]    [A:n,E:n]  Message (server name - task state, status)
==========  ========  =========  ===========================================
[2013-06-17][13:07:09][A:0,E:0] Testing server testserver-20130617-130707-25 - networking, BUILD
[2013-06-17][13:07:15][A:0,E:0] Testing server testserver-20130617-130707-25 - None, ACTIVE
[2013-06-17][13:07:16][A:0,E:0] Waiting for server testserver-20130617-130715-26 with ID f6fb6968-82e7-4b36-87cc-c18eba189f71 to get spawned
[2013-06-17][13:07:18][A:0,E:0] Testing server testserver-20130617-130715-26 - networking, BUILD
[2013-06-17][13:07:24][A:0,E:0] Testing server testserver-20130617-130715-26 - None, ACTIVE
[2013-06-17][13:07:25][A:0,E:0] Waiting for server testserver-20130617-130724-27 with ID 4ed9a635-6fc4-4d4b-9a64-8dfd4574fddd to get spawned
[2013-06-17][13:07:26][A:0,E:0] Testing server testserver-20130617-130724-27 - networking, BUILD
[2013-06-17][13:07:32][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:07:37][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:07:43][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:07:49][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:07:55][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:08:01][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:08:07][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:08:13][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:08:19][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:08:25][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:08:31][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:08:36][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:08:42][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:08:48][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:08:54][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD
[2013-06-17][13:09:00][A:0,E:0] Testing server testserver-20130617-130724-27 - scheduling, BUILD

nova-scheduler.log
2013-06-17 13:07:26.287 ERROR nova.scheduler.filter_scheduler [req-816e9730-fe41-493a-93c5-ca79237705eb 6891ad89906646e4934ae6014eb07521 29ca96af08984e9288ec3f98a743197a] [instance: 4ed9a635-6fc4-4d4b-9a64-8dfd4574fddd] Error from last host: cld5b3 (node cld5b3.casl.adapps.hp.com): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 831, in _run_instance\n requested_networks, macs, security_groups)\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1075, in _allocate_network\n instance=instance)\n', u' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071, in _allocate_network\n security_groups=security_groups)\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 46, in wrapper\n res = f(self, context, *args, **kwargs)\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 281, in allocate_for_instance\n \'exception\': ex})\n', u' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 264, in allocate_for_instance\n quantum.create_port(port_req_body)[\'port\'][\'id\'])\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 107, in with_params\n ret = self.function(instance, *args, **kwargs)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 269, in create_port\n return self.post(self.ports_path, body=body)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 987, in post\n headers=headers, params=params)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 912, in do_request\n self._handle_fault_response(status_code, replybody)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 893, in _handle_fault_response\n exception_handler_v20(status_code, des_error_body)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 80, in exception_handler_v20\n message=error_dict)\n', u"QuantumClientException: Quota exceeded for resources: ['port']\n"]
2013-06-17 13:07:28.584 ERROR nova.scheduler.filter_scheduler [req-816e9730-fe41-493a-93c5-ca79237705eb 6891ad89906646e4934ae6014eb07521 29ca96af08984e9288ec3f98a743197a] [instance: 4ed9a635-6fc4-4d4b-9a64-8dfd4574fddd] Error from last host: cld5b5 (node cld5b5.casl.adapps.hp.com): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 831, in _run_instance\n requested_networks, macs, security_groups)\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1075, in _allocate_network\n instance=instance)\n', u' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071, in _allocate_network\n security_groups=security_groups)\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 46, in wrapper\n res = f(self, context, *args, **kwargs)\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 281, in allocate_for_instance\n \'exception\': ex})\n', u' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 264, in allocate_for_instance\n quantum.create_port(port_req_body)[\'port\'][\'id\'])\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 107, in with_params\n ret = self.function(instance, *args, **kwargs)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 269, in create_port\n return self.post(self.ports_path, body=body)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 987, in post\n headers=headers, params=params)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 912, in do_request\n self._handle_fault_response(status_code, replybody)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 893, in _handle_fault_response\n exception_handler_v20(status_code, des_error_body)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 80, in exception_handler_v20\n message=error_dict)\n', u"QuantumClientException: Quota exceeded for resources: ['port']\n"]
2013-06-17 13:07:30.398 ERROR nova.scheduler.filter_scheduler [req-816e9730-fe41-493a-93c5-ca79237705eb 6891ad89906646e4934ae6014eb07521 29ca96af08984e9288ec3f98a743197a] [instance: 4ed9a635-6fc4-4d4b-9a64-8dfd4574fddd] Error from last host: cld5b4 (node cld5b4.casl.adapps.hp.com): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 831, in _run_instance\n requested_networks, macs, security_groups)\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1075, in _allocate_network\n instance=instance)\n', u' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071, in _allocate_network\n security_groups=security_groups)\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 46, in wrapper\n res = f(self, context, *args, **kwargs)\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 281, in allocate_for_instance\n \'exception\': ex})\n', u' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', u' File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 264, in allocate_for_instance\n quantum.create_port(port_req_body)[\'port\'][\'id\'])\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 107, in with_params\n ret = self.function(instance, *args, **kwargs)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 269, in create_port\n return self.post(self.ports_path, body=body)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 987, in post\n headers=headers, params=params)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 912, in do_request\n self._handle_fault_response(status_code, replybody)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 893, in _handle_fault_response\n exception_handler_v20(status_code, des_error_body)\n', u' File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 80, in exception_handler_v20\n message=error_dict)\n', u"QuantumClientException: Quota exceeded for resources: ['port']\n"]
2013-06-17 13:07:30.399 WARNING nova.scheduler.manager [req-816e9730-fe41-493a-93c5-ca79237705eb 6891ad89906646e4934ae6014eb07521 29ca96af08984e9288ec3f98a743197a] Failed to schedule_run_instance: No valid host was found. Exceeded max scheduling attempts 3 for instance 4ed9a635-6fc4-4d4b-9a64-8dfd4574fddd
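
The three "Error from last host" entries match nova's default scheduler_max_attempts of 3: each compute node's port-quota failure triggers another scheduling attempt until the limit is hit. A simplified sketch of that retry accounting (illustrative only, not nova's actual filter_scheduler code; nova keeps similar state in filter_properties['retry']):

# Illustrative retry accounting: each failed build bumps the attempt
# counter; past the limit the scheduler gives up with "No valid host",
# as in the log above. Not nova's actual implementation.
SCHEDULER_MAX_ATTEMPTS = 3  # nova's default

def populate_retry(filter_properties, instance_uuid):
    retry = filter_properties.setdefault('retry',
                                         {'num_attempts': 0, 'hosts': []})
    retry['num_attempts'] += 1
    if retry['num_attempts'] > SCHEDULER_MAX_ATTEMPTS:
        raise RuntimeError('No valid host was found. Exceeded max '
                           'scheduling attempts %d for instance %s'
                           % (SCHEDULER_MAX_ATTEMPTS, instance_uuid))
    return retry

Because the port quota is a tenant-wide limit, every one of these retries is doomed from the start.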

tags: added: network
Revision history for this message
li,chen (chen-li) wrote :

I have hit the same issue.
The instance stalled in BUILD status forever.
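
As a workaround, a wedged instance can be forced into ERROR and then deleted. A sketch with the Grizzly-era python-novaclient (reset_state normally requires the admin role; the credentials and UUID below are placeholders):

# Force a server stuck in BUILD into ERROR so it can be deleted.
from novaclient.v1_1 import client as nclient

nova = nclient.Client('admin', 'secret', 'admin',
                      'http://controller:5000/v2.0/')
server = nova.servers.get('4ed9a635-6fc4-4d4b-9a64-8dfd4574fddd')
server.reset_state('error')   # same effect as `nova reset-state <uuid>`
server.delete()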

Changed in nova:
status: New → Confirmed
Revision history for this message
Tiantian Gao (gtt116) wrote :

Seems related to this bug: https://bugs.launchpad.net/nova/+bug/1161661

Thang Pham (thang-pham)
Changed in nova:
assignee: nobody → Thang Pham (thang-pham)
Thang Pham (thang-pham)
Changed in nova:
assignee: Thang Pham (thang-pham) → nobody
Qiu Yu (unicell)
Changed in nova:
assignee: nobody → Qiu Yu (unicell)
Revision history for this message
Joe Gordon (jogo) wrote :

It sounds like the issue here is that we shouldn't reschedule on a quota failure in neutron.
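
A tenant-wide quota failure hits every host equally, so retrying elsewhere cannot succeed. A hypothetical sketch of that direction (the exception class and the message check are illustrative, not an actual nova patch):

# Hypothetical: translate a port-quota failure into a build-aborting
# error instead of a reschedulable one, since the quota is per tenant,
# not per compute node.
from quantumclient.common import exceptions as q_exc

class PortQuotaExceeded(Exception):
    """Build-aborting error: retrying on another host cannot help."""

def create_port_or_abort(quantum, port_req_body, instance_uuid):
    try:
        return quantum.create_port(port_req_body)['port']['id']
    except q_exc.QuantumClientException as ex:
        if 'Quota exceeded' in str(ex):
            # Abort the build outright instead of rescheduling.
            raise PortQuotaExceeded('instance %s: %s' % (instance_uuid, ex))
        raise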

Brent Eagles (beagles)
tags: added: neutron
Revision history for this message
haruka tanizawa (h-tanizawa) wrote :

Hi Qiu!
If you aren't actively working on this bug, I recommend that you unassign yourself.

Joe Gordon (jogo)
tags: added: quotas
Claudiu Belu (cbelu)
Changed in nova:
assignee: Qiu Yu (unicell) → nobody
Revision history for this message
Markus Zoeller (markus_z) (mzoeller) wrote : Cleanup EOL bug report

This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix this.
After this time it is unlikely that the circumstances which led to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: <RELEASE_NAME>"
  Only the names of still-supported releases are valid (LIBERTY, MITAKA, NEWTON, OCATA).
  Valid example: CONFIRMED FOR: LIBERTY

Changed in nova:
status: Confirmed → Expired