failed to start VM in multi-node environment

Bug #1643461 reported by Yuli
Affects: DragonFlow | Status: Fix Released | Importance: Critical | Assigned to: Yuli

Bug Description

Hello

Due to some errors in selective distribution, I decided to disable this feature and continue running the tests to start VMs.

I am running a two-node environment.
I executed the following commands:

cd ~/devstack
source ./openrc admin demo
nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec --nic net-name=private --security-group default test2

After some time, the "nova list" command shows an error:
+--------------------------------------+-------+--------+------------+-------------+----------+
| ID                                   | Name  | Status | Task State | Power State | Networks |
+--------------------------------------+-------+--------+------------+-------------+----------+
| 318dc00b-09a6-4758-9193-fa078cdb23ce | test2 | ERROR  | -          | NOSTATE     |          |
+--------------------------------------+-------+--------+------------+-------------+----------+

When checking the status of the VM, I see an error message:
{"message": "Build of instance 318dc00b-09a6-4758-9193-fa078cdb23ce aborted: Failed to allocate the network(s), not rescheduling.", "code": 500, "details": " File \"/opt/stack/nova/nova/compute/manager.py\", line 1779, in _do_build_and_run_instance
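(Not from the original report: a minimal sketch of how this failure mode can be spotted in a script. The "nova list" row is stubbed inline so the filter runs offline; in a real run it would be output="$(nova list)".)

```shell
# Stubbed "nova list" row from above (real run: output="$(nova list)")
output='| 318dc00b-09a6-4758-9193-fa078cdb23ce | test2 | ERROR | - | NOSTATE |  |'
# Print the Name of any row whose Status column is ERROR
echo "$output" | awk -F'|' '{gsub(/ /, "", $3); gsub(/ /, "", $4)} $4 == "ERROR" {print $3}'
```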

Revision history for this message
Yuli (stremovsky) wrote :

After additional testing, I see that the VM port's status is always DOWN.
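(Not from the report: one hedged way to confirm the DOWN port, assuming a python-openstackclient that supports the --server filter and the default column layout; the port row below is a stub with a placeholder PORT_ID so the filter itself is runnable offline.)

```shell
# Real run would be (assuming openstackclient supports --server):
#   ports="$(openstack port list --server test2)"
ports='| PORT_ID | | fa:16:3e:00:00:01 | ip_address=10.0.0.5 | DOWN |'
# Print the ID of any port whose Status column is DOWN
echo "$ports" | awk -F'|' '{gsub(/ /, "", $2); gsub(/ /, "", $6)} $6 == "DOWN" {print $2}'
```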

Changed in dragonflow:
importance: Undecided → Critical
assignee: nobody → Yuli (stremovsky)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to dragonflow (master)

Fix proposed to branch: master
Review: https://review.openstack.org/400102

Changed in dragonflow:
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on dragonflow (master)

Change abandoned by Yuli (<email address hidden>) on branch: master
Review: https://review.openstack.org/400102
Reason: A better solution exists for this problem; see Dima's fix: https://review.openstack.org/#/c/400334/

Revision history for this message
Yuli (stremovsky) wrote :

After additional investigation, I found that zmq was enabled
instead of redis on the multi-node installation.

That was the real cause of this bug.

Dima submitted a fix for that: https://review.openstack.org/#/c/400334/

Li Ma (nick-ma-z)
Changed in dragonflow:
status: In Progress → Fix Committed
Yuli (stremovsky)
Changed in dragonflow:
status: Fix Committed → Fix Released