in multi_host nova-network mode, nova-network doesn't reassociate the floating IPs on reboot

Bug #827807 reported by Do Dinh Thang
This bug affects 4 people
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: Medium
Assigned to: Vish Ishaya
Milestone: 2011.3

Bug Description

I set up and run nova-network in multi_host mode.
When I restart the nova-network service on Node1, I must rerun "euca-associate-address" to re-associate the floating IP with the instance on Node1.
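
For illustration, the manual step that has to be rerun is roughly the following (the instance ID and floating IP are example values, not the actual ones from this setup):

# re-associate the floating address with its instance after nova-network restarts
euca-associate-address -i i-00000006 192.168.1.240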

Related branches

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Haven't had a chance to reproduce this yet. But this is an important fix so I'm targeting it.

Changed in nova:
status: New → Triaged
importance: Undecided → Medium
milestone: none → diablo-rbp
Thierry Carrez (ttx)
Changed in nova:
assignee: nobody → Vish Ishaya (vishvananda)
Thierry Carrez (ttx)
Changed in nova:
milestone: diablo-rbp → 2011.3
Revision history for this message
Eric Dodemont (dodeeric) wrote :

I have just re-installed Nova on two nodes with the latest trunk version, and I am seeing the same bug.

a) I associate floating IPs to my running instances: OK

b) I restart nova-network: NOK

==> "euca-describe-address" and "euca-describe-instances" still show the floating IPs as associated to the instances;
==> "iptables -nL -t nat" does not show anymore the NAT rules for the floating IPs.

Workaround: de-associate, then re-associate the floating IPs (see the sketch below).
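
As shell commands, the workaround looks roughly like this (the address and instance ID are the ones from the example in my next comment; adjust them to your setup):

# drop the stale association, then recreate it so the NAT rules are rewritten
euca-disassociate-address 192.168.1.240
euca-associate-address -i i-00000006 192.168.1.240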

Nova version: 2011.3~rc~20110909.1541-0ubuntu0ppa1~natty1
OS version: Ubuntu server Natty (11.04) 64 bit

Revision history for this message
Eric Dodemont (dodeeric) wrote :

Example:

- Floating IP = 192.168.1.240
- Fixed IP = 10.0.3.8 (i-00000006)

---

root@node1:~# euca-describe-addresses
ADDRESS 192.168.1.240 i-00000006 (project-toc)

root@node1:~# iptables -nL -t nat | grep 192.168.1.240
DNAT all -- 0.0.0.0/0 192.168.1.240 to:10.0.3.8
DNAT all -- 0.0.0.0/0 192.168.1.240 to:10.0.3.8
SNAT all -- 10.0.3.8 0.0.0.0/0 to:192.168.1.240

root@node1:~# iptables-save -t nat > /root/nat.rules

---

root@node1:~# stop nova-network
nova-network stop/waiting

root@node1:~# iptables -nL -t nat | grep 240
DNAT all -- 0.0.0.0/0 192.168.1.240 to:10.0.3.8
DNAT all -- 0.0.0.0/0 192.168.1.240 to:10.0.3.8
SNAT all -- 10.0.3.8 0.0.0.0/0 to:192.168.1.240

root@node1:~# start nova-network
nova-network start/running, process 5314

root@node1:~# iptables -nL -t nat | grep 192.168.1.240
==> The floating IP NAT rules are no longer there!

root@node1:~# ip addr | grep 192.168.1.240
inet 192.168.1.240/32 scope global eth0 ==> Floating IP still configured on the interface

root@node1:~# euca-describe-addresses
ADDRESS 192.168.1.240 i-00000006 (project-toc) ==> Floating IP still associated in the DB

---

root@node1:~# iptables-restore < /root/nat.rules

root@node1:~# iptables -nL -t nat | grep 192.168.1.240
DNAT all -- 0.0.0.0/0 192.168.1.240 to:10.0.3.8
DNAT all -- 0.0.0.0/0 192.168.1.240 to:10.0.3.8
SNAT all -- 10.0.3.8 0.0.0.0/0 to:192.168.1.240

Revision history for this message
Vladimir Popovski (vladimir.p) wrote :

We are seeing the same in one of our configs without multi-host. Just regular FlatDHCP. Have not checked on other configs.

Revision history for this message
Vish Ishaya (vishvananda) wrote :

Can you guys verify whether the linked branch:
https://code.launchpad.net/~vishvananda/nova/fix-floating-reboot

solves your problem? Please note that only newly associated IPs will work. You should be able to fix the old ones by manually updating the host in the floating_ips table to be the hostname of the host where the NAT rules should appear (see the example below).
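
For reference, a rough sketch of that manual fix, assuming a MySQL-backed nova database; the host name and address are example values, and the column names are the ones in the floating_ips table (adjust credentials to your install):

# point the floating IP at the host that should hold its NAT rules (example values)
mysql nova -e "UPDATE floating_ips SET host = 'node1' WHERE address = '192.168.1.240';"
# then restart nova-network on that host so it re-applies the rules on startup
restart nova-network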

Revision history for this message
Eric Dodemont (dodeeric) wrote :

Vish,

I just tested your branch (https://code.launchpad.net/~vishvananda/nova/fix-floating-reboot) and it indeed solves the problem.

Now, after a restart of the nova-network service (or after a reboot of the nova-network node), the floating IPs are correctly reconfigured in the iptables NAT table.

Eric

Changed in nova:
status: Triaged → Fix Committed
Thierry Carrez (ttx)
Changed in nova:
status: Fix Committed → Fix Released