Cannot ping or ssh instance when network manager is FlatManager

Bug #746909 reported by guanxiaohua2k6

This bug report was converted into a question: question #160954: Cannot ping or ssh instance when network manager is FlatManager.

This bug affects 1 person
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

I asked the same question when I was using the Bexar version, but I never got it solved. I tried the newest Cactus version and it failed too. Could anyone help me? I will paste the version info and related logs below.

BTW, I installed everything on a single machine (all-in-one).

---------------------------------------------------------------
Version of nova-compute: 2011.2~bzr930-0ubuntu0ppa1~maverick1
---------------------------------------------------------------

---------------------------------------------------------------------
euca-describe-instances:

RESERVATION r-80iakhmt IRT default
INSTANCE i-00000001 ami-43878e18 10.0.0.2 10.0.0.2 running mykey (IRT, ubuntu7) 0 m1.tiny 2011-04-01T02:05:19Z nova
---------------------------------------------------------------------

-----------------------------------------------------------------------
nova.conf:

--sql_connection=mysql://root:nova@ubuntu7/nova
--s3_host=ubuntu7
--rabbit_host=192.168.32.202
--cc_host=192.168.32.202
--ec2_url=http://192.168.32.202:8773/services/Cloud

--daemonize=1

--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge

--FAKE_subdomain=ec2

--ca_path=/var/lib/nova/CA
--keys_path=/var/lib/nova/keys
--networks_path=/var/lib/nova/networks
--instances_path=/var/lib/nova/instances
--images_path=/var/lib/nova/images
--buckets_path=/var/lib/nova/buckets

--libvirt_type=kvm

--network_manager=nova.network.manager.FlatManager

--vlan_interface=eth0

--logdir=/var/log/nova
--verbose
--volume_group=ubuntu7
--fixed_range=192.168.2.64/26
--network_size=64
--lock_path=/var/lib/nova/tmp
---------------------------------------------------------------------
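As a side note, the subnet described by `--fixed_range=192.168.2.64/26` together with `--network_size=64` can be sketched with Python 3's `ipaddress` module. This is illustrative only (nova parses these flags internally); the values below are just the ones from this nova.conf:

```python
import ipaddress

# Illustrative sketch only: what --fixed_range=192.168.2.64/26 with
# --network_size=64 describes. Nova itself parses these flags internally.
fixed_range = ipaddress.ip_network("192.168.2.64/26")

print(fixed_range.num_addresses)      # 64, matching --network_size=64
print(fixed_range.network_address)    # 192.168.2.64
print(fixed_range.broadcast_address)  # 192.168.2.127
```

A /26 holds exactly 64 addresses, so the two flags are at least consistent with each other here.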

---------------------------------------------------------------------
nova-compute.log:

2011-04-01 11:02:07,566 DEBUG nova.utils [-] backend <module 'nova.db.sqlalchemy.api' from '/usr/lib/pymodules/python2.6/nova/db/sqlalchemy/api.pyc'> from (pid=6662) __get_backend /usr/lib/pymodules/python2.6/nova/utils.py:427
2011-04-01 11:02:07,653 DEBUG nova.virt.libvirt_conn [-] Connecting to libvirt: qemu:///system from (pid=6662) _get_connection /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:248
2011-04-01 11:02:08,136 INFO nova.virt.libvirt_conn [-] Compute_service record created for ubuntu7
2011-04-01 11:02:08,139 DEBUG nova.rpc [-] Initing the Adapter Consumer for compute from (pid=6662) __init__ /usr/lib/pymodules/python2.6/nova/rpc.py:148
2011-04-01 11:02:08,186 DEBUG nova.rpc [-] Initing the Adapter Consumer for compute.ubuntu7 from (pid=6662) __init__ /usr/lib/pymodules/python2.6/nova/rpc.py:148
2011-04-01 11:02:08,226 INFO nova.rpc [-] Created 'compute_fanout' fanout exchange with 'compute' routing key
2011-04-01 11:02:08,226 DEBUG nova.rpc [-] Initing the Adapter Consumer for compute from (pid=6662) __init__ /usr/lib/pymodules/python2.6/nova/rpc.py:148
2011-04-01 11:05:19,994 DEBUG nova.rpc [-] received {'_context_request_id': 'FM0PYJP-UO1VRGMHOLGD', '_context_read_deleted': False, 'args': {'instance_id': 1, 'injected_files': None, 'availability_zone': None}, '_context_is_admin': True, '_context_timestamp': '2011-04-01T02:05:19Z', '_context_user': 'anne', 'method': 'run_instance', '_context_project': 'IRT', '_context_remote_address': '192.168.32.202'} from (pid=6662) _receive /usr/lib/pymodules/python2.6/nova/rpc.py:167
2011-04-01 11:05:19,994 DEBUG nova.rpc [-] unpacked context: {'timestamp': '2011-04-01T02:05:19Z', 'remote_address': '192.168.32.202', 'project': 'IRT', 'is_admin': True, 'user': 'anne', 'request_id': 'FM0PYJP-UO1VRGMHOLGD', 'read_deleted': False} from (pid=6662) _unpack_context /usr/lib/pymodules/python2.6/nova/rpc.py:331
2011-04-01 11:05:20,050 AUDIT nova.compute.manager [FM0PYJP-UO1VRGMHOLGD anne IRT] instance 1: starting...
2011-04-01 11:05:20,195 DEBUG nova.rpc [-] Making asynchronous call on network ... from (pid=6662) call /usr/lib/pymodules/python2.6/nova/rpc.py:350
2011-04-01 11:05:20,196 DEBUG nova.rpc [-] MSG_ID is 6437cf6b126f47438423453fe573a783 from (pid=6662) call /usr/lib/pymodules/python2.6/nova/rpc.py:353
2011-04-01 11:05:20,518 DEBUG nova.rpc [-] Making asynchronous call on network.ubuntu7 ... from (pid=6662) call /usr/lib/pymodules/python2.6/nova/rpc.py:350
2011-04-01 11:05:20,518 DEBUG nova.rpc [-] MSG_ID is 5992d3c83f894779a4f61429bd94a8d2 from (pid=6662) call /usr/lib/pymodules/python2.6/nova/rpc.py:353
2011-04-01 11:05:20,911 DEBUG nova.virt.libvirt_conn [-] instance instance-00000001: starting toXML method from (pid=6662) to_xml /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:941
2011-04-01 11:05:21,033 DEBUG nova.virt.libvirt_conn [-] instance instance-00000001: finished toXML method from (pid=6662) to_xml /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:985
2011-04-01 11:05:21,084 INFO nova [-] called setup_basic_filtering in nwfilter
2011-04-01 11:05:21,084 INFO nova [-] ensuring static filters
2011-04-01 11:05:21,187 INFO nova [-] <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x4757390>
2011-04-01 11:05:21,188 INFO nova [-] <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x47575d0>
2011-04-01 11:05:21,194 DEBUG nova.utils [-] Attempting to grab semaphore "iptables" for method "apply"... from (pid=6662) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-04-01 11:05:21,194 DEBUG nova.utils [-] Attempting to grab file lock "iptables" for method "apply"... from (pid=6662) inner /usr/lib/pymodules/python2.6/nova/utils.py:599
2011-04-01 11:05:21,200 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t filter from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:21,262 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:21,276 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t nat from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:21,296 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:21,339 DEBUG nova.utils [-] Running cmd (subprocess): mkdir -p /var/lib/nova/instances/instance-00000001/ from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:21,361 INFO nova.virt.libvirt_conn [-] instance instance-00000001: Creating image
2011-04-01 11:05:21,432 DEBUG nova.utils [-] Attempting to grab semaphore "59a07b31" for method "call_if_not_exists"... from (pid=6662) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-04-01 11:05:21,450 DEBUG nova.utils [-] Running cmd (subprocess): cp /var/lib/nova/instances/_base/59a07b31 /var/lib/nova/instances/instance-00000001/kernel from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:21,487 DEBUG nova.utils [-] Attempting to grab semaphore "456ef2cb" for method "call_if_not_exists"... from (pid=6662) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-04-01 11:05:21,506 DEBUG nova.utils [-] Running cmd (subprocess): cp /var/lib/nova/instances/_base/456ef2cb /var/lib/nova/instances/instance-00000001/ramdisk from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:21,546 DEBUG nova.utils [-] Attempting to grab semaphore "43878e18_sm" for method "call_if_not_exists"... from (pid=6662) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-04-01 11:05:21,598 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/43878e18_sm /var/lib/nova/instances/instance-00000001/disk from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:21,902 INFO nova.virt.libvirt_conn [-] instance instance-00000001: injecting key into image 1132957208
2011-04-01 11:05:21,914 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-00000001/disk from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:22,944 DEBUG nova.utils [-] Running cmd (subprocess): sudo tune2fs -c 0 -i 0 /dev/nbd15 from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:24,556 DEBUG nova.utils [-] Running cmd (subprocess): sudo mount /dev/nbd15 /tmp/tmpWcAFXb from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:24,617 DEBUG nova.utils [-] Running cmd (subprocess): sudo mkdir -p /tmp/tmpWcAFXb/root/.ssh from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:24,627 DEBUG nova.utils [-] Running cmd (subprocess): sudo chown root /tmp/tmpWcAFXb/root/.ssh from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:24,644 DEBUG nova.utils [-] Running cmd (subprocess): sudo chmod 700 /tmp/tmpWcAFXb/root/.ssh from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:24,657 DEBUG nova.utils [-] Running cmd (subprocess): sudo tee -a /tmp/tmpWcAFXb/root/.ssh/authorized_keys from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:24,682 DEBUG nova.utils [-] Running cmd (subprocess): sudo umount /dev/nbd15 from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:25,084 DEBUG nova.utils [-] Running cmd (subprocess): rmdir /tmp/tmpWcAFXb from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:25,121 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -d /dev/nbd15 from (pid=6662) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-04-01 11:05:27,122 DEBUG nova.virt.libvirt_conn [-] instance instance-00000001: is running from (pid=6662) spawn /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:570
2011-04-01 11:05:27,315 DEBUG nova.virt.libvirt_conn [-] instance instance-00000001: booted from (pid=6662) _wait_for_boot /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:581
-----------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------
log of euca-get-output:

ttylinux 12.1
 > http://ttylinux.org/
 > hostname: ttylinux_host

load Kernel Module: acpiphp [ OK ]
load Kernel Module: e1000 [ OK ]
load Kernel Module: ne2k-pci [ OK ]
[ 0.981490] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
load Kernel Module: 8139cp [ OK ]
load Kernel Module: pcnet32 [ OK ]
load Kernel Module: mii [ OK ]
load Kernel Module: ip_tables [ OK ]
file systems checked [ OK ]
mounting local file systems [ OK ]
setting up system clock [utc] Fri Apr 1 02:05:31 UTC 2011 [ OK ]
stty: /dev/console
stty: /dev/console
initializing random number generator [WATING].. [ OK ]
stty: /dev/console
startup klogd [ OK ]
startup syslogd [ OK ]
stty: /dev/console
stty: /dev/console
bringing up loopback interface lo [ OK ]
stty: /dev/console
udhcpc (v1.17.2) started
Sending discover...
Sending select for 192.168.32.191...
Lease of 192.168.32.191 obtained, lease time 86400
starting DHCP for Ethernet interface eth0 [ OK ]
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 1/30: up 2.05. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 2/30: up 6.08. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 3/30: up 9.09. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 4/30: up 13.12. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 5/30: up 16.13. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 6/30: up 20.16. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 7/30: up 23.16. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 8/30: up 27.19. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 9/30: up 30.20. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 10/30: up 34.23. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 11/30: up 37.24. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 12/30: up 41.26. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 13/30: up 44.27. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 14/30: up 48.30. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 15/30: up 51.31. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 16/30: up 55.34. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 17/30: up 58.35. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 18/30: up 62.38. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 19/30: up 65.39. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 20/30: up 69.42. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 21/30: up 72.43. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 22/30: up 76.46. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 23/30: up 79.47. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 24/30: up 83.50. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 25/30: up 86.51. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 26/30: up 90.54. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 27/30: up 93.55. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 28/30: up 97.58. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 29/30: up 100.59. request failed
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-setup: failed 30/30: up 104.62. request failed
cloud-setup: after 30 fails, debugging
cloud-setup: running debug (30 tries reached)
############ debug start ##############
### /etc/rc.d/init.d/sshd start
stty: /dev/console
generating DSS host key [WATING].. [ OK ]
generating RSA host key [WATING].. [ OK ]
startup dropbear [ OK ]
### ifconfig -a
eth0 Link encap:Ethernet HWaddr 02:16:3E:23:50:01
          inet addr:192.168.32.191 Bcast:192.168.32.255 Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:32 errors:0 dropped:0 overruns:0 frame:0
          TX packets:98 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2532 (2.4 KiB) TX bytes:4900 (4.7 KiB)
          Interrupt:11 Base address:0xc000

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          UP LOOPBACK RUNNING MTU:16436 Metric:1
          RX packets:60 errors:0 dropped:0 overruns:0 frame:0
          TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5280 (5.1 KiB) TX bytes:5280 (5.1 KiB)

### route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.32.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
0.0.0.0 192.168.32.254 0.0.0.0 UG 0 0 0 eth0
### cat /etc/resolv.conf
cat: can't open '/etc/resolv.conf': No such file or directory
### ping -c 5 192.168.32.254
PING 192.168.32.254 (192.168.32.254): 56 data bytes

--- 192.168.32.254 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
/etc/rc.d/init.d/cloud-functions: line 41: /etc/resolv.conf: No such file or directory
### pinging nameservers
### uname -a
Linux ttylinux_host 2.6.35-22-virtual #35-Ubuntu SMP Sat Oct 16 23:19:29 UTC 2010 x86_64 GNU/Linux
### lsmod
Module Size Used by
ip_tables 18737 0
x_tables 24391 1 ip_tables
pcnet32 36585 0
8139cp 20333 0
mii 5261 2 pcnet32,8139cp
ne2k_pci 7802 0
8390 9897 1 ne2k_pci
e1000 110274 0
acpiphp 18752 0
### dmesg | tail
<6>[ 0.969912] ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker
<6>[ 0.981222] 8139cp: 8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004)
<4>[ 0.981490] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
<6>[ 0.982525] 8139cp 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 11 (level, high) -> IRQ 11
<6>[ 0.986202] 8139cp 0000:00:03.0: eth0: RTL-8139C+ at 0xffffc900001dc000, 02:16:3e:23:50:01, IRQ 11
<7>[ 0.986240] 8139cp 0000:00:03.0: setting latency timer to 64
<6>[ 0.995496] pcnet32: pcnet32.c:v1.35 21.Apr.2008 <email address hidden>
<6>[ 1.014356] ip_tables: (C) 2000-2006 Netfilter Core Team
<6>[ 1.495942] eth0: link up, 100Mbps, full-duplex, lpa 0x05E1
<7>[ 12.190079] eth0: no IPv6 routers present
### tail -n 25 /var/log/messages
Apr 1 02:05:31 ttylinux_host syslog.info syslogd started: BusyBox v1.17.2
Apr 1 02:05:31 ttylinux_host user.info kernel: [ 1.495942] eth0: link up, 100Mbps, full-duplex, lpa 0x05E1
Apr 1 02:05:42 ttylinux_host user.debug kernel: [ 12.190079] eth0: no IPv6 routers present
Apr 1 02:07:17 ttylinux_host authpriv.info dropbear[267]: Running in background
############ debug end ##############
cloud-setup: failed to read iid from metadata. tried 30
stty: /dev/console
sshd is already running.
stty: /dev/console
startup inetd [ OK ]
stty: /dev/console
startup crond [ OK ]
wget: can't connect to remote host (169.254.169.254): No route to host
cloud-userdata: failed to read instance id
===== cloud-final: system completely up in 125.36 seconds ====
wget: can't connect to remote host (169.254.169.254): No route to host
wget: can't connect to remote host (169.254.169.254): No route to host
wget: can't connect to remote host (169.254.169.254): No route to host
  instance-id:
  public-ipv4:
  local-ipv4 :
=> First-Boot Sequence:
setting shared object cache [running ldconfig] [ OK ]
-----------------------------------------------------------------------------------------

----------------------------------------------------------------------------------------
# iptables-save:

# Generated by iptables-save v1.4.4 on Fri Apr 1 11:11:14 2011
*nat
:PREROUTING ACCEPT [236:77554]
:OUTPUT ACCEPT [9:540]
:POSTROUTING ACCEPT [11:1424]
:nova-compute-OUTPUT - [0:0]
:nova-compute-POSTROUTING - [0:0]
:nova-compute-PREROUTING - [0:0]
:nova-compute-floating-snat - [0:0]
:nova-compute-snat - [0:0]
:nova-network-OUTPUT - [0:0]
:nova-network-POSTROUTING - [0:0]
:nova-network-PREROUTING - [0:0]
:nova-network-floating-snat - [0:0]
:nova-network-snat - [0:0]
:nova-postrouting-bottom - [0:0]
-A PREROUTING -j nova-compute-PREROUTING
-A PREROUTING -j nova-network-PREROUTING
-A OUTPUT -j nova-compute-OUTPUT
-A OUTPUT -j nova-network-OUTPUT
-A POSTROUTING -j nova-compute-POSTROUTING
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
-A POSTROUTING -j nova-network-POSTROUTING
-A POSTROUTING -j nova-postrouting-bottom
-A nova-compute-snat -j nova-compute-floating-snat
-A nova-network-POSTROUTING -s 10.0.0.0/8 -d 10.128.0.0/24 -j ACCEPT
-A nova-network-POSTROUTING -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
-A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.32.202:8773
-A nova-network-snat -j nova-network-floating-snat
-A nova-network-snat -s 10.0.0.0/8 -j SNAT --to-source 192.168.32.202
-A nova-postrouting-bottom -j nova-compute-snat
-A nova-postrouting-bottom -j nova-network-snat
COMMIT
# Completed on Fri Apr 1 11:11:14 2011
# Generated by iptables-save v1.4.4 on Fri Apr 1 11:11:14 2011
*filter
:INPUT ACCEPT [127695:10670335]
:FORWARD ACCEPT [4:1776]
:OUTPUT ACCEPT [127613:10823851]
:nova-compute-FORWARD - [0:0]
:nova-compute-INPUT - [0:0]
:nova-compute-OUTPUT - [0:0]
:nova-compute-inst-1 - [0:0]
:nova-compute-local - [0:0]
:nova-compute-sg-fallback - [0:0]
:nova-filter-top - [0:0]
:nova-network-FORWARD - [0:0]
:nova-network-INPUT - [0:0]
:nova-network-OUTPUT - [0:0]
:nova-network-local - [0:0]
-A INPUT -j nova-compute-INPUT
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A INPUT -j nova-network-INPUT
-A FORWARD -j nova-filter-top
-A FORWARD -j nova-compute-FORWARD
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -j nova-network-FORWARD
-A OUTPUT -j nova-filter-top
-A OUTPUT -j nova-compute-OUTPUT
-A OUTPUT -j nova-network-OUTPUT
-A nova-compute-inst-1 -m state --state INVALID -j DROP
-A nova-compute-inst-1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A nova-compute-inst-1 -s 10.0.0.1/32 -p udp -m udp --sport 67 --dport 68 -j ACCEPT
-A nova-compute-inst-1 -s 10.0.0.0/28 -j ACCEPT
-A nova-compute-inst-1 -p icmp -j ACCEPT
-A nova-compute-inst-1 -p tcp -m tcp --dport 22 -j ACCEPT
-A nova-compute-inst-1 -j nova-compute-sg-fallback
-A nova-compute-local -d 10.0.0.2/32 -j nova-compute-inst-1
-A nova-compute-sg-fallback -j DROP
-A nova-filter-top -j nova-compute-local
-A nova-filter-top -j nova-network-local
COMMIT
# Completed on Fri Apr 1 11:11:14 2011
----------------------------------------------------------------------------

Revision history for this message
Thierry Carrez (ttx) wrote :

In FlatManager you need to explicitly route 169.254.169.254 to the API server, something like:
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <NOVA-API-SERVER-IP>:8773
Did you do that?

Changed in nova:
status: New → Incomplete
Revision history for this message
guanxiaohua2k6 (guanxiaohua2k6) wrote :

There is "-A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.32.202:8773" in my iptables rules, so I think I have already done that.

Revision history for this message
Thierry Carrez (ttx) wrote :

Might be a duplicate of bug 719798

Revision history for this message
guanxiaohua2k6 (guanxiaohua2k6) wrote :

In addition, I got the image with "wget http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz".

I have also tried the version in ppa:nova-core/release, and it failed too. Could anybody help me?

Changed in nova:
status: Incomplete → Invalid