VM on hostb cannot access dnsmasq on hosta to get an IP over a GRE tunnel with OVSQuantumTunnelAgent

Bug #996874 reported by yong sheng gong
This bug affects 1 person
Affects: neutron
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

I found the br-tun has the flows set:
[root@robinlinux ~]# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:0000b2dea5b34a44
n_tables:255, n_buffers:256
features: capabilities:0xc7, actions:0xfff
 1(patch-int): addr:aa:c9:ac:8c:be:a8
     config: 0
     state: 0
 2(gre-0): addr:76:4c:ca:ca:61:1c
     config: 0
     state: 0
 3(gre-1): addr:52:91:f7:6c:08:57
     config: 0
     state: 0
 LOCAL(br-tun): addr:b2:de:a5:b3:4a:44
     config: PORT_DOWN
     state: LINK_DOWN
OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0
[root@robinlinux ~]# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=66.985s, table=0, n_packets=0, n_bytes=0, priority=3,tun_id=0x2 actions=mod_vlan_vid:1,output:1
 cookie=0x0, duration=69.593s, table=0, n_packets=2, n_bytes=652, priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x2,NORMAL
 cookie=0x0, duration=1884.901s, table=0, n_packets=449, n_bytes=145796, priority=1 actions=drop

It seems we only have flows for the connection between br-int and br-tun, and no inbound/outbound flows for the GRE ports. So all traffic arriving on the GRE ports is dropped by the default drop rule (the third flow above).

I can make it work by:
1. removing the GRE port's in_key and out_key options
2. changing the default drop rule to a NORMAL rule
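For reference, the two workarounds correspond roughly to commands like these (a sketch, assuming the GRE port is named gre-0 as in the `ovs-ofctl show` output above; they are not taken verbatim from the report):

```shell
# 1. drop the GRE key options so the tunnel carries unkeyed traffic
ovs-vsctl remove interface gre-0 options in_key
ovs-vsctl remove interface gre-0 options out_key
# 2. replace the catch-all drop rule with a NORMAL (learning-switch) action
ovs-ofctl --strict del-flows br-tun priority=1
ovs-ofctl add-flow br-tun priority=1,actions=NORMAL
```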

Thanks

Revision history for this message
dan wendlandt (danwent) wrote :

What you show here seems correct. I would not make either of the changes you suggest, as they will break things.

It would be best if you could show me the "dump-flows" and "show" output for both "br-tun" and "br-int" on both hosts, and explain which two tap devices should be able to communicate, but can't.

Also, can you confirm that you are running the absolute latest from master? There was a recent commit that added a db.commit() call; without it, stale DB data can result in wiring issues.

Also posting logs from the agents can be helpful.

Revision history for this message
yong sheng gong (gongysh) wrote :

My configuration is:
Red Hat: openvswitch 1.4.0
Ubuntu: openvswitch 1.20

I found that if I add a flow on the Red Hat side, it works:
ovs-ofctl add-flow br-tun priority=2,in_port=2,actions=normal

Note: port 2 is the GRE port to Ubuntu.

The strange part is that this rule is not needed on the Ubuntu side. I mean I don't need to add such a rule on the Ubuntu side, and even if I add a drop rule on Ubuntu:
ovs-ofctl add-flow br-tun priority=2,in_port=2,actions=drop
it does not block the GRE traffic.

After that I decided to upgrade Open vSwitch on the Ubuntu side to the latest release, 1.4.1. With that, the default flows set up by our agents and the Quantum server are sufficient and correct.

And one more issue for our agent:
ovs-ofctl del-flows br-tun priority=1,xx=yy will not work. According to the ovs-ofctl man page, we need --strict if the priority keyword is in the match.
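Per the man page, the strict deletion would look like this (a sketch; xx=yy stands in for the rest of the match, as above):

```shell
ovs-ofctl --strict del-flows br-tun priority=1,xx=yy
```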

Revision history for this message
dan wendlandt (danwent) wrote : Re: [Bug 996874] Re: VM on hostb cannot access dnsmasq on hosta to get ip with gre tunnel by OVSQuantumTunnelAgent

On Sat, May 12, 2012 at 7:01 PM, yong sheng gong <email address hidden> wrote:

> My configuration is just like:
> redhat openvswitch 1.4.0
>

Did you install this yourself, or are you using the built-in OVS in the
Linux kernel (the default)? If the latter, this won't work,
as the in-kernel version of OVS does not have tunnel support.

> And one more issue for our agent:
> ovs-ofctl del-flows br-tun priority=1,xx=yy will not work. According to
> the ovs-ofctl man page, we need --strict if the priority keyword is in the match.
>

Interesting. Did you run into this in practice? I don't see a call to
delete_flows() that actually specifies a value for priority, but it
certainly seems worth fixing as one could be added in the future. Want to
file a bug and submit a patch?
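The guard Dan describes could be sketched as a small shell helper (a hypothetical wrapper, not the agent's actual delete_flows() code): pass --strict to ovs-ofctl only when the match string carries a priority.

```shell
# Build the ovs-ofctl deletion command, adding --strict when the
# match contains a priority (required by ovs-ofctl for such matches).
build_del_flows() {
    bridge=$1
    match=$2
    case $match in
        *priority=*) echo "ovs-ofctl --strict del-flows $bridge $match" ;;
        *)           echo "ovs-ofctl del-flows $bridge $match" ;;
    esac
}

build_del_flows br-tun "priority=1,in_port=2"
# prints: ovs-ofctl --strict del-flows br-tun priority=1,in_port=2
```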

Revision history for this message
dan wendlandt (danwent) wrote :

Yong, it seems you're no longer running into this issue? I'm closing it, but we can re-open it if it's still alive.

Changed in quantum:
status: New → Invalid