Comment 20 for bug 1864822

James Denton (james-denton) wrote:

Yes, this is OVS+DVR w/ openstack-ansible (fairly recent master). Neutron 16.0.0.0b2.dev61.

I simulated the failure again by restarting openvswitch-switch. Here is an ongoing ping from the qdhcp namespace for a network mapped to VLAN 6, going out br-ex (172.23.208.1 is the gateway, a physical firewall):

root@aio1:~# ip netns exec qdhcp-04a47906-d8f2-4600-b1da-c5f15b89ed19 ping 172.23.208.1
PING 172.23.208.1 (172.23.208.1) 56(84) bytes of data.
64 bytes from 172.23.208.1: icmp_seq=1 ttl=255 time=1.49 ms
64 bytes from 172.23.208.1: icmp_seq=2 ttl=255 time=0.576 ms
64 bytes from 172.23.208.1: icmp_seq=3 ttl=255 time=0.511 ms
64 bytes from 172.23.208.1: icmp_seq=4 ttl=255 time=0.495 ms
64 bytes from 172.23.208.1: icmp_seq=5 ttl=255 time=0.608 ms
64 bytes from 172.23.208.1: icmp_seq=6 ttl=255 time=0.502 ms
64 bytes from 172.23.208.1: icmp_seq=7 ttl=255 time=0.496 ms
64 bytes from 172.23.208.1: icmp_seq=8 ttl=255 time=0.494 ms
64 bytes from 172.23.208.1: icmp_seq=9 ttl=255 time=0.486 ms
64 bytes from 172.23.208.1: icmp_seq=10 ttl=255 time=0.551 ms
64 bytes from 172.23.208.1: icmp_seq=11 ttl=255 time=0.513 ms
64 bytes from 172.23.208.1: icmp_seq=12 ttl=255 time=0.516 ms
^C
--- 172.23.208.1 ping statistics ---
32 packets transmitted, 12 received, 62% packet loss, time 31708ms
rtt min/avg/max/mdev = 0.486/0.603/1.495/0.272 ms
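
For completeness, the restart that triggers the loss is just the stock service restart (the unit name assumes the Ubuntu packaging used on this AIO), run while the ping above is going and while watching the flows on br-ex:

root@aio1:~# systemctl restart openvswitch-switch
root@aio1:~# watch -n1 'ovs-ofctl dump-flows br-ex'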

The packet loss began when I restarted OVS. You can see the drop flow w/ packets matched (dropped outbound ICMP):

Every 1.0s: ovs-ofctl dump-flows br-ex                                aio1: Thu Apr 9 14:37:05 2020

NXST_FLOW reply (xid=0x4):
 cookie=0x16c893e7b2dab29c, duration=110.848s, table=0, n_packets=16, n_bytes=1568, idle_age=95, priority=2,in_port=1 actions=drop
 cookie=0x16c893e7b2dab29c, duration=110.852s, table=0, n_packets=865, n_bytes=63243, idle_age=0, priority=0 actions=NORMAL
 cookie=0x16c893e7b2dab29c, duration=110.746s, table=2, n_packets=0, n_bytes=0, idle_age=110, priority=4,in_port=1,dl_vlan=6 actions=mod_vlan_vid:1041,NORMAL
 cookie=0x16c893e7b2dab29c, duration=110.574s, table=2, n_packets=0, n_bytes=0, idle_age=110, priority=4,in_port=1,dl_vlan=5 actions=mod_vlan_vid:6,NORMAL
 cookie=0x16c893e7b2dab29c, duration=110.568s, table=2, n_packets=0, n_bytes=0, idle_age=110, priority=4,in_port=1,dl_vlan=3 actions=mod_vlan_vid:4,NORMAL
 cookie=0x16c893e7b2dab29c, duration=110.562s, table=2, n_packets=0, n_bytes=0, idle_age=110, priority=4,in_port=1,dl_vlan=4 actions=mod_vlan_vid:1087,NORMAL
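
If anyone wants to check whether these flows are just stale rather than structurally wrong, one way is to force a full resync by restarting the agent and dumping the flows again to see whether traffic still hits the drop rule (the unit name below assumes the standard systemd naming and may differ in containerized deployments):

root@aio1:~# systemctl restart neutron-openvswitch-agent
root@aio1:~# ovs-ofctl dump-flows br-ex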

Note: This behavior is also seen on computes; I only have an AIO at the moment. On computes, the connection between the agent and ovsdb is lost, which seems to trigger this, rather than a forced restart of openvswitch.
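
For anyone chasing the compute-side variant: a quick way to see whether the local switch still has an OVSDB manager connection (typically how the agent's native OVSDB interface connects; deployments using the unix socket won't show anything useful here) is to check the manager status. This only shows whether something is connected to the configured targets, not which client:

root@compute:~# ovs-vsctl get-manager
root@compute:~# ovs-vsctl show | grep -A1 Manager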