diff -Nru neutron-2014.1.2/AUTHORS neutron-2014.1.3/AUTHORS
--- neutron-2014.1.2/AUTHORS 2014-08-07 22:59:03.000000000 +0000
+++ neutron-2014.1.3/AUTHORS 2014-10-02 23:26:26.000000000 +0000
@@ -32,6 +32,8 @@
 Baodong Li
 Ben Nemec
 Ben Nemec
+Bernhard M. Wiedemann
+Bertrand Lallau
 Bhuvan Arumugam
 Bob Kukura
 Bob Melander
@@ -50,6 +52,7 @@
 Clark Boylan
 Claudiu Belu
 Clint Byrum
+Cédric Ollivier
 Dan Florea
 Dan Prince
 Dan Wendlandt
@@ -60,6 +63,7 @@
 Davanum Srinivas
 Dave Cahill
 Dave Lapsley
+Dave Tucker
 David Ripton
 Dazhao
 Debo
@@ -72,6 +76,7 @@
 Ed Bak
 Edgar Magana
 Edgar Magana
+Elena Ezhova
 Emilien Macchi
 EmilienM
 Eoghan Glynn
@@ -140,6 +145,7 @@
 John Dewey
 John Dunning
 John Jason Brzozowski
+John Schwarz
 Jon Grimm
 Jonathan LaCour
 Jordan Tardif
@@ -174,6 +180,7 @@
 Luke Gorrie
 Madhav Puri
 Major Hayden
+Marga Millet
 Mark McClain
 Mark McClain
 Mark McLoughlin
@@ -190,6 +197,7 @@
 Michael J Fork
 Michael Still
 Miguel Angel Ajo
+Mike Kolesnik
 Mohammad Banikazemi
 Monty Taylor
 Morgan Fainberg
@@ -268,6 +276,7 @@
 Sridhar S
 Stephen Gran
 Stephen Ma
+Steven Hillman
 Sudhakar
 Sudheendra Murthy
 Sumit Naiksatam
@@ -291,9 +300,11 @@
 Vishvananda Ishaya
 Wu Wenxiang
 Xiaolin Zhang
+Xu Chen
 Xuhan Peng
 YAMAMOTO Takashi
 Yaguang Tang
+Yang Yu
 Yang Yu
 YangLei
 Ying Liu
diff -Nru neutron-2014.1.2/ChangeLog neutron-2014.1.3/ChangeLog
--- neutron-2014.1.2/ChangeLog 2014-08-07 22:59:02.000000000 +0000
+++ neutron-2014.1.3/ChangeLog 2014-10-02 23:26:25.000000000 +0000
@@ -1,13 +1,56 @@
 CHANGES
 =======
 
+2014.1.3
+--------
+
+* Deletes floating ip related connection states
+* Forbid regular users to reset admin-only attrs to default values
+* Add delete operations for the ODL MechanismDriver
+* Add missing ml2 plugin to migration 1fcfc149aca4
+* Don't convert numeric protocol values to int
+* NSX: Optionally not enforce nat rule match length check
+* Don't spawn metadata-proxy for non-isolated nets
+* Big Switch: Check for 'id' in port before lookup
+* use TRUE in SQL for boolean var
+* call security_groups_member_updated in port_update
+* Don't allow user to set firewall rule with port and no protocol
+* BSN: Add context to backend request for debugging
+* Improve ODL ML2 Exception Handling
+* Send network name and uuid to subnet create
+* BSN: Allow concurrent reads to consistency DB
+* Big Switch: Retry on 503 errors from backend
+* NSX: log request body to NSX as debug
+* Fix metadata agent's auth info caching
+* NSX: Correct allowed_address_pair return value on create_port
+* Neutron should not use the neutronclient utils module for import_class
+* Cisco N1kv plugin to send subtype on network profile creation
+* Pass object to policy when finding fields to strip
+* Call policy.init() once per API request
+* Perform policy checks only once on list responses
+* Datacenter moid should not be tuple
+* Allow unsharing a network used as gateway/floatingip
+* Add support for router scheduling in Cisco N1kv Plugin
+* Fix func job hook script permission problems
+* Add hook scripts for the functional infra job
+* Fixes Hyper-V agent issue on Hyper-V 2008 R2
+* Fixes Hyper-V issue due to ML2 RPC versioning
+* Ensure ip6tables are used only if ipv6 is enabled in kernel
+* Remove explicit dependency on amqplib
+* Clear entries in Cisco N1KV specific tables on rollback
+* Bump stable/icehouse next version to 2014.1.3
+* Verify ML2 type driver exists before calling del
+
 2014.1.2
 --------
 
+* Big Switch: Only update hash header on success
+* Ignore variable column widths in ovsdb functional tests
 * Add dsvm-functional tox env to fix
functional job * Fix deprecated opt in haproxy driver * Add configurable http_timeout parameter for Cisco N1K * Updated from global requirements +* VMWare: don't notify on disassociate_floatingips() * Avoid notifying while inside transaction opened in delete_port() * BSN: Remove db lock and add missing contexts * Set python hash seed to 0 in tox.ini diff -Nru neutron-2014.1.2/debian/changelog neutron-2014.1.3/debian/changelog --- neutron-2014.1.2/debian/changelog 2014-08-21 13:58:28.000000000 +0000 +++ neutron-2014.1.3/debian/changelog 2014-10-21 16:00:38.000000000 +0000 @@ -1,14 +1,56 @@ -neutron (1:2014.1.2-0ubuntu1.1) trusty-security; urgency=medium +neutron (1:2014.1.3-0ubuntu1.1) trusty-security; urgency=medium * No change rebuild for security: - - [0324965] remove token from notifier middleware - + CVE-2014-4615 - + LP: #1321080 - - [2c4828e] no quota for allowed address pair - + CVE-2014-3555 - + LP: #1336207 + - [dd4b77f] Forbid regular users to reset admin-only attrs to default values + + CVE-2014-6414 + + LP: #1357379 - -- Jamie Strandboge Thu, 21 Aug 2014 08:54:55 -0500 + -- Marc Deslauriers Tue, 21 Oct 2014 11:59:06 -0400 + +neutron (1:2014.1.3-0ubuntu1) trusty; urgency=medium + + [ Corey Bryant ] + * Resynchronize with stable/icehouse (4a0210e) (LP: #1377136): + - [3a30d19] Deletes floating ip related connection states + - [dd4b77f] Forbid regular users to reset admin-only attrs to default values + - [dc2c893] Add delete operations for the ODL MechanismDriver + - [b51e2c7] Add missing ml2 plugin to migration 1fcfc149aca4 + - [a17a500] Don't convert numeric protocol values to int + - [3a85946] NSX: Optionally not enforce nat rule match length check + - [645f984] Don't spawn metadata-proxy for non-isolated nets + - [b464d89] Big Switch: Check for 'id' in port before lookup + - [3116ffa] use TRUE in SQL for boolean var + - [3520e66] call security_groups_member_updated in port_update + - [50e1534] Don't allow user to set firewall rule with port and no protocol + - [0061533] BSN: Add context to backend request for debugging + - [6de6d61] Improve ODL ML2 Exception Handling + - [2a4153d] Send network name and uuid to subnet create + - [b5e3c9a] BSN: Allow concurrent reads to consistency DB + - [b201432] Big Switch: Retry on 503 errors from backend + - [f6c47ee] NSX: log request body to NSX as debug + - [97d622a] Fix metadata agent's auth info caching + - [255df45] NSX: Correct allowed_address_pair return value on create_port + - [5bea041] Neutron should not use the neutronclient utils module for import_class + - [d5314e2] Cisco N1kv plugin to send subtype on network profile creation + - [f32d1ce] Pass object to policy when finding fields to strip + - [8b5f6be] Call policy.init() once per API request + - [9a6d811] Perform policy checks only once on list responses + - [c48db90] Datacenter moid should not be tuple + - [161d465] Allow unsharing a network used as gateway/floatingip + - [9574a2f] Add support for router scheduling in Cisco N1kv Plugin + - [6f54565] Fix func job hook script permission problems + - [ea43103] Add hook scripts for the functional infra job + - [8161cb7] Fixes Hyper-V agent issue on Hyper-V 2008 R2 + - [8e99cfd] Fixes Hyper-V issue due to ML2 RPC versioning + - [69f9121] Ensure ip6tables are used only if ipv6 is enabled in kernel + - [399b809] Remove explicit dependency on amqplib + - [a872143] Clear entries in Cisco N1KV specific tables on rollback + - [ad82fad] Verify ML2 type driver exists before calling del + - [af2cc98] Big Switch: Only update hash header on 
success + - [b1e5eec] Ignore variable column widths in ovsdb functional tests + - [4a0210e] VMWare: don't notify on disassociate_floatingips() + + -- Chuck Short Mon, 06 Oct 2014 09:15:06 -0400 neutron (1:2014.1.2-0ubuntu1) trusty; urgency=medium diff -Nru neutron-2014.1.2/etc/neutron/rootwrap.d/l3.filters neutron-2014.1.3/etc/neutron/rootwrap.d/l3.filters --- neutron-2014.1.2/etc/neutron/rootwrap.d/l3.filters 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/etc/neutron/rootwrap.d/l3.filters 2014-10-02 23:25:23.000000000 +0000 @@ -39,3 +39,6 @@ iptables-restore: CommandFilter, iptables-restore, root ip6tables-save: CommandFilter, ip6tables-save, root ip6tables-restore: CommandFilter, ip6tables-restore, root + +# l3 agent to delete floatingip's conntrack state +conntrack: CommandFilter, conntrack, root diff -Nru neutron-2014.1.2/etc/policy.json neutron-2014.1.3/etc/policy.json --- neutron-2014.1.2/etc/policy.json 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/etc/policy.json 2014-10-02 23:25:23.000000000 +0000 @@ -47,7 +47,6 @@ "create_port:port_security_enabled": "rule:admin_or_network_owner", "create_port:binding:host_id": "rule:admin_only", "create_port:binding:profile": "rule:admin_only", - "create_port:binding:vnic_type": "rule:admin_or_owner", "create_port:mac_learning_enabled": "rule:admin_or_network_owner", "get_port": "rule:admin_or_owner", "get_port:queue_id": "rule:admin_only", @@ -55,13 +54,11 @@ "get_port:binding:vif_details": "rule:admin_only", "get_port:binding:host_id": "rule:admin_only", "get_port:binding:profile": "rule:admin_only", - "get_port:binding:vnic_type": "rule:admin_or_owner", "update_port": "rule:admin_or_owner", "update_port:fixed_ips": "rule:admin_or_network_owner", "update_port:port_security_enabled": "rule:admin_or_network_owner", "update_port:binding:host_id": "rule:admin_only", "update_port:binding:profile": "rule:admin_only", - "update_port:binding:vnic_type": "rule:admin_or_owner", "update_port:mac_learning_enabled": "rule:admin_or_network_owner", "delete_port": "rule:admin_or_owner", @@ -83,8 +80,6 @@ "create_firewall_rule": "", "get_firewall_rule": "rule:admin_or_owner or rule:shared_firewalls", - "create_firewall_rule:shared": "rule:admin_or_owner", - "get_firewall_rule:shared": "rule:admin_or_owner", "update_firewall_rule": "rule:admin_or_owner", "delete_firewall_rule": "rule:admin_or_owner", diff -Nru neutron-2014.1.2/neutron/agent/dhcp_agent.py neutron-2014.1.3/neutron/agent/dhcp_agent.py --- neutron-2014.1.2/neutron/agent/dhcp_agent.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/agent/dhcp_agent.py 2014-10-02 23:25:23.000000000 +0000 @@ -211,12 +211,15 @@ if not network.admin_state_up: return + enable_metadata = self.dhcp_driver_cls.should_enable_metadata( + self.conf, network) + for subnet in network.subnets: - if subnet.enable_dhcp: + if subnet.enable_dhcp and subnet.ip_version == 4: if self.call_driver('enable', network): - if (self.conf.use_namespaces and - self.conf.enable_isolated_metadata): + if self.conf.use_namespaces and enable_metadata: self.enable_isolated_metadata_proxy(network) + enable_metadata = False # Don't trigger twice self.cache.put(network) break @@ -226,6 +229,10 @@ if network: if (self.conf.use_namespaces and self.conf.enable_isolated_metadata): + # NOTE(jschwarz): In the case where a network is deleted, all + # the subnets and ports are deleted before this function is + # called, so checking if 'should_enable_metadata' is True + # for any subnet is false logic here. 
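To illustrate the control flow the dhcp_agent.py change above introduces (ask the driver once per network whether an isolated-metadata proxy is needed, serve only IPv4 subnets with DHCP enabled, and spawn the proxy at most once), here is a minimal standalone sketch. The helper names and the namedtuple models are illustrative stand-ins, not the agent's real interfaces:

    from collections import namedtuple

    # Illustrative data model, not the agent's real network/subnet objects.
    Subnet = namedtuple('Subnet', ['enable_dhcp', 'ip_version'])
    Network = namedtuple('Network', ['admin_state_up', 'subnets'])

    def configure_dhcp(network, use_namespaces, should_enable_metadata,
                       call_driver, spawn_metadata_proxy):
        if not network.admin_state_up:
            return
        # Ask once per network, not once per subnet.
        enable_metadata = should_enable_metadata(network)
        for subnet in network.subnets:
            # Only IPv4 subnets with DHCP enabled are served here.
            if subnet.enable_dhcp and subnet.ip_version == 4:
                if (call_driver('enable', network) and use_namespaces
                        and enable_metadata):
                    spawn_metadata_proxy(network)  # at most once
                break

    # Example: a one-subnet network whose driver always succeeds.
    net = Network(True, [Subnet(True, 4)])
    configure_dhcp(net, True, lambda n: True, lambda op, n: True,
                   lambda n: None)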
self.disable_isolated_metadata_proxy(network) if self.call_driver('disable', network): self.cache.remove(network) diff -Nru neutron-2014.1.2/neutron/agent/l3_agent.py neutron-2014.1.3/neutron/agent/l3_agent.py --- neutron-2014.1.2/neutron/agent/l3_agent.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/agent/l3_agent.py 2014-10-02 23:25:23.000000000 +0000 @@ -25,6 +25,7 @@ from neutron.agent.linux import ovs_lib # noqa from neutron.agent import rpc as agent_rpc from neutron.common import constants as l3_constants +from neutron.common import ipv6_utils from neutron.common import legacy from neutron.common import topics from neutron.common import utils as common_utils @@ -98,7 +99,8 @@ class RouterInfo(object): - def __init__(self, router_id, root_helper, use_namespaces, router): + def __init__(self, router_id, root_helper, use_namespaces, router, + use_ipv6=False): self.router_id = router_id self.ex_gw_port = None self._snat_enabled = None @@ -112,7 +114,7 @@ self.ns_name = NS_PREFIX + router_id if use_namespaces else None self.iptables_manager = iptables_manager.IptablesManager( root_helper=root_helper, - #FIXME(danwent): use_ipv6=True, + use_ipv6=use_ipv6, namespace=self.ns_name) self.routes = [] @@ -225,6 +227,7 @@ super(L3NATAgent, self).__init__(conf=self.conf) self.target_ex_net_id = None + self.use_ipv6 = ipv6_utils.is_enabled() def _check_config_params(self): """Check items in configuration files. @@ -338,7 +341,8 @@ def _router_added(self, router_id, router): ri = RouterInfo(router_id, self.root_helper, - self.conf.use_namespaces, router) + self.conf.use_namespaces, router, + use_ipv6=self.use_ipv6) self.router_info[router_id] = ri if self.conf.use_namespaces: self._create_router_namespace(ri) @@ -608,6 +612,10 @@ if ip_cidr.endswith(FLOATING_IP_CIDR_SUFFIX): net = netaddr.IPNetwork(ip_cidr) device.addr.delete(net.version, ip_cidr) + self.driver.delete_conntrack_state( + root_helper=self.root_helper, + namespace=ri.ns_name, + ip=ip_cidr) return fip_statuses def _get_ex_gw_port(self, ri): diff -Nru neutron-2014.1.2/neutron/agent/linux/dhcp.py neutron-2014.1.3/neutron/agent/linux/dhcp.py --- neutron-2014.1.2/neutron/agent/linux/dhcp.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/agent/linux/dhcp.py 2014-10-02 23:25:23.000000000 +0000 @@ -148,6 +148,16 @@ raise NotImplementedError + @classmethod + def get_isolated_subnets(cls, network): + """Returns a dict indicating whether or not a subnet is isolated.""" + raise NotImplementedError + + @classmethod + def should_enable_metadata(cls, conf, network): + """True if the metadata-proxy should be enabled for the network.""" + raise NotImplementedError + class DhcpLocalProcess(DhcpBase): PORTS = [] @@ -514,6 +524,7 @@ options = [] + isolated_subnets = self.get_isolated_subnets(self.network) dhcp_ips = collections.defaultdict(list) subnet_idx_map = {} for i, subnet in enumerate(self.network.subnets): @@ -538,7 +549,9 @@ # Add host routes for isolated network segments - if self._enable_metadata(subnet): + if (isolated_subnets[subnet.id] and + self.conf.enable_isolated_metadata and + subnet.ip_version == 4): subnet_dhcp_ip = subnet_to_interface_ip[subnet.id] host_routes.append( '%s/32,%s' % (METADATA_DEFAULT_IP, subnet_dhcp_ip) @@ -623,25 +636,36 @@ return ','.join((set_tag + tag, '%s' % option) + args) - def _enable_metadata(self, subnet): - '''Determine if the metadata route will be pushed to hosts on subnet. 
+ @classmethod + def get_isolated_subnets(cls, network): + """Returns a dict indicating whether or not a subnet is isolated - If subnet has a Neutron router attached, we want the hosts to get - metadata from the router's proxy via their default route instead. - ''' - if self.conf.enable_isolated_metadata and subnet.ip_version == 4: - if subnet.gateway_ip is None: - return True - else: - for port in self.network.ports: - if port.device_owner == constants.DEVICE_OWNER_ROUTER_INTF: - for alloc in port.fixed_ips: - if alloc.subnet_id == subnet.id: - return False - return True - else: + A subnet is considered non-isolated if there is a port connected to + the subnet, and the port's ip address matches that of the subnet's + gateway. The port must be owned by a nuetron router. + """ + isolated_subnets = collections.defaultdict(lambda: True) + subnets = dict((subnet.id, subnet) for subnet in network.subnets) + + for port in network.ports: + if port.device_owner != constants.DEVICE_OWNER_ROUTER_INTF: + continue + for alloc in port.fixed_ips: + if subnets[alloc.subnet_id].gateway_ip == alloc.ip_address: + isolated_subnets[alloc.subnet_id] = False + + return isolated_subnets + + @classmethod + def should_enable_metadata(cls, conf, network): + """True if there exists a subnet for which a metadata proxy is needed + """ + if not conf.use_namespaces or not conf.enable_isolated_metadata: return False + isolated_subnets = cls.get_isolated_subnets(network) + return any(isolated_subnets[subnet.id] for subnet in network.subnets) + @classmethod def lease_update(cls): network_id = os.environ.get(cls.NEUTRON_NETWORK_ID_KEY) diff -Nru neutron-2014.1.2/neutron/agent/linux/interface.py neutron-2014.1.3/neutron/agent/linux/interface.py --- neutron-2014.1.2/neutron/agent/linux/interface.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/agent/linux/interface.py 2014-10-02 23:25:23.000000000 +0000 @@ -100,6 +100,46 @@ for ip_cidr, ip_version in previous.items(): if ip_cidr not in preserve_ips: device.addr.delete(ip_version, ip_cidr) + self.delete_conntrack_state(root_helper=self.root_helper, + namespace=namespace, + ip=ip_cidr) + + def delete_conntrack_state(self, root_helper, namespace, ip): + """Delete conntrack state associated with an IP address. + + This terminates any active connections through an IP. Call this soon + after removing the IP address from an interface so that new connections + cannot be created before the IP address is gone. + + root_helper: root_helper to gain root access to call conntrack + namespace: the name of the namespace where the IP has been configured + ip: the IP address for which state should be removed. This can be + passed as a string with or without /NN. A netaddr.IPAddress or + netaddr.Network representing the IP address can also be passed. 
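As a compact restatement of the isolation test described in the get_isolated_subnets docstring above (a subnet is non-isolated only when a router-interface port holds the subnet's gateway address), here is a sketch over plain dictionaries; the data shapes and the device-owner constant are assumptions for illustration, not the agent's actual objects:

    from collections import defaultdict

    ROUTER_INTERFACE = 'network:router_interface'  # assumed owner string

    def get_isolated_subnets(subnets, ports):
        """Return {subnet_id: True/False}; True means no router serves it.

        subnets: {subnet_id: {'gateway_ip': ...}}
        ports:   [{'device_owner': ..., 'fixed_ips':
                   [{'subnet_id': ..., 'ip_address': ...}]}]
        """
        isolated = defaultdict(lambda: True)
        for port in ports:
            if port['device_owner'] != ROUTER_INTERFACE:
                continue
            for alloc in port['fixed_ips']:
                subnet = subnets[alloc['subnet_id']]
                if subnet['gateway_ip'] == alloc['ip_address']:
                    isolated[alloc['subnet_id']] = False
        return isolated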
+ """ + ip_str = str(netaddr.IPNetwork(ip).ip) + ip_wrapper = ip_lib.IPWrapper(root_helper, namespace=namespace) + + # Delete conntrack state for ingress traffic + # If 0 flow entries have been deleted + # conntrack -D will return 1 + try: + ip_wrapper.netns.execute(["conntrack", "-D", "-d", ip_str], + check_exit_code=True, + extra_ok_codes=[1]) + + except RuntimeError: + LOG.exception(_("Failed deleting ingress connection state of" + " floatingip %s"), ip_str) + + # Delete conntrack state for egress traffic + try: + ip_wrapper.netns.execute(["conntrack", "-D", "-q", ip_str], + check_exit_code=True, + extra_ok_codes=[1]) + except RuntimeError: + LOG.exception(_("Failed deleting egress connection state of" + " floatingip %s"), ip_str) def check_bridge_exists(self, bridge): if not ip_lib.device_exists(bridge): diff -Nru neutron-2014.1.2/neutron/agent/linux/ip_lib.py neutron-2014.1.3/neutron/agent/linux/ip_lib.py --- neutron-2014.1.2/neutron/agent/linux/ip_lib.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/agent/linux/ip_lib.py 2014-10-02 23:25:23.000000000 +0000 @@ -450,7 +450,8 @@ def delete(self, name): self._as_root('delete', name, use_root_namespace=True) - def execute(self, cmds, addl_env={}, check_exit_code=True): + def execute(self, cmds, addl_env={}, check_exit_code=True, + extra_ok_codes=None): if not self._parent.root_helper: raise exceptions.SudoRequired() ns_params = [] @@ -464,7 +465,7 @@ return utils.execute( ns_params + env_params + list(cmds), root_helper=self._parent.root_helper, - check_exit_code=check_exit_code) + check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes) def exists(self, name): output = self._parent._execute('o', 'netns', ['list']) diff -Nru neutron-2014.1.2/neutron/agent/linux/iptables_firewall.py neutron-2014.1.3/neutron/agent/linux/iptables_firewall.py --- neutron-2014.1.2/neutron/agent/linux/iptables_firewall.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/agent/linux/iptables_firewall.py 2014-10-02 23:25:23.000000000 +0000 @@ -21,6 +21,7 @@ from neutron.agent import firewall from neutron.agent.linux import iptables_manager from neutron.common import constants +from neutron.common import ipv6_utils from neutron.openstack.common import log as logging @@ -43,7 +44,7 @@ def __init__(self): self.iptables = iptables_manager.IptablesManager( root_helper=cfg.CONF.AGENT.root_helper, - use_ipv6=True) + use_ipv6=ipv6_utils.is_enabled()) # list of port which has security group self.filtered_ports = {} self._add_fallback_chain_v4v6() diff -Nru neutron-2014.1.2/neutron/agent/linux/iptables_manager.py neutron-2014.1.3/neutron/agent/linux/iptables_manager.py --- neutron-2014.1.2/neutron/agent/linux/iptables_manager.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/agent/linux/iptables_manager.py 2014-10-02 23:25:23.000000000 +0000 @@ -599,8 +599,10 @@ cmd_tables = [('iptables', key) for key, table in self.ipv4.items() if name in table._select_chain_set(wrap)] - cmd_tables += [('ip6tables', key) for key, table in self.ipv6.items() - if name in table._select_chain_set(wrap)] + if self.use_ipv6: + cmd_tables += [('ip6tables', key) + for key, table in self.ipv6.items() + if name in table._select_chain_set(wrap)] return cmd_tables diff -Nru neutron-2014.1.2/neutron/agent/linux/utils.py neutron-2014.1.3/neutron/agent/linux/utils.py --- neutron-2014.1.2/neutron/agent/linux/utils.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/agent/linux/utils.py 2014-10-02 23:25:23.000000000 +0000 @@ -60,7 
+60,7 @@ def execute(cmd, root_helper=None, process_input=None, addl_env=None, - check_exit_code=True, return_stderr=False): + check_exit_code=True, return_stderr=False, extra_ok_codes=None): try: obj, cmd = create_process(cmd, root_helper=root_helper, addl_env=addl_env) @@ -71,7 +71,13 @@ m = _("\nCommand: %(cmd)s\nExit code: %(code)s\nStdout: %(stdout)r\n" "Stderr: %(stderr)r") % {'cmd': cmd, 'code': obj.returncode, 'stdout': _stdout, 'stderr': _stderr} + LOG.debug(m) + + extra_ok_codes = extra_ok_codes or [] + if obj.returncode and obj.returncode in extra_ok_codes: + obj.returncode = None + if obj.returncode and check_exit_code: raise RuntimeError(m) finally: diff -Nru neutron-2014.1.2/neutron/agent/metadata/agent.py neutron-2014.1.3/neutron/agent/metadata/agent.py --- neutron-2014.1.2/neutron/agent/metadata/agent.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/agent/metadata/agent.py 2014-10-02 23:25:23.000000000 +0000 @@ -132,6 +132,7 @@ internal_ports = qclient.list_ports( device_id=router_id, device_owner=n_const.DEVICE_OWNER_ROUTER_INTF)['ports'] + self.auth_info = qclient.get_auth_info() return tuple(p['network_id'] for p in internal_ports) @utils.cache_method_results @@ -145,9 +146,11 @@ """ qclient = self._get_neutron_client() - return qclient.list_ports( + all_ports = qclient.list_ports( network_id=networks, fixed_ips=['ip_address=%s' % remote_address])['ports'] + self.auth_info = qclient.get_auth_info() + return all_ports def _get_ports(self, remote_address, network_id=None, router_id=None): """Search for all ports that contain passed ip address and belongs to @@ -168,15 +171,12 @@ return self._get_ports_for_remote_address(remote_address, networks) def _get_instance_and_tenant_id(self, req): - qclient = self._get_neutron_client() - remote_address = req.headers.get('X-Forwarded-For') network_id = req.headers.get('X-Neutron-Network-ID') router_id = req.headers.get('X-Neutron-Router-ID') ports = self._get_ports(remote_address, network_id, router_id) - self.auth_info = qclient.get_auth_info() if len(ports) == 1: return ports[0]['device_id'], ports[0]['tenant_id'] return None, None diff -Nru neutron-2014.1.2/neutron/api/v2/base.py neutron-2014.1.3/neutron/api/v2/base.py --- neutron-2014.1.2/neutron/api/v2/base.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/api/v2/base.py 2014-10-02 23:25:23.000000000 +0000 @@ -126,41 +126,48 @@ % self._plugin.__class__.__name__) return getattr(self._plugin, native_sorting_attr_name, False) - def _is_visible(self, context, attr_name, data): - action = "%s:%s" % (self._plugin_handlers[self.SHOW], attr_name) - # Optimistically init authz_check to True - authz_check = True - try: - attr = (attributes.RESOURCE_ATTRIBUTE_MAP - [self._collection].get(attr_name)) - if attr and attr.get('enforce_policy'): - authz_check = policy.check_if_exists( - context, action, data) - except KeyError: - # The extension was not configured for adding its resources - # to the global resource attribute map. Policy check should - # not be performed - LOG.debug(_("The resource %(resource)s was not found in the " - "RESOURCE_ATTRIBUTE_MAP; unable to perform authZ " - "check for attribute %(attr)s"), - {'resource': self._collection, - 'attr': attr_name}) - except exceptions.PolicyRuleNotFound: - # Just ignore the exception. 
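The interface-driver and utils.execute() hunks above work together: the driver clears conntrack entries for a removed floating IP, and execute() gains an extra_ok_codes parameter because conntrack -D exits with 1 when nothing matched. A rough standalone sketch of that pattern, assuming root privileges and that the ip and conntrack binaries are installed; run() and delete_conntrack_state() are hypothetical helpers, not the agent's API:

    import subprocess

    def run(cmd, extra_ok_codes=()):
        """Run cmd; treat the listed non-zero exit codes as success."""
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode and proc.returncode not in extra_ok_codes:
            raise RuntimeError('%r failed (%s): %s'
                               % (cmd, proc.returncode, err))
        return out

    def delete_conntrack_state(namespace, ip):
        base = ['ip', 'netns', 'exec', namespace, 'conntrack', '-D']
        # conntrack returns 1 when zero entries matched; that is fine here.
        run(base + ['-d', ip], extra_ok_codes=(1,))  # ingress entries
        run(base + ['-q', ip], extra_ok_codes=(1,))  # reply/egress entries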
Do not even log it, as this will add - # a lot of unnecessary info in the log (and take time as well to - # write info to the logger) - pass - attr_val = self._attr_info.get(attr_name) - return attr_val and attr_val['is_visible'] and authz_check + def _exclude_attributes_by_policy(self, context, data): + """Identifies attributes to exclude according to authZ policies. + + Return a list of attribute names which should be stripped from the + response returned to the user because the user is not authorized + to see them. + """ + attributes_to_exclude = [] + for attr_name in data.keys(): + attr_data = self._attr_info.get(attr_name) + if attr_data and attr_data['is_visible']: + if policy.check( + context, + '%s:%s' % (self._plugin_handlers[self.SHOW], attr_name), + data, + might_not_exist=True): + # this attribute is visible, check next one + continue + # if the code reaches this point then either the policy check + # failed or the attribute was not visible in the first place + attributes_to_exclude.append(attr_name) + return attributes_to_exclude def _view(self, context, data, fields_to_strip=None): - # make sure fields_to_strip is iterable - if not fields_to_strip: - fields_to_strip = [] + """Build a view of an API resource. + :param context: the neutron context + :param data: the object for which a view is being created + :param fields_to_strip: attributes to remove from the view + + :returns: a view of the object which includes only attributes + visible according to API resource declaration and authZ policies. + """ + fields_to_strip = ((fields_to_strip or []) + + self._exclude_attributes_by_policy(context, data)) + return self._filter_attributes(context, data, fields_to_strip) + + def _filter_attributes(self, context, data, fields_to_strip=None): + if not fields_to_strip: + return data return dict(item for item in data.iteritems() - if (self._is_visible(context, item[0], data) and - item[0] not in fields_to_strip)) + if (item[0] not in fields_to_strip)) def _do_field_list(self, original_fields): fields_to_add = None @@ -175,6 +182,8 @@ if name in self._member_actions: def _handle_action(request, id, **kwargs): arg_list = [request.context, id] + # Ensure policy engine is initialized + policy.init() # Fetch the resource and verify if the user can access it try: resource = self._item(request, id, True) @@ -185,9 +194,6 @@ # Explicit comparison with None to distinguish from {} if body is not None: arg_list.append(body) - # TODO(salvatore-orlando): bp/make-authz-ortogonal - # The body of the action request should be included - # in the info passed to the policy engine # It is ok to raise a 403 because accessibility to the # object was checked earlier in this method policy.enforce(request.context, name, resource) @@ -246,14 +252,23 @@ self._plugin_handlers[self.SHOW], obj, plugin=self._plugin)] + # Use the first element in the list for discriminating which attributes + # should be filtered out because of authZ policies + # fields_to_add contains a list of attributes added for request policy + # checks but that were not required by the user. 
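The api/v2/base.py changes above compute the set of policy-hidden attributes once, from the first object of a list response, instead of re-checking every attribute of every object. A minimal sketch of that idea, where policy_check() is an illustrative stand-in for the policy engine rather than neutron's real policy.check API:

    def attributes_to_exclude(obj, action, policy_check, context):
        """Names of attributes the caller is not allowed to see."""
        return [name for name in obj
                if not policy_check(context, '%s:%s' % (action, name), obj)]

    def filtered_list_response(objs, action, policy_check, context):
        if not objs:
            return []
        # One set of policy checks, reused for every object in the list.
        to_strip = set(attributes_to_exclude(objs[0], action,
                                             policy_check, context))
        return [dict((k, v) for k, v in obj.items() if k not in to_strip)
                for obj in objs]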
They should be + # therefore stripped + fields_to_strip = fields_to_add or [] + if obj_list: + fields_to_strip += self._exclude_attributes_by_policy( + request.context, obj_list[0]) collection = {self._collection: - [self._view(request.context, obj, - fields_to_strip=fields_to_add) + [self._filter_attributes( + request.context, obj, + fields_to_strip=fields_to_strip) for obj in obj_list]} pagination_links = pagination_helper.get_links(obj_list) if pagination_links: collection[self._collection + "_links"] = pagination_links - return collection def _item(self, request, id, do_authz=False, field_list=None, @@ -284,6 +299,8 @@ def index(self, request, **kwargs): """Returns a list of the requested entity.""" parent_id = kwargs.get(self._parent_id_name) + # Ensure policy engine is initialized + policy.init() return self._items(request, True, parent_id) def show(self, request, id, **kwargs): @@ -295,6 +312,8 @@ field_list, added_fields = self._do_field_list( api_common.list_args(request, "fields")) parent_id = kwargs.get(self._parent_id_name) + # Ensure policy engine is initialized + policy.init() return {self._resource: self._view(request.context, self._item(request, @@ -316,9 +335,12 @@ kwargs = {self._resource: item} if parent_id: kwargs[self._parent_id_name] = parent_id - objs.append(self._view(request.context, - obj_creator(request.context, - **kwargs))) + fields_to_strip = self._exclude_attributes_by_policy( + request.context, item) + objs.append(self._filter_attributes( + request.context, + obj_creator(request.context, **kwargs), + fields_to_strip=fields_to_strip)) return objs # Note(salvatore-orlando): broad catch as in theory a plugin # could raise any kind of exception @@ -363,6 +385,8 @@ else: items = [body] bulk = False + # Ensure policy engine is initialized + policy.init() for item in items: self._validate_network_tenant_ownership(request, item[self._resource]) @@ -405,8 +429,13 @@ # plugin does atomic bulk create operations obj_creator = getattr(self._plugin, "%s_bulk" % action) objs = obj_creator(request.context, body, **kwargs) - return notify({self._collection: [self._view(request.context, obj) - for obj in objs]}) + # Use first element of list to discriminate attributes which + # should be removed because of authZ policies + fields_to_strip = self._exclude_attributes_by_policy( + request.context, objs[0]) + return notify({self._collection: [self._filter_attributes( + request.context, obj, fields_to_strip=fields_to_strip) + for obj in objs]}) else: obj_creator = getattr(self._plugin, action) if self._collection in body: @@ -420,8 +449,8 @@ self._nova_notifier.send_network_change( action, {}, {self._resource: obj}) - return notify({self._resource: self._view(request.context, - obj)}) + return notify({self._resource: self._view( + request.context, obj)}) def delete(self, request, id, **kwargs): """Deletes the specified entity.""" @@ -433,6 +462,7 @@ action = self._plugin_handlers[self.DELETE] # Check authz + policy.init() parent_id = kwargs.get(self._parent_id_name) obj = self._item(request, id, parent_id=parent_id) try: @@ -484,10 +514,16 @@ if (value.get('required_by_policy') or value.get('primary_key') or 'default' not in value)] + # Ensure policy engine is initialized + policy.init() orig_obj = self._item(request, id, field_list=field_list, parent_id=parent_id) orig_object_copy = copy.copy(orig_obj) orig_obj.update(body[self._resource]) + # Make a list of attributes to be updated to inform the policy engine + # which attributes are set explicitly so that it can distinguish 
them + # from the ones that are set to their default values. + orig_obj[const.ATTRIBUTES_TO_UPDATE] = body[self._resource].keys() try: policy.enforce(request.context, action, diff -Nru neutron-2014.1.2/neutron/common/constants.py neutron-2014.1.3/neutron/common/constants.py --- neutron-2014.1.2/neutron/common/constants.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/common/constants.py 2014-10-02 23:25:23.000000000 +0000 @@ -115,3 +115,5 @@ DHCPV6_STATELESS = 'dhcpv6-stateless' IPV6_SLAAC = 'slaac' IPV6_MODES = [DHCPV6_STATEFUL, DHCPV6_STATELESS, IPV6_SLAAC] + +ATTRIBUTES_TO_UPDATE = 'attributes_to_update' diff -Nru neutron-2014.1.2/neutron/common/exceptions.py neutron-2014.1.3/neutron/common/exceptions.py --- neutron-2014.1.2/neutron/common/exceptions.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/common/exceptions.py 2014-10-02 23:25:23.000000000 +0000 @@ -96,10 +96,6 @@ message = _("Policy configuration policy.json could not be found") -class PolicyRuleNotFound(NotFound): - message = _("Requested rule:%(rule)s cannot be found") - - class PolicyInitError(NeutronException): message = _("Failed to init policy %(policy)s because %(reason)s") diff -Nru neutron-2014.1.2/neutron/common/ipv6_utils.py neutron-2014.1.3/neutron/common/ipv6_utils.py --- neutron-2014.1.2/neutron/common/ipv6_utils.py 2014-08-07 22:55:55.000000000 +0000 +++ neutron-2014.1.3/neutron/common/ipv6_utils.py 2014-10-02 23:25:23.000000000 +0000 @@ -20,6 +20,9 @@ import netaddr +_IS_IPV6_ENABLED = None + + def get_ipv6_addr_by_EUI64(prefix, mac): # Check if the prefix is IPv4 address isIPv4 = netaddr.valid_ipv4(prefix) @@ -37,3 +40,14 @@ except TypeError: raise TypeError(_('Bad prefix type for generate IPv6 address by ' 'EUI-64: %s') % prefix) + + +def is_enabled(): + global _IS_IPV6_ENABLED + + if _IS_IPV6_ENABLED is None: + disabled_ipv6_path = "/proc/sys/net/ipv6/conf/default/disable_ipv6" + with open(disabled_ipv6_path, 'r') as f: + disabled = f.read().strip() + _IS_IPV6_ENABLED = disabled == "0" + return _IS_IPV6_ENABLED diff -Nru neutron-2014.1.2/neutron/db/db_base_plugin_v2.py neutron-2014.1.3/neutron/db/db_base_plugin_v2.py --- neutron-2014.1.2/neutron/db/db_base_plugin_v2.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/db/db_base_plugin_v2.py 2014-10-02 23:25:23.000000000 +0000 @@ -20,6 +20,7 @@ import netaddr from oslo.config import cfg +from sqlalchemy import and_ from sqlalchemy import event from sqlalchemy import orm from sqlalchemy.orm import exc @@ -822,7 +823,12 @@ return ports = self._model_query( context, models_v2.Port).filter( - models_v2.Port.network_id == id) + and_( + models_v2.Port.network_id == id, + models_v2.Port.device_owner != + constants.DEVICE_OWNER_ROUTER_GW, + models_v2.Port.device_owner != + constants.DEVICE_OWNER_FLOATINGIP)) subnets = self._model_query( context, models_v2.Subnet).filter( models_v2.Subnet.network_id == id) diff -Nru neutron-2014.1.2/neutron/db/firewall/firewall_db.py neutron-2014.1.3/neutron/db/firewall/firewall_db.py --- neutron-2014.1.2/neutron/db/firewall/firewall_db.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/db/firewall/firewall_db.py 2014-10-02 23:25:23.000000000 +0000 @@ -342,6 +342,9 @@ LOG.debug(_("create_firewall_rule() called")) fwr = firewall_rule['firewall_rule'] tenant_id = self._get_tenant_id_for_create(context, fwr) + if not fwr['protocol'] and (fwr['source_port'] or + fwr['destination_port']): + raise firewall.FirewallRuleWithPortWithoutProtocolInvalid() src_port_min, 
src_port_max = self._get_min_max_ports_from_range( fwr['source_port']) dst_port_min, dst_port_max = self._get_min_max_ports_from_range( @@ -383,6 +386,14 @@ del fwr['destination_port'] with context.session.begin(subtransactions=True): fwr_db = self._get_firewall_rule(context, id) + protocol = fwr.get('protocol', fwr_db['protocol']) + if not protocol: + sport = fwr.get('source_port_range_min', + fwr_db['source_port_range_min']) + dport = fwr.get('destination_port_range_min', + fwr_db['destination_port_range_min']) + if sport or dport: + raise firewall.FirewallRuleWithPortWithoutProtocolInvalid() fwr_db.update(fwr) if fwr_db.firewall_policy_id: fwp_db = self._get_firewall_policy(context, diff -Nru neutron-2014.1.2/neutron/db/migration/alembic_migrations/versions/1fcfc149aca4_agents_unique_by_type_and_host.py neutron-2014.1.3/neutron/db/migration/alembic_migrations/versions/1fcfc149aca4_agents_unique_by_type_and_host.py --- neutron-2014.1.2/neutron/db/migration/alembic_migrations/versions/1fcfc149aca4_agents_unique_by_type_and_host.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/db/migration/alembic_migrations/versions/1fcfc149aca4_agents_unique_by_type_and_host.py 2014-10-02 23:25:23.000000000 +0000 @@ -31,6 +31,7 @@ 'neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2', 'neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2', 'neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2', + 'neutron.plugins.ml2.plugin.Ml2Plugin', 'neutron.plugins.nec.nec_plugin.NECPluginV2', 'neutron.plugins.nicira.NeutronPlugin.NvpPluginV2', 'neutron.plugins.nicira.NeutronServicePlugin.NvpAdvancedPlugin', diff -Nru neutron-2014.1.2/neutron/db/migration/migrate_to_ml2.py neutron-2014.1.3/neutron/db/migration/migrate_to_ml2.py --- neutron-2014.1.2/neutron/db/migration/migrate_to_ml2.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/db/migration/migrate_to_ml2.py 2014-10-02 23:25:23.000000000 +0000 @@ -170,7 +170,7 @@ INSERT INTO ml2_vlan_allocations SELECT physical_network, vlan_id, allocated FROM %(source_table)s - WHERE allocated = 1 + WHERE allocated = TRUE """) % {'source_table': self.vlan_allocation_table_name}) def get_port_segment_map(self, engine): @@ -389,7 +389,7 @@ INSERT INTO ml2_gre_allocations SELECT tunnel_id as gre_id, allocated FROM ovs_tunnel_allocations - WHERE allocated = 1 + WHERE allocated = TRUE """) engine.execute(""" INSERT INTO ml2_gre_endpoints @@ -403,7 +403,7 @@ INSERT INTO ml2_vxlan_allocations SELECT tunnel_id as vxlan_vni, allocated FROM ovs_tunnel_allocations - WHERE allocated = 1 + WHERE allocated = TRUE """) engine.execute(sa.text(""" INSERT INTO ml2_vxlan_endpoints diff -Nru neutron-2014.1.2/neutron/debug/shell.py neutron-2014.1.3/neutron/debug/shell.py --- neutron-2014.1.2/neutron/debug/shell.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/debug/shell.py 2014-10-02 23:25:23.000000000 +0000 @@ -25,21 +25,20 @@ from neutron.debug.debug_agent import NeutronDebugAgent from neutron.openstack.common import importutils from neutronclient.common import exceptions as exc -from neutronclient.common import utils from neutronclient.shell import env, NeutronShell, NEUTRON_API_VERSION COMMAND_V2 = { - 'probe-create': utils.import_class( + 'probe-create': importutils.import_class( 'neutron.debug.commands.CreateProbe'), - 'probe-delete': utils.import_class( + 'probe-delete': importutils.import_class( 'neutron.debug.commands.DeleteProbe'), - 'probe-list': utils.import_class( + 'probe-list': 
importutils.import_class( 'neutron.debug.commands.ListProbe'), - 'probe-clear': utils.import_class( + 'probe-clear': importutils.import_class( 'neutron.debug.commands.ClearProbe'), - 'probe-exec': utils.import_class( + 'probe-exec': importutils.import_class( 'neutron.debug.commands.ExecProbe'), - 'ping-all': utils.import_class( + 'ping-all': importutils.import_class( 'neutron.debug.commands.PingAll'), #TODO(nati) ping, netcat , nmap, bench } diff -Nru neutron-2014.1.2/neutron/extensions/firewall.py neutron-2014.1.3/neutron/extensions/firewall.py --- neutron-2014.1.2/neutron/extensions/firewall.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/extensions/firewall.py 2014-10-02 23:25:23.000000000 +0000 @@ -80,6 +80,10 @@ "Only action values %(values)s are supported.") +class FirewallRuleWithPortWithoutProtocolInvalid(qexception.InvalidInput): + message = _("Source/destination port requires a protocol") + + class FirewallInvalidPortValue(qexception.InvalidInput): message = _("Invalid value for port %(port)s.") diff -Nru neutron-2014.1.2/neutron/extensions/securitygroup.py neutron-2014.1.3/neutron/extensions/securitygroup.py --- neutron-2014.1.2/neutron/extensions/securitygroup.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/extensions/securitygroup.py 2014-10-02 23:25:23.000000000 +0000 @@ -114,7 +114,7 @@ try: val = int(value) if val >= 0 and val <= 255: - return val + return value raise SecurityGroupRuleInvalidProtocol( protocol=value, values=sg_supported_protocols) except (ValueError, TypeError): diff -Nru neutron-2014.1.2/neutron/plugins/bigswitch/db/consistency_db.py neutron-2014.1.3/neutron/plugins/bigswitch/db/consistency_db.py --- neutron-2014.1.2/neutron/plugins/bigswitch/db/consistency_db.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/bigswitch/db/consistency_db.py 2014-10-02 23:25:18.000000000 +0000 @@ -14,7 +14,6 @@ # under the License. import sqlalchemy as sa -from neutron.common import exceptions from neutron.db import api as db from neutron.db import model_base from neutron.openstack.common import log as logging @@ -22,10 +21,6 @@ LOG = logging.getLogger(__name__) -class MultipleReadForUpdateCalls(exceptions.NeutronException): - message = _("Only one read_for_update call may be made at a time.") - - class ConsistencyHash(model_base.BASEV2): ''' A simple table to store the latest consistency hash @@ -41,27 +36,23 @@ class HashHandler(object): ''' - A wrapper object to keep track of the session and hold the SQL - lock between the read and the update to prevent other servers - from reading the hash during a transaction. + A wrapper object to keep track of the session between the read + and the update operations. ''' def __init__(self, context=None, hash_id='1'): self.hash_id = hash_id self.session = db.get_session() if not context else context.session self.hash_db_obj = None - self.transaction = None def read_for_update(self): - if self.transaction: - raise MultipleReadForUpdateCalls() - self.transaction = self.session.begin(subtransactions=True) # REVISIT(kevinbenton): locking here with the DB is prone to deadlocks # in various multi-REST-call scenarios (router intfs, flips, etc). # Since it doesn't work in Galera deployments anyway, another sync # mechanism will have to be introduced to prevent inefficient double # syncs in HA deployments. - res = (self.session.query(ConsistencyHash). 
- filter_by(hash_id=self.hash_id).first()) + with self.session.begin(subtransactions=True): + res = (self.session.query(ConsistencyHash). + filter_by(hash_id=self.hash_id).first()) if not res: return '' self.hash_db_obj = res @@ -69,14 +60,11 @@ def put_hash(self, hash): hash = hash or '' - if not self.transaction: - self.transaction = self.session.begin(subtransactions=True) - if self.hash_db_obj is not None: - self.hash_db_obj.hash = hash - else: - conhash = ConsistencyHash(hash_id=self.hash_id, hash=hash) - self.session.merge(conhash) - self.transaction.commit() - self.transaction = None + with self.session.begin(subtransactions=True): + if self.hash_db_obj is not None: + self.hash_db_obj.hash = hash + else: + conhash = ConsistencyHash(hash_id=self.hash_id, hash=hash) + self.session.merge(conhash) LOG.debug(_("Consistency hash for group %(hash_id)s updated " "to %(hash)s"), {'hash_id': self.hash_id, 'hash': hash}) diff -Nru neutron-2014.1.2/neutron/plugins/bigswitch/plugin.py neutron-2014.1.3/neutron/plugins/bigswitch/plugin.py --- neutron-2014.1.2/neutron/plugins/bigswitch/plugin.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/bigswitch/plugin.py 2014-10-02 23:25:23.000000000 +0000 @@ -362,8 +362,10 @@ # In ML2, the host_id is already populated if portbindings.HOST_ID in port: hostid = port[portbindings.HOST_ID] - else: + elif 'id' in port: hostid = porttracker_db.get_port_hostid(context, port['id']) + else: + hostid = None if hostid: port[portbindings.HOST_ID] = hostid override = self._check_hostvif_override(hostid) @@ -451,7 +453,9 @@ def put_context_in_serverpool(f): @functools.wraps(f) def wrapper(self, context, *args, **kwargs): - self.servers.set_context(context) + # core plugin: context is top level object + # ml2: keeps context in _plugin_context + self.servers.set_context(getattr(context, '_plugin_context', context)) return f(self, context, *args, **kwargs) return wrapper diff -Nru neutron-2014.1.2/neutron/plugins/bigswitch/servermanager.py neutron-2014.1.3/neutron/plugins/bigswitch/servermanager.py --- neutron-2014.1.2/neutron/plugins/bigswitch/servermanager.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/bigswitch/servermanager.py 2014-10-02 23:25:23.000000000 +0000 @@ -37,6 +37,7 @@ import os import socket import ssl +import time import weakref import eventlet @@ -71,8 +72,11 @@ BASE_URI = '/networkService/v1.1' ORCHESTRATION_SERVICE_ID = 'Neutron v2.0' HASH_MATCH_HEADER = 'X-BSN-BVS-HASH-MATCH' +REQ_CONTEXT_HEADER = 'X-REQ-CONTEXT' # error messages NXNETWORK = 'NXVNS' +HTTP_SERVICE_UNAVAILABLE_RETRY_COUNT = 3 +HTTP_SERVICE_UNAVAILABLE_RETRY_INTERVAL = 3 class RemoteRestError(exceptions.NeutronException): @@ -122,12 +126,11 @@ 'cap': self.capabilities}) return self.capabilities - def rest_call(self, action, resource, data='', headers={}, timeout=False, - reconnect=False, hash_handler=None): + def rest_call(self, action, resource, data='', headers=None, + timeout=False, reconnect=False, hash_handler=None): uri = self.base_uri + resource body = json.dumps(data) - if not headers: - headers = {} + headers = headers or {} headers['Content-type'] = 'application/json' headers['Accept'] = 'application/json' headers['NeutronProxy-Agent'] = self.name @@ -185,10 +188,13 @@ try: self.currentconn.request(action, uri, body, headers) response = self.currentconn.getresponse() - hash_handler.put_hash(response.getheader(HASH_MATCH_HEADER)) respstr = response.read() respdata = respstr if response.status in self.success_codes: + 
hash_value = response.getheader(HASH_MATCH_HEADER) + # don't clear hash from DB if a hash header wasn't present + if hash_value is not None: + hash_handler.put_hash(hash_value) try: respdata = json.loads(respstr) except ValueError: @@ -409,14 +415,24 @@ @utils.synchronized('bsn-rest-call') def rest_call(self, action, resource, data, headers, ignore_codes, timeout=False): - hash_handler = cdb.HashHandler(context=self.get_context_ref()) + context = self.get_context_ref() + if context: + # include the requesting context information if available + cdict = context.to_dict() + headers[REQ_CONTEXT_HEADER] = json.dumps(cdict) + hash_handler = cdb.HashHandler(context=context) good_first = sorted(self.servers, key=lambda x: x.failed) first_response = None for active_server in good_first: - ret = active_server.rest_call(action, resource, data, headers, - timeout, - reconnect=self.always_reconnect, - hash_handler=hash_handler) + for x in range(HTTP_SERVICE_UNAVAILABLE_RETRY_COUNT + 1): + ret = active_server.rest_call(action, resource, data, headers, + timeout, + reconnect=self.always_reconnect, + hash_handler=hash_handler) + if ret[0] != httplib.SERVICE_UNAVAILABLE: + break + time.sleep(HTTP_SERVICE_UNAVAILABLE_RETRY_INTERVAL) + # If inconsistent, do a full synchronization if ret[0] == httplib.CONFLICT: if not self.get_topo_function: @@ -458,13 +474,15 @@ return first_response def rest_action(self, action, resource, data='', errstr='%s', - ignore_codes=[], headers={}, timeout=False): + ignore_codes=None, headers=None, timeout=False): """ Wrapper for rest_call that verifies success and raises a RemoteRestError on failure with a provided error string By default, 404 errors on DELETE calls are ignored because they already do not exist on the backend. """ + ignore_codes = ignore_codes or [] + headers = headers or {} if not ignore_codes and action == 'DELETE': ignore_codes = [404] resp = self.rest_call(action, resource, data, headers, ignore_codes, diff -Nru neutron-2014.1.2/neutron/plugins/cisco/db/n1kv_db_v2.py neutron-2014.1.3/neutron/plugins/cisco/db/n1kv_db_v2.py --- neutron-2014.1.2/neutron/plugins/cisco/db/n1kv_db_v2.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/cisco/db/n1kv_db_v2.py 2014-10-02 23:25:23.000000000 +0000 @@ -720,6 +720,7 @@ network_id=network_id, port_count=port_count) db_session.add(vm_network) + return vm_network def update_vm_network_port_count(db_session, name, port_count): diff -Nru neutron-2014.1.2/neutron/plugins/cisco/n1kv/n1kv_client.py neutron-2014.1.3/neutron/plugins/cisco/n1kv/n1kv_client.py --- neutron-2014.1.2/neutron/plugins/cisco/n1kv/n1kv_client.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/cisco/n1kv/n1kv_client.py 2014-10-02 23:25:23.000000000 +0000 @@ -274,6 +274,8 @@ 'id': network_profile['id'], 'logicalNetwork': logical_network_name, 'tenantId': tenant_id} + if network_profile['segment_type'] == c_const.NETWORK_TYPE_OVERLAY: + body['subType'] = network_profile['sub_type'] return self._post( self.network_segment_pool_path % network_profile['id'], body=body) @@ -331,6 +333,8 @@ 'dhcp': subnet['enable_dhcp'], 'dnsServersList': subnet['dns_nameservers'], 'networkAddress': network_address, + 'netSegmentName': subnet['network_id'], + 'id': subnet['id'], 'tenantId': subnet['tenant_id']} return self._post(self.ip_pool_path % subnet['id'], body=body) diff -Nru neutron-2014.1.2/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py neutron-2014.1.3/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py --- 
neutron-2014.1.2/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py 2014-10-02 23:25:23.000000000 +0000 @@ -21,6 +21,8 @@ import eventlet +from oslo.config import cfg as q_conf + from neutron.api.rpc.agentnotifiers import dhcp_rpc_agent_api from neutron.api.rpc.agentnotifiers import l3_rpc_agent_api from neutron.api.v2 import attributes @@ -35,11 +37,13 @@ from neutron.db import dhcp_rpc_base from neutron.db import external_net_db from neutron.db import extraroute_db -from neutron.db import l3_db +from neutron.db import l3_agentschedulers_db from neutron.db import l3_rpc_base from neutron.db import portbindings_db from neutron.extensions import portbindings from neutron.extensions import providernet +from neutron.openstack.common import excutils +from neutron.openstack.common import importutils from neutron.openstack.common import log as logging from neutron.openstack.common import rpc from neutron.openstack.common import uuidutils as uuidutils @@ -78,12 +82,12 @@ class N1kvNeutronPluginV2(db_base_plugin_v2.NeutronDbPluginV2, external_net_db.External_net_db_mixin, extraroute_db.ExtraRoute_db_mixin, - l3_db.L3_NAT_db_mixin, portbindings_db.PortBindingMixin, n1kv_db_v2.NetworkProfile_db_mixin, n1kv_db_v2.PolicyProfile_db_mixin, network_db_v2.Credential_db_mixin, - agentschedulers_db.AgentSchedulerDbMixin): + l3_agentschedulers_db.L3AgentSchedulerDbMixin, + agentschedulers_db.DhcpAgentSchedulerDbMixin): """ Implement the Neutron abstractions using Cisco Nexus1000V. @@ -99,7 +103,9 @@ supported_extension_aliases = ["provider", "agent", "n1kv", "network_profile", "policy_profile", "external-net", "router", - "binding", "credential"] + "binding", "credential", + "l3_agent_scheduler", + "dhcp_agent_scheduler"] def __init__(self, configfile=None): """ @@ -119,6 +125,12 @@ c_cred.Store.initialize() self._setup_vsm() self._setup_rpc() + self.network_scheduler = importutils.import_object( + q_conf.CONF.network_scheduler_driver + ) + self.router_scheduler = importutils.import_object( + q_conf.CONF.router_scheduler_driver + ) def _setup_rpc(self): # RPC support @@ -967,7 +979,8 @@ self._send_create_network_request(context, net, segment_pairs) except(cisco_exceptions.VSMError, cisco_exceptions.VSMConnectionFailed): - super(N1kvNeutronPluginV2, self).delete_network(context, net['id']) + with excutils.save_and_reraise_exception(): + self._delete_network_db(context, net['id']) else: LOG.debug(_("Created network: %s"), net['id']) return net @@ -1039,7 +1052,6 @@ """ session = context.session with session.begin(subtransactions=True): - binding = n1kv_db_v2.get_network_binding(session, id) network = self.get_network(context, id) if n1kv_db_v2.is_trunk_member(session, id): msg = _("Cannot delete network '%s' " @@ -1049,16 +1061,22 @@ msg = _("Cannot delete network '%s' that is a member of a " "multi-segment network") % network['name'] raise n_exc.InvalidInput(error_message=msg) + self._delete_network_db(context, id) + # the network_binding record is deleted via cascade from + # the network record, so explicit removal is not necessary + self._send_delete_network_request(context, network) + LOG.debug("Deleted network: %s", id) + + def _delete_network_db(self, context, id): + session = context.session + with session.begin(subtransactions=True): + binding = n1kv_db_v2.get_network_binding(session, id) if binding.network_type == c_const.NETWORK_TYPE_OVERLAY: n1kv_db_v2.release_vxlan(session, 
binding.segmentation_id) elif binding.network_type == c_const.NETWORK_TYPE_VLAN: n1kv_db_v2.release_vlan(session, binding.physical_network, binding.segmentation_id) super(N1kvNeutronPluginV2, self).delete_network(context, id) - # the network_binding record is deleted via cascade from - # the network record, so explicit removal is not necessary - self._send_delete_network_request(context, network) - LOG.debug(_("Deleted network: %s"), id) def get_network(self, context, id, fields=None): """ @@ -1113,12 +1131,15 @@ """ p_profile = None port_count = None + vm_network = None vm_network_name = None profile_id_set = False # Set the network policy profile id for auto generated L3/DHCP ports if ('device_id' in port['port'] and port['port']['device_owner'] in - [constants.DEVICE_OWNER_DHCP, constants.DEVICE_OWNER_ROUTER_INTF]): + [constants.DEVICE_OWNER_DHCP, constants.DEVICE_OWNER_ROUTER_INTF, + constants.DEVICE_OWNER_ROUTER_GW, + constants.DEVICE_OWNER_FLOATINGIP]): p_profile_name = c_conf.CISCO_N1K.network_node_policy_profile p_profile = self._get_policy_profile_by_name(p_profile_name) if p_profile: @@ -1156,11 +1177,11 @@ profile_id, pt['network_id']) port_count = 1 - n1kv_db_v2.add_vm_network(context.session, - vm_network_name, - profile_id, - pt['network_id'], - port_count) + vm_network = n1kv_db_v2.add_vm_network(context.session, + vm_network_name, + profile_id, + pt['network_id'], + port_count) else: # Update port count of the VM network. vm_network_name = vm_network['name'] @@ -1182,7 +1203,8 @@ vm_network_name) except(cisco_exceptions.VSMError, cisco_exceptions.VSMConnectionFailed): - super(N1kvNeutronPluginV2, self).delete_port(context, pt['id']) + with excutils.save_and_reraise_exception(): + self._delete_port_db(context, pt, vm_network) else: LOG.debug(_("Created port: %s"), pt) return pt @@ -1221,6 +1243,16 @@ vm_network = n1kv_db_v2.get_vm_network(context.session, port[n1kv.PROFILE_ID], port['network_id']) + router_ids = self.disassociate_floatingips( + context, id, do_notify=False) + self._delete_port_db(context, port, vm_network) + + # now that we've left db transaction, we are safe to notify + self.notify_routers_updated(context, router_ids) + self._send_delete_port_request(context, port, vm_network) + + def _delete_port_db(self, context, port, vm_network): + with context.session.begin(subtransactions=True): vm_network['port_count'] -= 1 n1kv_db_v2.update_vm_network_port_count(context.session, vm_network['name'], @@ -1229,14 +1261,7 @@ n1kv_db_v2.delete_vm_network(context.session, port[n1kv.PROFILE_ID], port['network_id']) - router_ids = self.disassociate_floatingips( - context, id, do_notify=False) - super(N1kvNeutronPluginV2, self).delete_port(context, id) - - # now that we've left db transaction, we are safe to notify - self.notify_routers_updated(context, router_ids) - - self._send_delete_port_request(context, port, vm_network) + super(N1kvNeutronPluginV2, self).delete_port(context, port['id']) def get_port(self, context, id, fields=None): """ @@ -1289,7 +1314,9 @@ self._send_create_subnet_request(context, sub) except(cisco_exceptions.VSMError, cisco_exceptions.VSMConnectionFailed): - super(N1kvNeutronPluginV2, self).delete_subnet(context, sub['id']) + with excutils.save_and_reraise_exception(): + super(N1kvNeutronPluginV2, + self).delete_subnet(context, sub['id']) else: LOG.debug(_("Created subnet: %s"), sub['id']) return sub @@ -1380,17 +1407,17 @@ context.tenant_id) except(cisco_exceptions.VSMError, cisco_exceptions.VSMConnectionFailed): - 
n1kv_db_v2.delete_profile_binding(context.session, - context.tenant_id, - net_p['id']) + with excutils.save_and_reraise_exception(): + super(N1kvNeutronPluginV2, + self).delete_network_profile(context, net_p['id']) try: self._send_create_network_profile_request(context, net_p) except(cisco_exceptions.VSMError, cisco_exceptions.VSMConnectionFailed): - n1kv_db_v2.delete_profile_binding(context.session, - context.tenant_id, - net_p['id']) - self._send_delete_logical_network_request(net_p) + with excutils.save_and_reraise_exception(): + super(N1kvNeutronPluginV2, + self).delete_network_profile(context, net_p['id']) + self._send_delete_logical_network_request(net_p) return net_p def delete_network_profile(self, context, id): @@ -1423,3 +1450,20 @@ network_profile)) self._send_update_network_profile_request(net_p) return net_p + + def create_router(self, context, router): + """ + Handle creation of router. + + Schedule router to L3 agent as part of the create handling. + :param context: neutron api request context + :param router: router dictionary + :returns: router object + """ + session = context.session + with session.begin(subtransactions=True): + rtr = (super(N1kvNeutronPluginV2, self). + create_router(context, router)) + LOG.debug(_("Scheduling router %s"), rtr['id']) + self.schedule_router(context, rtr['id']) + return rtr diff -Nru neutron-2014.1.2/neutron/plugins/hyperv/agent/hyperv_neutron_agent.py neutron-2014.1.3/neutron/plugins/hyperv/agent/hyperv_neutron_agent.py --- neutron-2014.1.2/neutron/plugins/hyperv/agent/hyperv_neutron_agent.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/hyperv/agent/hyperv_neutron_agent.py 2014-10-02 23:25:23.000000000 +0000 @@ -118,8 +118,8 @@ class HyperVNeutronAgent(object): - # Set RPC API version to 1.0 by default. - RPC_API_VERSION = '1.0' + # Set RPC API version to 1.1 by default. + RPC_API_VERSION = '1.1' def __init__(self): self._utils = utilsfactory.get_hypervutils() diff -Nru neutron-2014.1.2/neutron/plugins/hyperv/agent/utils.py neutron-2014.1.3/neutron/plugins/hyperv/agent/utils.py --- neutron-2014.1.2/neutron/plugins/hyperv/agent/utils.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/hyperv/agent/utils.py 2014-10-02 23:25:23.000000000 +0000 @@ -169,6 +169,9 @@ msg=_('Failed creating port for %s') % vswitch_name) return new_port + def remove_all_security_rules(self, switch_port_name): + pass + def disconnect_switch_port( self, vswitch_name, switch_port_name, delete_port): """Disconnects the switch port.""" diff -Nru neutron-2014.1.2/neutron/plugins/ml2/drivers/mechanism_odl.py neutron-2014.1.3/neutron/plugins/ml2/drivers/mechanism_odl.py --- neutron-2014.1.2/neutron/plugins/ml2/drivers/mechanism_odl.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/ml2/drivers/mechanism_odl.py 2014-10-02 23:25:23.000000000 +0000 @@ -39,10 +39,6 @@ ODL_PORT = 'port' ODL_PORTS = 'ports' -not_found_exception_map = {ODL_NETWORKS: n_exc.NetworkNotFound, - ODL_SUBNETS: n_exc.SubnetNotFound, - ODL_PORTS: n_exc.PortNotFound} - odl_opts = [ cfg.StrOpt('url', help=_("HTTP URL of OpenDaylight REST interface.")), @@ -68,6 +64,10 @@ pass +class OpendaylightAuthError(n_exc.NeutronException): + message = '%(msg)s' + + class JsessionId(requests.auth.AuthBase): """Attaches the JSESSIONID and JSESSIONIDSSO cookies to an HTTP Request. 
@@ -95,8 +95,16 @@ def obtain_auth_cookies(self): """Make a REST call to obtain cookies for ODL authenticiation.""" - r = requests.get(self.url, auth=(self.username, self.password)) - r.raise_for_status() + try: + r = requests.get(self.url, auth=(self.username, self.password)) + r.raise_for_status() + except requests.exceptions.HTTPError as e: + raise OpendaylightAuthError(msg=_("Failed to authenticate with " + "OpenDaylight: %s") % e) + except requests.exceptions.Timeout as e: + raise OpendaylightAuthError(msg=_("Authentication Timed" + " Out: %s") % e) + jsessionid = r.cookies.get('JSESSIONID') jsessionidsso = r.cookies.get('JSESSIONIDSSO') if jsessionid and jsessionidsso: @@ -167,7 +175,7 @@ if self.out_of_sync: self.sync_full(context) else: - self.sync_object(operation, object_type, context) + self.sync_single_resource(operation, object_type, context) def filter_create_network_attributes(self, network, context, dbcontext): """Filter out network attributes not required for a create.""" @@ -199,21 +207,24 @@ urlpath = collection_name + '/' + resource['id'] self.sendjson('get', urlpath, None) except requests.exceptions.HTTPError as e: - if e.response.status_code == 404: - attr_filter(resource, context, dbcontext) - to_be_synced.append(resource) + with excutils.save_and_reraise_exception() as ctx: + if e.response.status_code == requests.codes.not_found: + attr_filter(resource, context, dbcontext) + to_be_synced.append(resource) + ctx.reraise = False key = resource_name if len(to_be_synced) == 1 else collection_name # 400 errors are returned if an object exists, which we ignore. - self.sendjson('post', collection_name, {key: to_be_synced}, [400]) + self.sendjson('post', collection_name, {key: to_be_synced}, + [requests.codes.bad_request]) @utils.synchronized('odl-sync-full') def sync_full(self, context): """Resync the entire database to ODL. Transition to the in-sync state on success. - Note: we only allow a single thead in here at a time. + Note: we only allow a single thread in here at a time. """ if not self.out_of_sync: return @@ -256,49 +267,34 @@ ODL_SUBNETS: filter_update_subnet_attributes, ODL_PORTS: filter_update_port_attributes} - def sync_single_resource(self, operation, object_type, obj_id, - context, attr_filter_create, attr_filter_update): + def sync_single_resource(self, operation, object_type, context): """Sync over a single resource from Neutron to OpenDaylight. Handle syncing a single operation over to OpenDaylight, and correctly filter attributes out which are not required for the requisite operation (create or update) being handled. 
""" - dbcontext = context._plugin_context - if operation == 'create': - urlpath = object_type - method = 'post' - else: - urlpath = object_type + '/' + obj_id - method = 'put' - try: - obj_getter = getattr(context._plugin, 'get_%s' % object_type[:-1]) - resource = obj_getter(dbcontext, obj_id) - except not_found_exception_map[object_type]: - LOG.debug(_('%(object_type)s not found (%(obj_id)s)'), - {'object_type': object_type.capitalize(), - 'obj_id': obj_id}) - else: - if operation == 'create': - attr_filter_create(self, resource, context, dbcontext) - elif operation == 'update': - attr_filter_update(self, resource, context, dbcontext) - try: + obj_id = context.current['id'] + if operation == 'delete': + self.sendjson('delete', object_type + '/' + obj_id, None) + else: + if operation == 'create': + urlpath = object_type + method = 'post' + attr_filter = self.create_object_map[object_type] + elif operation == 'update': + urlpath = object_type + '/' + obj_id + method = 'put' + attr_filter = self.update_object_map[object_type] + resource = context.current.copy() + attr_filter(self, resource, context, context._plugin_context) # 400 errors are returned if an object exists, which we ignore. self.sendjson(method, urlpath, {object_type[:-1]: resource}, - [400]) - except Exception: - with excutils.save_and_reraise_exception(): - self.out_of_sync = True - - def sync_object(self, operation, object_type, context): - """Synchronize the single modified record to ODL.""" - obj_id = context.current['id'] - - self.sync_single_resource(operation, object_type, obj_id, context, - self.create_object_map[object_type], - self.update_object_map[object_type]) + [requests.codes.bad_request]) + except Exception: + with excutils.save_and_reraise_exception(): + self.out_of_sync = True def add_security_groups(self, context, dbcontext, port): """Populate the 'security_groups' field with entire records.""" diff -Nru neutron-2014.1.2/neutron/plugins/ml2/drivers/mech_bigswitch/driver.py neutron-2014.1.3/neutron/plugins/ml2/drivers/mech_bigswitch/driver.py --- neutron-2014.1.2/neutron/plugins/ml2/drivers/mech_bigswitch/driver.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/ml2/drivers/mech_bigswitch/driver.py 2014-10-02 23:25:23.000000000 +0000 @@ -26,15 +26,16 @@ from neutron.extensions import portbindings from neutron.openstack.common import log from neutron.plugins.bigswitch import config as pl_config -from neutron.plugins.bigswitch.plugin import NeutronRestProxyV2Base +from neutron.plugins.bigswitch import plugin from neutron.plugins.bigswitch import servermanager from neutron.plugins.ml2 import driver_api as api LOG = log.getLogger(__name__) +put_context_in_serverpool = plugin.put_context_in_serverpool -class BigSwitchMechanismDriver(NeutronRestProxyV2Base, +class BigSwitchMechanismDriver(plugin.NeutronRestProxyV2Base, api.MechanismDriver): """Mechanism Driver for Big Switch Networks Controller. 
@@ -61,18 +62,22 @@ self.segmentation_types = ', '.join(cfg.CONF.ml2.type_drivers) LOG.debug(_("Initialization done")) + @put_context_in_serverpool def create_network_postcommit(self, context): # create network on the network controller self._send_create_network(context.current) + @put_context_in_serverpool def update_network_postcommit(self, context): # update network on the network controller self._send_update_network(context.current) + @put_context_in_serverpool def delete_network_postcommit(self, context): # delete network on the network controller self._send_delete_network(context.current) + @put_context_in_serverpool def create_port_postcommit(self, context): # create port on the network controller port = self._prepare_port_for_controller(context) @@ -80,6 +85,7 @@ self.async_port_create(port["network"]["tenant_id"], port["network"]["id"], port) + @put_context_in_serverpool def update_port_postcommit(self, context): # update port on the network controller port = self._prepare_port_for_controller(context) @@ -87,6 +93,7 @@ self.servers.rest_update_port(port["network"]["tenant_id"], port["network"]["id"], port) + @put_context_in_serverpool def delete_port_postcommit(self, context): # delete port on the network controller port = context.current diff -Nru neutron-2014.1.2/neutron/plugins/ml2/managers.py neutron-2014.1.3/neutron/plugins/ml2/managers.py --- neutron-2014.1.2/neutron/plugins/ml2/managers.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/ml2/managers.py 2014-10-02 23:25:23.000000000 +0000 @@ -98,6 +98,16 @@ def release_segment(self, session, segment): network_type = segment.get(api.NETWORK_TYPE) driver = self.drivers.get(network_type) + # ML2 may have been reconfigured since the segment was created, + # so a driver may no longer exist for this network_type. + # REVISIT: network_type-specific db entries may become orphaned + # if a network is deleted and the driver isn't available to release + # the segment. This may be fixed with explicit foreign-key references + # or consistency checks on driver initialization. 
+ if not driver: + LOG.error(_("Failed to release segment '%s' because " + "network type is not supported."), segment) + return driver.obj.release_segment(session, segment) diff -Nru neutron-2014.1.2/neutron/plugins/openvswitch/ovs_neutron_plugin.py neutron-2014.1.3/neutron/plugins/openvswitch/ovs_neutron_plugin.py --- neutron-2014.1.2/neutron/plugins/openvswitch/ovs_neutron_plugin.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/openvswitch/ovs_neutron_plugin.py 2014-10-02 23:25:23.000000000 +0000 @@ -44,6 +44,7 @@ from neutron.extensions import extra_dhcp_opt as edo_ext from neutron.extensions import portbindings from neutron.extensions import providernet as provider +from neutron.extensions import securitygroup as ext_sg from neutron import manager from neutron.openstack.common import importutils from neutron.openstack.common import log as logging @@ -604,8 +605,9 @@ need_port_update_notify |= self._update_extra_dhcp_opts_on_port( context, id, port, updated_port) - need_port_update_notify |= self.is_security_group_member_updated( + secgrp_member_updated = self.is_security_group_member_updated( context, original_port, updated_port) + need_port_update_notify |= secgrp_member_updated if original_port['admin_state_up'] != updated_port['admin_state_up']: need_port_update_notify = True @@ -616,6 +618,14 @@ binding.network_type, binding.segmentation_id, binding.physical_network) + + if secgrp_member_updated: + old_set = set(original_port.get(ext_sg.SECURITYGROUPS)) + new_set = set(updated_port.get(ext_sg.SECURITYGROUPS)) + self.notifier.security_groups_member_updated( + context, + old_set ^ new_set) + return updated_port def delete_port(self, context, id, l3_port_check=True): diff -Nru neutron-2014.1.2/neutron/plugins/vmware/api_client/request.py neutron-2014.1.3/neutron/plugins/vmware/api_client/request.py --- neutron-2014.1.2/neutron/plugins/vmware/api_client/request.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/vmware/api_client/request.py 2014-10-02 23:25:23.000000000 +0000 @@ -88,8 +88,10 @@ return error url = self._url - LOG.debug(_("[%(rid)d] Issuing - request %(conn)s"), - {'rid': self._rid(), 'conn': self._request_str(conn, url)}) + LOG.debug(_("[%(rid)d] Issuing - request url: %(conn)s " + "body: %(body)s"), + {'rid': self._rid(), 'conn': self._request_str(conn, url), + 'body': self._body}) issued_time = time.time() is_conn_error = False is_conn_service_unavail = False diff -Nru neutron-2014.1.2/neutron/plugins/vmware/nsxlib/router.py neutron-2014.1.3/neutron/plugins/vmware/nsxlib/router.py --- neutron-2014.1.2/neutron/plugins/vmware/nsxlib/router.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/vmware/nsxlib/router.py 2014-10-02 23:25:23.000000000 +0000 @@ -544,6 +544,7 @@ def delete_nat_rules_by_match(cluster, router_id, rule_type, max_num_expected, min_num_expected=0, + raise_on_len_mismatch=True, **kwargs): # remove nat rules nat_rules = query_nat_rules(cluster, router_id) @@ -557,14 +558,26 @@ break else: to_delete_ids.append(r['uuid']) - if not (len(to_delete_ids) in - range(min_num_expected, max_num_expected + 1)): - raise nsx_exc.NatRuleMismatch(actual_rules=len(to_delete_ids), - min_rules=min_num_expected, - max_rules=max_num_expected) + num_rules_to_delete = len(to_delete_ids) + if (num_rules_to_delete < min_num_expected or + num_rules_to_delete > max_num_expected): + if raise_on_len_mismatch: + raise nsx_exc.NatRuleMismatch(actual_rules=num_rules_to_delete, + min_rules=min_num_expected, + 
max_rules=max_num_expected) + else: + LOG.warn(_("Found %(actual_rule_num)d matching NAT rules, which " + "is not in the expected range (%(min_exp_rule_num)d," + "%(max_exp_rule_num)d)"), + {'actual_rule_num': num_rules_to_delete, + 'min_exp_rule_num': min_num_expected, + 'max_exp_rule_num': max_num_expected}) for rule_id in to_delete_ids: delete_router_nat_rule(cluster, router_id, rule_id) + # Return number of deleted rules - useful at least for + # testing purposes + return num_rules_to_delete def delete_router_nat_rule(cluster, router_id, rule_id): diff -Nru neutron-2014.1.2/neutron/plugins/vmware/plugins/base.py neutron-2014.1.3/neutron/plugins/vmware/plugins/base.py --- neutron-2014.1.2/neutron/plugins/vmware/plugins/base.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/vmware/plugins/base.py 2014-10-02 23:25:23.000000000 +0000 @@ -298,6 +298,7 @@ routerlib.delete_nat_rules_by_match( self.cluster, nsx_router_id, "SourceNatRule", max_num_expected=1, min_num_expected=0, + raise_on_len_mismatch=False, source_ip_addresses=cidr) if add_snat_rules: ip_addresses = self._build_ip_address_list( @@ -1153,7 +1154,7 @@ port_data[addr_pair.ADDRESS_PAIRS]) else: # remove ATTR_NOT_SPECIFIED - port_data[addr_pair.ADDRESS_PAIRS] = None + port_data[addr_pair.ADDRESS_PAIRS] = [] # security group extension checks if port_security and has_ip: @@ -1682,6 +1683,7 @@ routerlib.delete_nat_rules_by_match( self.cluster, nsx_router_id, "SourceNatRule", max_num_expected=1, min_num_expected=1, + raise_on_len_mismatch=False, source_ip_addresses=subnet['cidr']) def add_router_interface(self, context, router_id, interface_info): @@ -1786,6 +1788,7 @@ routerlib.delete_nat_rules_by_match( self.cluster, nsx_router_id, "NoSourceNatRule", max_num_expected=1, min_num_expected=0, + raise_on_len_mismatch=False, destination_ip_addresses=subnet['cidr']) except n_exc.NotFound: LOG.error(_("Logical router resource %s not found " @@ -2007,7 +2010,11 @@ except n_exc.NotFound: LOG.warning(_("Nat rules not found in nsx for port: %s"), id) - super(NsxPluginV2, self).disassociate_floatingips(context, port_id) + # NOTE(ihrachys): L3 agent notifications don't make sense for + # NSX VMWare plugin since there is no L3 agent in such setup, so + # disabling them here. + super(NsxPluginV2, self).disassociate_floatingips( + context, port_id, do_notify=False) def create_network_gateway(self, context, network_gateway): """Create a layer-2 network gateway. diff -Nru neutron-2014.1.2/neutron/plugins/vmware/vshield/edge_appliance_driver.py neutron-2014.1.3/neutron/plugins/vmware/vshield/edge_appliance_driver.py --- neutron-2014.1.2/neutron/plugins/vmware/vshield/edge_appliance_driver.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/plugins/vmware/vshield/edge_appliance_driver.py 2014-10-02 23:25:23.000000000 +0000 @@ -65,7 +65,7 @@ edge['appliances']['deploymentContainerId'] = ( deployment_container_id) if datacenter_moid: - edge['datacenterMoid'] = datacenter_moid, + edge['datacenterMoid'] = datacenter_moid return edge diff -Nru neutron-2014.1.2/neutron/policy.py neutron-2014.1.3/neutron/policy.py --- neutron-2014.1.2/neutron/policy.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/policy.py 2014-10-02 23:25:23.000000000 +0000 @@ -18,12 +18,15 @@ """ Policy engine for neutron. Largely copied from nova. 
""" + +import collections import itertools import re from oslo.config import cfg from neutron.api.v2 import attributes +from neutron.common import constants as const from neutron.common import exceptions import neutron.common.utils as utils from neutron import manager @@ -119,14 +122,28 @@ policy.set_rules(policies) -def _is_attribute_explicitly_set(attribute_name, resource, target): - """Verify that an attribute is present and has a non-default value.""" +def _is_attribute_explicitly_set(attribute_name, resource, target, action): + """Verify that an attribute is present and is explicitly set.""" + if 'update' in action: + # In the case of update, the function should not pay attention to a + # default value of an attribute, but check whether it was explicitly + # marked as being updated instead. + return (attribute_name in target[const.ATTRIBUTES_TO_UPDATE] and + target[attribute_name] is not attributes.ATTR_NOT_SPECIFIED) return ('default' in resource[attribute_name] and attribute_name in target and target[attribute_name] is not attributes.ATTR_NOT_SPECIFIED and target[attribute_name] != resource[attribute_name]['default']) +def _should_validate_sub_attributes(attribute, sub_attr): + """Verify that sub-attributes are iterable and should be validated.""" + validate = attribute.get('validate') + return (validate and isinstance(sub_attr, collections.Iterable) and + any([k.startswith('type:dict') and + v for (k, v) in validate.iteritems()])) + + def _build_subattr_match_rule(attr_name, attr, action, target): """Create the rule to match for sub-attribute policy checks.""" # TODO(salv-orlando): Instead of relying on validator info, introduce @@ -164,7 +181,6 @@ action is being executed (e.g.: create_router:external_gateway_info:network_id) """ - match_rule = policy.RuleCheck('rule', action) resource, is_write = get_resource_and_action(action) # Attribute-based checks shall not be enforced on GETs @@ -175,16 +191,14 @@ for attribute_name in res_map[resource]: if _is_attribute_explicitly_set(attribute_name, res_map[resource], - target): + target, action): attribute = res_map[resource][attribute_name] if 'enforce_policy' in attribute: attr_rule = policy.RuleCheck('rule', '%s:%s' % (action, attribute_name)) - # Build match entries for sub-attributes, if present - validate = attribute.get('validate') - if (validate and any([k.startswith('type:dict') and v - for (k, v) in - validate.iteritems()])): + # Build match entries for sub-attributes + if _should_validate_sub_attributes( + attribute, target[attribute_name]): attr_rule = policy.AndCheck( [attr_rule, _build_subattr_match_rule( attribute_name, attribute, @@ -317,7 +331,6 @@ def _prepare_check(context, action, target): """Prepare rule, target, and credentials for the policy engine.""" - init() # Compare with None to distinguish case in which target is {} if target is None: target = {} @@ -326,7 +339,7 @@ return match_rule, target, credentials -def check(context, action, target, plugin=None): +def check(context, action, target, plugin=None, might_not_exist=False): """Verifies that the action is valid on the target in this context. :param context: neutron context @@ -337,25 +350,14 @@ location of the object e.g. ``{'project_id': context.project_id}`` :param plugin: currently unused and deprecated. Kept for backward compatibility. + :param might_not_exist: If True the policy check is skipped (and the + function returns True) if the specified policy does not exist. + Defaults to false. :return: Returns True if access is permitted else False. 
""" - return policy.check(*(_prepare_check(context, action, target))) - - -def check_if_exists(context, action, target): - """Verify if the action can be authorized, and raise if it is unknown. - - Check whether the action can be performed on the target within this - context, and raise a PolicyRuleNotFound exception if the action is - not defined in the policy engine. - """ - # TODO(salvatore-orlando): Consider modifying oslo policy engine in - # order to allow to raise distinct exception when check fails and - # when policy is missing - # Raise if there's no match for requested action in the policy engine - if not policy._rules or action not in policy._rules: - raise exceptions.PolicyRuleNotFound(rule=action) + if might_not_exist and not (policy._rules and action in policy._rules): + return True return policy.check(*(_prepare_check(context, action, target))) @@ -374,7 +376,6 @@ :raises neutron.exceptions.PolicyNotAuthorized: if verification fails. """ - init() rule, target, credentials = _prepare_check(context, action, target) result = policy.check(rule, target, credentials, action=action) if not result: diff -Nru neutron-2014.1.2/neutron/services/metering/drivers/iptables/iptables_driver.py neutron-2014.1.3/neutron/services/metering/drivers/iptables/iptables_driver.py --- neutron-2014.1.2/neutron/services/metering/drivers/iptables/iptables_driver.py 2014-08-07 22:55:55.000000000 +0000 +++ neutron-2014.1.3/neutron/services/metering/drivers/iptables/iptables_driver.py 2014-10-02 23:25:23.000000000 +0000 @@ -20,6 +20,7 @@ from neutron.agent.linux import interface from neutron.agent.linux import iptables_manager from neutron.common import constants as constants +from neutron.common import ipv6_utils from neutron.common import log from neutron.openstack.common import importutils from neutron.openstack.common import log as logging @@ -74,7 +75,8 @@ self.iptables_manager = iptables_manager.IptablesManager( root_helper=self.root_helper, namespace=self.ns_name, - binary_name=WRAP_NAME) + binary_name=WRAP_NAME, + use_ipv6=ipv6_utils.is_enabled()) self.metering_labels = {} diff -Nru neutron-2014.1.2/neutron/tests/functional/agent/linux/test_ovsdb_monitor.py neutron-2014.1.3/neutron/tests/functional/agent/linux/test_ovsdb_monitor.py --- neutron-2014.1.2/neutron/tests/functional/agent/linux/test_ovsdb_monitor.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/functional/agent/linux/test_ovsdb_monitor.py 2014-10-02 23:25:23.000000000 +0000 @@ -111,7 +111,10 @@ while True: output = list(self.monitor.iter_stdout()) if output: - return output[0] + # Output[0] is header row with spaces for column separation. + # The column widths can vary depending on the data in the + # columns, so compress multiple spaces to one for testing. + return ' '.join(output[0].split()) eventlet.sleep(0.01) def test_killed_monitor_respawns(self): diff -Nru neutron-2014.1.2/neutron/tests/functional/contrib/filters.template neutron-2014.1.3/neutron/tests/functional/contrib/filters.template --- neutron-2014.1.2/neutron/tests/functional/contrib/filters.template 1970-01-01 00:00:00.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/functional/contrib/filters.template 2014-10-02 23:25:23.000000000 +0000 @@ -0,0 +1,12 @@ +# neutron-rootwrap command filters to support functional testing. It +# is NOT intended to be used outside of a test environment. +# +# This file should be owned by (and only-writeable by) the root user + +[Filters] +# '$BASE_PATH' is intended to be replaced with the expected tox path +# (e.g. 
/opt/stack/new/neutron/.tox/dsvm-functional) by the neutron +# functional jenkins job. This ensures that tests can kill the +# processes that they launch with their containing tox environment's +# python. +kill_tox_python: KillFilter, root, $BASE_PATH/bin/python, -9 diff -Nru neutron-2014.1.2/neutron/tests/functional/contrib/gate_hook.sh neutron-2014.1.3/neutron/tests/functional/contrib/gate_hook.sh --- neutron-2014.1.2/neutron/tests/functional/contrib/gate_hook.sh 1970-01-01 00:00:00.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/functional/contrib/gate_hook.sh 2014-10-02 23:25:18.000000000 +0000 @@ -0,0 +1,12 @@ +#!/bin/bash + +set -ex + +$BASE/new/devstack-gate/devstack-vm-gate.sh + +# Add a rootwrap filter to support test-only +# configuration (e.g. a KillFilter for processes that +# use the python installed in a tox env). +FUNC_FILTER=$BASE/new/neutron/neutron/tests/functional/contrib/filters.template +sed -e "s+\$BASE_PATH+$BASE/new/neutron/.tox/dsvm-functional+" \ + $FUNC_FILTER | sudo tee /etc/neutron/rootwrap.d/functional.filters > /dev/null diff -Nru neutron-2014.1.2/neutron/tests/functional/contrib/post_test_hook.sh neutron-2014.1.3/neutron/tests/functional/contrib/post_test_hook.sh --- neutron-2014.1.2/neutron/tests/functional/contrib/post_test_hook.sh 1970-01-01 00:00:00.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/functional/contrib/post_test_hook.sh 2014-10-02 23:25:18.000000000 +0000 @@ -0,0 +1,11 @@ +#!/bin/bash + +set -xe + +NEUTRON_DIR=$BASE/new/neutron + +# Run tests as the stack user to allow sudo+rootwrap. +sudo chown -R stack:stack $NEUTRON_DIR +cd $NEUTRON_DIR +echo "Running neutron functional test suite" +sudo -H -u stack tox -e dsvm-functional diff -Nru neutron-2014.1.2/neutron/tests/functional/contrib/README neutron-2014.1.3/neutron/tests/functional/contrib/README --- neutron-2014.1.2/neutron/tests/functional/contrib/README 1970-01-01 00:00:00.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/functional/contrib/README 2014-10-02 23:25:18.000000000 +0000 @@ -0,0 +1,3 @@ +The files in this directory are intended for use by the +neutron-dsvm-functional infra jobs that run the functional test suite +in the gate. 
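The gate_hook.sh above instantiates filters.template by replacing the $BASE_PATH placeholder with the job's tox path and installing the result under /etc/neutron/rootwrap.d, which lets the functional tests kill processes started from the tox environment's python. In Python terms the substitution amounts to something like this; illustrative only, the gate job itself uses sed and tee as shown:

    def render_rootwrap_filter(template_path, base_path):
        # Replace the $BASE_PATH placeholder so the KillFilter points at
        # the tox environment's python interpreter.
        with open(template_path) as f:
            template = f.read()
        return template.replace('$BASE_PATH', base_path)

    # Example, using the paths the gate job would use:
    # rendered = render_rootwrap_filter(
    #     'neutron/tests/functional/contrib/filters.template',
    #     '/opt/stack/new/neutron/.tox/dsvm-functional')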
diff -Nru neutron-2014.1.2/neutron/tests/unit/bigswitch/test_restproxy_plugin.py neutron-2014.1.3/neutron/tests/unit/bigswitch/test_restproxy_plugin.py --- neutron-2014.1.2/neutron/tests/unit/bigswitch/test_restproxy_plugin.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/bigswitch/test_restproxy_plugin.py 2014-10-02 23:25:23.000000000 +0000 @@ -79,6 +79,12 @@ super(TestBigSwitchProxyPortsV2, self).setUp(self._plugin_name) + def test_get_ports_no_id(self): + with self.port(name='test'): + ports = NeutronManager.get_plugin().get_ports( + context.get_admin_context(), fields=['name']) + self.assertEqual(['name'], ports[0].keys()) + def test_update_port_status_build(self): with self.port() as port: self.assertEqual(port['port']['status'], 'BUILD') diff -Nru neutron-2014.1.2/neutron/tests/unit/bigswitch/test_servermanager.py neutron-2014.1.3/neutron/tests/unit/bigswitch/test_servermanager.py --- neutron-2014.1.2/neutron/tests/unit/bigswitch/test_servermanager.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/bigswitch/test_servermanager.py 2014-10-02 23:25:23.000000000 +0000 @@ -21,8 +21,10 @@ import mock from oslo.config import cfg +from neutron import context from neutron.manager import NeutronManager from neutron.openstack.common import importutils +from neutron.openstack.common import jsonutils from neutron.plugins.bigswitch import servermanager from neutron.tests.unit.bigswitch import test_restproxy_plugin as test_rp @@ -80,6 +82,51 @@ rmock.assert_called_with('GET', '/health', '', {}, [], False) self.assertEqual(1, len(lmock.mock_calls)) + def test_consistency_hash_header(self): + # mock HTTP class instead of rest_call so we can see headers + with mock.patch(HTTPCON) as conmock: + rv = conmock.return_value + rv.getresponse.return_value.getheader.return_value = 'HASHHEADER' + rv.getresponse.return_value.status = 200 + rv.getresponse.return_value.read.return_value = '' + with self.network(): + callheaders = rv.request.mock_calls[0][1][3] + self.assertIn('X-BSN-BVS-HASH-MATCH', callheaders) + # first call will be empty to indicate no previous state hash + self.assertEqual(callheaders['X-BSN-BVS-HASH-MATCH'], '') + # change the header that will be received on delete call + rv.getresponse.return_value.getheader.return_value = 'HASH2' + + # net delete should have used header received on create + callheaders = rv.request.mock_calls[1][1][3] + self.assertEqual(callheaders['X-BSN-BVS-HASH-MATCH'], 'HASHHEADER') + + # create again should now use header received from prev delete + with self.network(): + callheaders = rv.request.mock_calls[2][1][3] + self.assertIn('X-BSN-BVS-HASH-MATCH', callheaders) + self.assertEqual(callheaders['X-BSN-BVS-HASH-MATCH'], + 'HASH2') + + def test_consistency_hash_header_no_update_on_bad_response(self): + # mock HTTP class instead of rest_call so we can see headers + with mock.patch(HTTPCON) as conmock: + rv = conmock.return_value + rv.getresponse.return_value.getheader.return_value = 'HASHHEADER' + rv.getresponse.return_value.status = 200 + rv.getresponse.return_value.read.return_value = '' + with self.network(): + # change the header that will be received on delete call + rv.getresponse.return_value.getheader.return_value = 'EVIL' + rv.getresponse.return_value.status = 'GARBAGE' + + # create again should not use header from delete call + with self.network(): + callheaders = rv.request.mock_calls[2][1][3] + self.assertIn('X-BSN-BVS-HASH-MATCH', callheaders) + 
self.assertEqual(callheaders['X-BSN-BVS-HASH-MATCH'], + 'HASHHEADER') + def test_file_put_contents(self): pl = NeutronManager.get_plugin() with mock.patch(SERVERMANAGER + '.open', create=True) as omock: @@ -109,6 +156,21 @@ mock.call.write('certdata') ]) + def test_req_context_header(self): + sp = NeutronManager.get_plugin().servers + ncontext = context.Context('uid', 'tid') + sp.set_context(ncontext) + with mock.patch(HTTPCON) as conmock: + rv = conmock.return_value + rv.getresponse.return_value.getheader.return_value = 'HASHHEADER' + sp.rest_action('GET', '/') + callheaders = rv.request.mock_calls[0][1][3] + self.assertIn(servermanager.REQ_CONTEXT_HEADER, callheaders) + ctxdct = ncontext.to_dict() + self.assertEqual( + ctxdct, jsonutils.loads( + callheaders[servermanager.REQ_CONTEXT_HEADER])) + def test_capabilities_retrieval(self): sp = servermanager.ServerPool() with mock.patch(HTTPCON) as conmock: @@ -186,6 +248,26 @@ resp = sp.servers[0].rest_call('GET', '/') self.assertEqual(resp, (0, None, None, None)) + def test_retry_on_unavailable(self): + pl = NeutronManager.get_plugin() + with nested( + mock.patch(SERVERMANAGER + '.ServerProxy.rest_call', + return_value=(httplib.SERVICE_UNAVAILABLE, 0, 0, 0)), + mock.patch(SERVERMANAGER + '.time.sleep') + ) as (srestmock, tmock): + # making a call should trigger retries with sleeps in between + pl.servers.rest_call('GET', '/', '', None, []) + rest_call = [mock.call('GET', '/', '', None, False, reconnect=True, + hash_handler=mock.ANY)] + rest_call_count = ( + servermanager.HTTP_SERVICE_UNAVAILABLE_RETRY_COUNT + 1) + srestmock.assert_has_calls(rest_call * rest_call_count) + sleep_call = [mock.call( + servermanager.HTTP_SERVICE_UNAVAILABLE_RETRY_INTERVAL)] + # should sleep 1 less time than the number of calls + sleep_call_count = rest_call_count - 1 + tmock.assert_has_calls(sleep_call * sleep_call_count) + class TestSockets(test_rp.BigSwitchProxyPluginV2TestCase): diff -Nru neutron-2014.1.2/neutron/tests/unit/cisco/n1kv/fake_client.py neutron-2014.1.3/neutron/tests/unit/cisco/n1kv/fake_client.py --- neutron-2014.1.2/neutron/tests/unit/cisco/n1kv/fake_client.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/cisco/n1kv/fake_client.py 2014-10-02 23:25:23.000000000 +0000 @@ -28,7 +28,11 @@ 'networkSegment', 'portProfile', 'portProfileId', 'tenantId', 'portId', 'macAddress', - 'ipAddress', 'subnetId']} + 'ipAddress', 'subnetId'], + 'subnet': ['addressRangeStart', 'addressRangeEnd', + 'ipAddressSubnet', 'description', 'gateway', + 'dhcp', 'dnsServersList', 'networkAddress', + 'netSegmentName', 'id', 'tenantId']} class TestClient(n1kv_client): @@ -65,6 +69,13 @@ self.inject_params = True +class TestClientInvalidResponse(TestClient): + + def __init__(self, **kwargs): + super(TestClientInvalidResponse, self).__init__() + self.broken = True + + def _validate_resource(action, body=None): if body: body_set = set(body.keys()) @@ -78,6 +89,10 @@ port_set = set(_resource_metadata['port']) if body_set - port_set: raise c_exc.VSMError(reason='Invalid Request') + elif 'subnet' in action: + subnet_set = set(_resource_metadata['subnet']) + if body_set - subnet_set: + raise c_exc.VSMError(reason='Invalid Request') else: return diff -Nru neutron-2014.1.2/neutron/tests/unit/cisco/n1kv/test_n1kv_plugin.py neutron-2014.1.3/neutron/tests/unit/cisco/n1kv/test_n1kv_plugin.py --- neutron-2014.1.2/neutron/tests/unit/cisco/n1kv/test_n1kv_plugin.py 2014-08-07 22:56:02.000000000 +0000 +++ 
neutron-2014.1.3/neutron/tests/unit/cisco/n1kv/test_n1kv_plugin.py 2014-10-02 23:25:23.000000000 +0000 @@ -26,8 +26,10 @@ import neutron.db.api as db from neutron.extensions import portbindings from neutron import manager +from neutron.plugins.cisco.common import cisco_constants as c_const from neutron.plugins.cisco.common import cisco_exceptions as c_exc from neutron.plugins.cisco.db import n1kv_db_v2 +from neutron.plugins.cisco.db import n1kv_models_v2 from neutron.plugins.cisco.db import network_db_v2 as cdb from neutron.plugins.cisco import extensions from neutron.plugins.cisco.extensions import n1kv @@ -38,6 +40,8 @@ from neutron.tests.unit.cisco.n1kv import fake_client from neutron.tests.unit import test_api_v2 from neutron.tests.unit import test_db_plugin as test_plugin +from neutron.tests.unit import test_l3_plugin +from neutron.tests.unit import test_l3_schedulers PHYS_NET = 'some-phys-net' @@ -116,6 +120,7 @@ def _make_test_profile(self, name='default_network_profile', + segment_type=c_const.NETWORK_TYPE_VLAN, segment_range='386-400'): """ Create a profile record for testing purposes. @@ -123,17 +128,24 @@ :param name: string representing the name of the network profile to create. Default argument value chosen to correspond to the default name specified in config.py file. + :param segment_type: string representing the type of network segment. :param segment_range: string representing the segment range for network profile. """ db_session = db.get_session() profile = {'name': name, - 'segment_type': 'vlan', - 'physical_network': PHYS_NET, + 'segment_type': segment_type, 'tenant_id': self.tenant_id, 'segment_range': segment_range} - net_p = n1kv_db_v2.create_network_profile(db_session, profile) - n1kv_db_v2.sync_vlan_allocations(db_session, net_p) + if segment_type == c_const.NETWORK_TYPE_OVERLAY: + profile['sub_type'] = 'unicast' + profile['multicast_ip_range'] = '0.0.0.0' + net_p = n1kv_db_v2.create_network_profile(db_session, profile) + n1kv_db_v2.sync_vxlan_allocations(db_session, net_p) + elif segment_type == c_const.NETWORK_TYPE_VLAN: + profile['physical_network'] = PHYS_NET + net_p = n1kv_db_v2.create_network_profile(db_session, profile) + n1kv_db_v2.sync_vlan_allocations(db_session, net_p) return net_p def setUp(self): @@ -280,6 +292,13 @@ res = net_p_req.get_response(self.ext_api) self.assertEqual(res.status_int, 201) + def test_create_network_profile_overlay_missing_subtype(self): + data = self._prepare_net_profile_data(c_const.NETWORK_TYPE_OVERLAY) + data['network_profile'].pop('sub_type') + net_p_req = self.new_create_request('network_profiles', data) + res = net_p_req.get_response(self.ext_api) + self.assertEqual(res.status_int, 400) + def test_create_network_profile_trunk(self): data = self._prepare_net_profile_data('trunk') net_p_req = self.new_create_request('network_profiles', data) @@ -544,6 +563,18 @@ PHYS_NET, vlan) + def test_create_network_profile_rollback_profile_binding(self): + """Test rollback of profile binding if network profile create fails.""" + db_session = db.get_session() + client_patch = patch(n1kv_client.__name__ + ".Client", + new=fake_client.TestClientInvalidResponse) + client_patch.start() + net_p_dict = self._prepare_net_profile_data(c_const.NETWORK_TYPE_VLAN) + self.new_create_request('network_profiles', net_p_dict) + bindings = (db_session.query(n1kv_models_v2.ProfileBinding).filter_by( + profile_type="network")) + self.assertEqual(bindings.count(), 0) + class TestN1kvBasicGet(test_plugin.TestBasicGet, N1kvPluginTestCase): @@ -626,6 +657,51 
@@ self.assertEqual(res.status_int, 500) client_patch.stop() + def test_create_first_port_rollback_vmnetwork(self): + """Test whether VMNetwork is cleaned up if port create fails on VSM.""" + db_session = db.get_session() + profile_obj = self._make_test_policy_profile(name='test_profile') + with self.network() as network: + client_patch = patch(n1kv_client.__name__ + ".Client", + new=fake_client.TestClientInvalidResponse) + client_patch.start() + data = {'port': {n1kv.PROFILE_ID: profile_obj.id, + 'tenant_id': self.tenant_id, + 'network_id': network['network']['id'], + }} + self.new_create_request('ports', data) + self.assertRaises(c_exc.VMNetworkNotFound, + n1kv_db_v2.get_vm_network, + db_session, + profile_obj.id, + network['network']['id']) + # Explicit stop of failure response mock from controller required + # for network object clean up to succeed. + client_patch.stop() + + def test_create_next_port_rollback_vmnetwork_count(self): + """Test whether VMNetwork count if port create fails on VSM.""" + db_session = db.get_session() + with self.port() as port: + pt = port['port'] + old_vmn = n1kv_db_v2.get_vm_network(db_session, + pt['n1kv:profile_id'], + pt['network_id']) + client_patch = patch(n1kv_client.__name__ + ".Client", + new=fake_client.TestClientInvalidResponse) + client_patch.start() + data = {'port': {n1kv.PROFILE_ID: pt['n1kv:profile_id'], + 'tenant_id': pt['tenant_id'], + 'network_id': pt['network_id']}} + self.new_create_request('ports', data) + new_vmn = n1kv_db_v2.get_vm_network(db_session, + pt['n1kv:profile_id'], + pt['network_id']) + self.assertEqual(old_vmn.port_count, new_vmn.port_count) + # Explicit stop of failure response mock from controller required + # for network object clean up to succeed. + client_patch.stop() + class TestN1kvPolicyProfiles(N1kvPluginTestCase): def test_populate_policy_profile(self): @@ -713,8 +789,57 @@ # Network update should fail to update network profile id. 
self.assertEqual(res.status_int, 400) + def test_create_network_rollback_deallocate_vlan_segment(self): + """Test vlan segment deallocation on network create failure.""" + profile_obj = self._make_test_profile(name='test_profile', + segment_range='20-23') + data = self._prepare_net_data(profile_obj.id) + client_patch = patch(n1kv_client.__name__ + ".Client", + new=fake_client.TestClientInvalidResponse) + client_patch.start() + self.new_create_request('networks', data) + db_session = db.get_session() + self.assertFalse(n1kv_db_v2.get_vlan_allocation(db_session, + PHYS_NET, + 20).allocated) + + def test_create_network_rollback_deallocate_overlay_segment(self): + """Test overlay segment deallocation on network create failure.""" + profile_obj = self._make_test_profile('test_np', + c_const.NETWORK_TYPE_OVERLAY, + '10000-10001') + data = self._prepare_net_data(profile_obj.id) + client_patch = patch(n1kv_client.__name__ + ".Client", + new=fake_client.TestClientInvalidResponse) + client_patch.start() + self.new_create_request('networks', data) + db_session = db.get_session() + self.assertFalse(n1kv_db_v2.get_vxlan_allocation(db_session, + 10000).allocated) + class TestN1kvSubnets(test_plugin.TestSubnetsV2, N1kvPluginTestCase): + def test_create_subnet_with_invalid_parameters(self): + """Test subnet creation with invalid parameters sent to the VSM.""" + with self.network() as network: + client_patch = patch(n1kv_client.__name__ + ".Client", + new=fake_client.TestClientInvalidRequest) + client_patch.start() + data = {'subnet': {'network_id': network['network']['id'], + 'cidr': "10.0.0.0/24"}} + subnet_req = self.new_create_request('subnets', data) + subnet_resp = subnet_req.get_response(self.api) + # Subnet creation should fail due to invalid network name + self.assertEqual(subnet_resp.status_int, 400) + + +class TestN1kvL3Test(test_l3_plugin.L3NatExtensionTestCase): + + pass + + +class TestN1kvL3SchedulersTest(test_l3_schedulers.L3SchedulerTestCase): + pass diff -Nru neutron-2014.1.2/neutron/tests/unit/db/firewall/test_db_firewall.py neutron-2014.1.3/neutron/tests/unit/db/firewall/test_db_firewall.py --- neutron-2014.1.2/neutron/tests/unit/db/firewall/test_db_firewall.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/db/firewall/test_db_firewall.py 2014-10-02 23:25:23.000000000 +0000 @@ -580,6 +580,20 @@ for k, v in attrs.iteritems(): self.assertEqual(firewall_rule['firewall_rule'][k], v) + def test_create_firewall_rule_without_protocol_with_dport(self): + attrs = self._get_test_firewall_rule_attrs() + attrs['protocol'] = None + attrs['source_port'] = None + res = self._create_firewall_rule(self.fmt, **attrs) + self.assertEqual(400, res.status_int) + + def test_create_firewall_rule_without_protocol_with_sport(self): + attrs = self._get_test_firewall_rule_attrs() + attrs['protocol'] = None + attrs['destination_port'] = None + res = self._create_firewall_rule(self.fmt, **attrs) + self.assertEqual(400, res.status_int) + def test_show_firewall_rule_with_fw_policy_not_associated(self): attrs = self._get_test_firewall_rule_attrs() with self.firewall_rule() as fw_rule: @@ -675,6 +689,46 @@ for k, v in attrs.iteritems(): self.assertEqual(res['firewall_rule'][k], v) + def test_update_firewall_rule_with_port_and_no_proto(self): + with self.firewall_rule() as fwr: + data = {'firewall_rule': {'protocol': None, + 'destination_port': 80}} + req = self.new_update_request('firewall_rules', data, + fwr['firewall_rule']['id']) + res = req.get_response(self.ext_api) + 
self.assertEqual(400, res.status_int) + + def test_update_firewall_rule_without_ports_and_no_proto(self): + with self.firewall_rule() as fwr: + data = {'firewall_rule': {'protocol': None, + 'destination_port': None, + 'source_port': None}} + req = self.new_update_request('firewall_rules', data, + fwr['firewall_rule']['id']) + res = req.get_response(self.ext_api) + self.assertEqual(200, res.status_int) + + def test_update_firewall_rule_with_port(self): + with self.firewall_rule(source_port=None, + destination_port=None, + protocol=None) as fwr: + data = {'firewall_rule': {'destination_port': 80}} + req = self.new_update_request('firewall_rules', data, + fwr['firewall_rule']['id']) + res = req.get_response(self.ext_api) + self.assertEqual(400, res.status_int) + + def test_update_firewall_rule_with_port_and_protocol(self): + with self.firewall_rule(source_port=None, + destination_port=None, + protocol=None) as fwr: + data = {'firewall_rule': {'destination_port': 80, + 'protocol': 'tcp'}} + req = self.new_update_request('firewall_rules', data, + fwr['firewall_rule']['id']) + res = req.get_response(self.ext_api) + self.assertEqual(200, res.status_int) + def test_update_firewall_rule_with_policy_associated(self): name = "new_firewall_rule1" attrs = self._get_test_firewall_rule_attrs(name) diff -Nru neutron-2014.1.2/neutron/tests/unit/ml2/drivers/test_bigswitch_mech.py neutron-2014.1.3/neutron/tests/unit/ml2/drivers/test_bigswitch_mech.py --- neutron-2014.1.2/neutron/tests/unit/ml2/drivers/test_bigswitch_mech.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/ml2/drivers/test_bigswitch_mech.py 2014-10-02 23:25:23.000000000 +0000 @@ -31,7 +31,8 @@ PHYS_NET = 'physnet1' VLAN_START = 1000 VLAN_END = 1100 -SERVER_POOL = 'neutron.plugins.bigswitch.servermanager.ServerPool' +SERVER_MANAGER = 'neutron.plugins.bigswitch.servermanager' +SERVER_POOL = SERVER_MANAGER + '.ServerPool' DRIVER_MOD = 'neutron.plugins.ml2.drivers.mech_bigswitch.driver' DRIVER = DRIVER_MOD + '.BigSwitchMechanismDriver' @@ -120,3 +121,11 @@ self.assertEqual('host', pb['binding:host_id']) self.assertIn('bound_segment', pb) self.assertIn('network', pb) + + def test_req_context_header_present(self): + with nested( + mock.patch(SERVER_MANAGER + '.ServerProxy.rest_call'), + self.port(**{'device_id': 'devid', 'binding:host_id': 'host'}) + ) as (mock_rest, p): + headers = mock_rest.mock_calls[0][1][3] + self.assertIn('X-REQ-CONTEXT', headers) diff -Nru neutron-2014.1.2/neutron/tests/unit/ml2/test_mechanism_odl.py neutron-2014.1.3/neutron/tests/unit/ml2/test_mechanism_odl.py --- neutron-2014.1.2/neutron/tests/unit/ml2/test_mechanism_odl.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/ml2/test_mechanism_odl.py 2014-10-02 23:25:23.000000000 +0000 @@ -14,10 +14,14 @@ # under the License. # @author: Kyle Mestery, Cisco Systems, Inc. 
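The firewall rule tests above pin down the validation this release adds: a rule that carries a source or destination port must also carry a protocol, on create as well as on update, otherwise the API answers 400. Reduced to its essentials the check looks something like the sketch below; the real validation lives in the FWaaS DB plugin and raises a dedicated FWaaS exception rather than ValueError:

    def validate_fwr_port_protocol(rule):
        # A port value only makes sense for tcp/udp, so reject ports
        # whenever the rule has no protocol set.
        has_port = (rule.get('source_port') is not None or
                    rule.get('destination_port') is not None)
        if has_port and not rule.get('protocol'):
            raise ValueError("Source/destination port requires a protocol")

    # e.g. validate_fwr_port_protocol({'protocol': None,
    #                                  'destination_port': 80})  # rejected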
+import mock +import requests + from neutron.plugins.common import constants from neutron.plugins.ml2 import config as config from neutron.plugins.ml2 import driver_api as api from neutron.plugins.ml2.drivers import mechanism_odl +from neutron.tests import base from neutron.tests.unit import test_db_plugin as test_plugin PLUGIN_NAME = 'neutron.plugins.ml2.plugin.Ml2Plugin' @@ -77,3 +81,93 @@ class OpenDaylightMechanismTestPortsV2(test_plugin.TestPortsV2, OpenDaylightTestCase): pass + + +class AuthMatcher(object): + def __eq__(self, obj): + return (obj.username == config.cfg.CONF.ml2_odl.username and + obj.password == config.cfg.CONF.ml2_odl.password) + + +class OpenDaylightMechanismDriverTestCase(base.BaseTestCase): + + def setUp(self): + super(OpenDaylightMechanismDriverTestCase, self).setUp() + config.cfg.CONF.set_override('mechanism_drivers', + ['logger', 'opendaylight'], 'ml2') + config.cfg.CONF.set_override('url', 'http://127.0.0.1:9999', 'ml2_odl') + config.cfg.CONF.set_override('username', 'someuser', 'ml2_odl') + config.cfg.CONF.set_override('password', 'somepass', 'ml2_odl') + self.mech = mechanism_odl.OpenDaylightMechanismDriver() + self.mech.initialize() + + @staticmethod + def _get_mock_delete_resource_context(): + current = {'id': '00000000-1111-2222-3333-444444444444'} + context = mock.Mock(current=current) + return context + + _status_code_msgs = { + 204: '', + 401: '401 Client Error: Unauthorized', + 403: '403 Client Error: Forbidden', + 404: '404 Client Error: Not Found', + 409: '409 Client Error: Conflict', + 501: '501 Server Error: Not Implemented' + } + + @classmethod + def _get_mock_request_response(cls, status_code): + response = mock.Mock(status_code=status_code) + response.raise_for_status = mock.Mock() if status_code < 400 else ( + mock.Mock(side_effect=requests.exceptions.HTTPError( + cls._status_code_msgs[status_code]))) + return response + + def _test_delete_resource_postcommit(self, object_type, status_code, + exc_class=None): + self.mech.out_of_sync = False + method = getattr(self.mech, 'delete_%s_postcommit' % object_type) + context = self._get_mock_delete_resource_context() + request_response = self._get_mock_request_response(status_code) + with mock.patch('requests.request', + return_value=request_response) as mock_method: + if exc_class is not None: + self.assertRaises(exc_class, method, context) + else: + method(context) + url = '%s/%ss/%s' % (config.cfg.CONF.ml2_odl.url, object_type, + context.current['id']) + mock_method.assert_called_once_with( + 'delete', url=url, headers={'Content-Type': 'application/json'}, + data=None, auth=AuthMatcher(), + timeout=config.cfg.CONF.ml2_odl.timeout) + + def test_delete_network_postcommit(self): + self._test_delete_resource_postcommit('network', + requests.codes.no_content) + for status_code in (requests.codes.unauthorized, + requests.codes.not_found, + requests.codes.conflict): + self._test_delete_resource_postcommit( + 'network', status_code, requests.exceptions.HTTPError) + + def test_delete_subnet_postcommit(self): + self._test_delete_resource_postcommit('subnet', + requests.codes.no_content) + for status_code in (requests.codes.unauthorized, + requests.codes.not_found, + requests.codes.conflict, + requests.codes.not_implemented): + self._test_delete_resource_postcommit( + 'subnet', status_code, requests.exceptions.HTTPError) + + def test_delete_port_postcommit(self): + self._test_delete_resource_postcommit('port', + requests.codes.no_content) + for status_code in (requests.codes.unauthorized, + 
requests.codes.forbidden, + requests.codes.not_found, + requests.codes.not_implemented): + self._test_delete_resource_postcommit( + 'port', status_code, requests.exceptions.HTTPError) diff -Nru neutron-2014.1.2/neutron/tests/unit/ml2/test_ml2_plugin.py neutron-2014.1.3/neutron/tests/unit/ml2/test_ml2_plugin.py --- neutron-2014.1.2/neutron/tests/unit/ml2/test_ml2_plugin.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/ml2/test_ml2_plugin.py 2014-10-02 23:25:23.000000000 +0000 @@ -27,6 +27,7 @@ from neutron.plugins.common import constants as service_constants from neutron.plugins.ml2.common import exceptions as ml2_exc from neutron.plugins.ml2 import config +from neutron.plugins.ml2 import driver_api from neutron.plugins.ml2 import plugin as ml2_plugin from neutron.tests.unit import _test_extension_portbindings as test_bindings from neutron.tests.unit import test_db_plugin as test_plugin @@ -326,6 +327,17 @@ res = network_req.get_response(self.api) self.assertEqual(res.status_int, 400) + def test_release_segment_no_type_driver(self): + segment = {driver_api.NETWORK_TYPE: 'faketype', + driver_api.PHYSICAL_NETWORK: 'physnet1', + driver_api.ID: 1} + with mock.patch('neutron.plugins.ml2.managers.LOG') as log: + self.driver.type_manager.release_segment(session=None, + segment=segment) + log.error.assert_called_once_with( + "Failed to release segment '%s' because " + "network type is not supported.", segment) + def test_create_provider_fail(self): segment = {pnet.NETWORK_TYPE: None, pnet.PHYSICAL_NETWORK: 'phys_net', diff -Nru neutron-2014.1.2/neutron/tests/unit/openvswitch/test_openvswitch_plugin.py neutron-2014.1.3/neutron/tests/unit/openvswitch/test_openvswitch_plugin.py --- neutron-2014.1.2/neutron/tests/unit/openvswitch/test_openvswitch_plugin.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/openvswitch/test_openvswitch_plugin.py 2014-10-02 23:25:23.000000000 +0000 @@ -15,12 +15,17 @@ from oslo.config import cfg +from neutron import context from neutron.extensions import portbindings +from neutron.extensions import securitygroup as ext_sg +from neutron.plugins.openvswitch import ovs_neutron_plugin from neutron.tests.unit import _test_extension_portbindings as test_bindings from neutron.tests.unit import test_db_plugin as test_plugin from neutron.tests.unit import test_extension_allowedaddresspairs as test_pair from neutron.tests.unit import test_security_groups_rpc as test_sg_rpc +import mock + class OpenvswitchPluginV2TestCase(test_plugin.NeutronDbPluginV2TestCase): @@ -86,3 +91,69 @@ class TestOpenvswitchAllowedAddressPairs(OpenvswitchPluginV2TestCase, test_pair.TestAllowedAddressPairs): pass + + +class TestOpenvswitchUpdatePort(OpenvswitchPluginV2TestCase, + ovs_neutron_plugin.OVSNeutronPluginV2): + + def test_update_port_add_remove_security_group(self): + get_port_func = ( + 'neutron.db.db_base_plugin_v2.' + 'NeutronDbPluginV2.get_port' + ) + with mock.patch(get_port_func) as mock_get_port: + mock_get_port.return_value = { + ext_sg.SECURITYGROUPS: ["sg1", "sg2"], + "admin_state_up": True, + "fixed_ips": "fake_ip", + "network_id": "fake_id"} + + update_port_func = ( + 'neutron.db.db_base_plugin_v2.' + 'NeutronDbPluginV2.update_port' + ) + with mock.patch(update_port_func) as mock_update_port: + mock_update_port.return_value = { + ext_sg.SECURITYGROUPS: ["sg2", "sg3"], + "admin_state_up": True, + "fixed_ips": "fake_ip", + "network_id": "fake_id"} + + fake_func = ( + 'neutron.plugins.openvswitch.' 
+ 'ovs_db_v2.get_network_binding' + ) + with mock.patch(fake_func) as mock_func: + class MockBinding: + network_type = "fake" + segmentation_id = "fake" + physical_network = "fake" + + mock_func.return_value = MockBinding() + + ctx = context.Context('', 'somebody') + self.update_port(ctx, "id", { + "port": { + ext_sg.SECURITYGROUPS: [ + "sg2", "sg3"]}}) + + sgmu = self.notifier.security_groups_member_updated + sgmu.assert_called_with(ctx, set(['sg1', 'sg3'])) + + def setUp(self): + super(TestOpenvswitchUpdatePort, self).setUp() + self.update_security_group_on_port = mock.MagicMock(return_value=True) + self._process_portbindings_create_and_update = mock.MagicMock( + return_value=True) + self._update_extra_dhcp_opts_on_port = mock.MagicMock( + return_value=True) + self.update_address_pairs_on_port = mock.MagicMock( + return_value=True) + + class MockNotifier: + def __init__(self): + self.port_update = mock.MagicMock(return_value=True) + self.security_groups_member_updated = mock.MagicMock( + return_value=True) + + self.notifier = MockNotifier() diff -Nru neutron-2014.1.2/neutron/tests/unit/services/metering/drivers/test_iptables_driver.py neutron-2014.1.3/neutron/tests/unit/services/metering/drivers/test_iptables_driver.py --- neutron-2014.1.2/neutron/tests/unit/services/metering/drivers/test_iptables_driver.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/services/metering/drivers/test_iptables_driver.py 2014-10-02 23:25:23.000000000 +0000 @@ -68,7 +68,8 @@ self.iptables_cls.assert_called_with(root_helper='fake_sudo', namespace=mock.ANY, - binary_name=mock.ANY) + binary_name=mock.ANY, + use_ipv6=mock.ANY) def test_add_metering_label(self): routers = [{'_metering_labels': [ diff -Nru neutron-2014.1.2/neutron/tests/unit/test_api_v2.py neutron-2014.1.3/neutron/tests/unit/test_api_v2.py --- neutron-2014.1.2/neutron/tests/unit/test_api_v2.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_api_v2.py 2014-10-02 23:25:23.000000000 +0000 @@ -1208,6 +1208,18 @@ network_id='id1', dummy=body) + def test_update_subresource_to_none(self): + instance = self.plugin.return_value + + dummy_id = _uuid() + body = {'dummy': {}} + self.api.put_json('/networks/id1' + _get_path('dummies', id=dummy_id), + body) + instance.update_network_dummy.assert_called_once_with(mock.ANY, + dummy_id, + network_id='id1', + dummy=body) + def test_delete_sub_resource(self): instance = self.plugin.return_value diff -Nru neutron-2014.1.2/neutron/tests/unit/test_db_plugin_level.py neutron-2014.1.3/neutron/tests/unit/test_db_plugin_level.py --- neutron-2014.1.2/neutron/tests/unit/test_db_plugin_level.py 1970-01-01 00:00:00.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_db_plugin_level.py 2014-10-02 23:25:23.000000000 +0000 @@ -0,0 +1,83 @@ +# Copyright (c) 2014 Red Hat, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
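The TestOpenvswitchUpdatePort case above expects the agent notification to carry only the security groups whose membership actually changed; the plugin hunk earlier computes that as the symmetric difference of the old and new group sets, which is worth spelling out:

    old_set = set(['sg1', 'sg2'])          # groups before the port update
    new_set = set(['sg2', 'sg3'])          # groups after the port update
    changed = old_set ^ new_set            # symmetric difference
    assert changed == set(['sg1', 'sg3'])  # what the notifier receives

Groups present in both sets (sg2 here) trigger no member-updated notification.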
+ +from neutron.api.v2 import attributes +from neutron.common import constants +from neutron.common import exceptions as n_exc +from neutron import context +from neutron import manager +from neutron.tests import base +from neutron.tests.unit import test_db_plugin +from neutron.tests.unit import testlib_api + + +class TestNetworks(base.BaseTestCase): + def setUp(self): + super(TestNetworks, self).setUp() + self._tenant_id = 'test-tenant' + + # Update the plugin + self.setup_coreplugin(test_db_plugin.DB_PLUGIN_KLASS) + + def _create_network(self, plugin, ctx, shared=True): + network = {'network': {'name': 'net', + 'shared': shared, + 'admin_state_up': True, + 'tenant_id': self._tenant_id}} + created_network = plugin.create_network(ctx, network) + return (network, created_network['id']) + + def _create_port(self, plugin, ctx, net_id, device_owner, tenant_id): + port = {'port': {'name': 'port', + 'network_id': net_id, + 'mac_address': attributes.ATTR_NOT_SPECIFIED, + 'fixed_ips': attributes.ATTR_NOT_SPECIFIED, + 'admin_state_up': True, + 'device_id': 'device_id', + 'device_owner': device_owner, + 'tenant_id': tenant_id}} + plugin.create_port(ctx, port) + + def _test_update_shared_net_used(self, + device_owner, + expected_exception=None): + plugin = manager.NeutronManager.get_plugin() + ctx = context.get_admin_context() + network, net_id = self._create_network(plugin, ctx) + + self._create_port(plugin, + ctx, + net_id, + device_owner, + self._tenant_id + '1') + + network['network']['shared'] = False + + if (expected_exception): + with testlib_api.ExpectedException(expected_exception): + plugin.update_network(ctx, net_id, network) + else: + plugin.update_network(ctx, net_id, network) + + def test_update_shared_net_used_fails(self): + self._test_update_shared_net_used('', n_exc.InvalidSharedSetting) + + def test_update_shared_net_used_as_router_gateway(self): + self._test_update_shared_net_used( + constants.DEVICE_OWNER_ROUTER_GW) + + def test_update_shared_net_used_by_floating_ip(self): + self._test_update_shared_net_used( + constants.DEVICE_OWNER_FLOATINGIP) diff -Nru neutron-2014.1.2/neutron/tests/unit/test_dhcp_agent.py neutron-2014.1.3/neutron/tests/unit/test_dhcp_agent.py --- neutron-2014.1.2/neutron/tests/unit/test_dhcp_agent.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_dhcp_agent.py 2014-10-02 23:25:23.000000000 +0000 @@ -88,12 +88,14 @@ fake_port1 = dhcp.DictModel(dict(id='12345678-1234-aaaa-1234567890ab', device_id='dhcp-12345678-1234-aaaa-1234567890ab', + device_owner='', allocation_pools=fake_subnet1_allocation_pools, mac_address='aa:bb:cc:dd:ee:ff', network_id='12345678-1234-5678-1234567890ab', fixed_ips=[fake_fixed_ip1])) fake_port2 = dhcp.DictModel(dict(id='12345678-1234-aaaa-123456789000', + device_owner='', mac_address='aa:bb:cc:dd:ee:99', network_id='12345678-1234-5678-1234567890ab', fixed_ips=[])) @@ -111,6 +113,22 @@ subnets=[fake_subnet1, fake_subnet2], ports=[fake_port1])) +isolated_network = dhcp.NetModel( + True, dict( + id='12345678-1234-5678-1234567890ab', + tenant_id='aaaaaaaa-aaaa-aaaa-aaaaaaaaaaaa', + admin_state_up=True, + subnets=[fake_subnet1], + ports=[fake_port1])) + +empty_network = dhcp.NetModel( + True, dict( + id='12345678-1234-5678-1234567890ab', + tenant_id='aaaaaaaa-aaaa-aaaa-aaaaaaaaaaaa', + admin_state_up=True, + subnets=[fake_subnet1], + ports=[])) + fake_meta_network = dhcp.NetModel( True, dict(id='12345678-1234-5678-1234567890ab', tenant_id='aaaaaaaa-aaaa-aaaa-aaaaaaaaaaaa', @@ -497,16 +515,17 @@ 
self.mock_init_p.stop() super(TestDhcpAgentEventHandler, self).tearDown() - def _enable_dhcp_helper(self, isolated_metadata=False): - if isolated_metadata: + def _enable_dhcp_helper(self, network, enable_isolated_metadata=False, + is_isolated_network=False): + if enable_isolated_metadata: cfg.CONF.set_override('enable_isolated_metadata', True) - self.plugin.get_network_info.return_value = fake_network - self.dhcp.enable_dhcp_helper(fake_network.id) + self.plugin.get_network_info.return_value = network + self.dhcp.enable_dhcp_helper(network.id) self.plugin.assert_has_calls( - [mock.call.get_network_info(fake_network.id)]) - self.call_driver.assert_called_once_with('enable', fake_network) - self.cache.assert_has_calls([mock.call.put(fake_network)]) - if isolated_metadata: + [mock.call.get_network_info(network.id)]) + self.call_driver.assert_called_once_with('enable', network) + self.cache.assert_has_calls([mock.call.put(network)]) + if is_isolated_network: self.external_process.assert_has_calls([ mock.call( cfg.CONF, @@ -518,11 +537,35 @@ else: self.assertFalse(self.external_process.call_count) - def test_enable_dhcp_helper_enable_isolated_metadata(self): - self._enable_dhcp_helper(isolated_metadata=True) + def test_enable_dhcp_helper_enable_metadata_isolated_network(self): + self._enable_dhcp_helper(isolated_network, + enable_isolated_metadata=True, + is_isolated_network=True) + + def test_enable_dhcp_helper_enable_metadata_no_gateway(self): + isolated_network_no_gateway = copy.deepcopy(isolated_network) + isolated_network_no_gateway.subnets[0].gateway_ip = None + + self._enable_dhcp_helper(isolated_network_no_gateway, + enable_isolated_metadata=True, + is_isolated_network=True) + + def test_enable_dhcp_helper_enable_metadata_nonisolated_network(self): + nonisolated_network = copy.deepcopy(isolated_network) + nonisolated_network.ports[0].device_owner = "network:router_interface" + nonisolated_network.ports[0].fixed_ips[0].ip_address = '172.9.9.1' + + self._enable_dhcp_helper(nonisolated_network, + enable_isolated_metadata=True, + is_isolated_network=False) + + def test_enable_dhcp_helper_enable_metadata_empty_network(self): + self._enable_dhcp_helper(empty_network, + enable_isolated_metadata=True, + is_isolated_network=True) def test_enable_dhcp_helper(self): - self._enable_dhcp_helper() + self._enable_dhcp_helper(fake_network) def test_enable_dhcp_helper_down_network(self): self.plugin.get_network_info.return_value = fake_down_network diff -Nru neutron-2014.1.2/neutron/tests/unit/test_extension_security_group.py neutron-2014.1.3/neutron/tests/unit/test_extension_security_group.py --- neutron-2014.1.2/neutron/tests/unit/test_extension_security_group.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_extension_security_group.py 2014-10-02 23:25:23.000000000 +0000 @@ -1421,5 +1421,15 @@ self.assertEqual(ext_sg.convert_ip_prefix_to_cidr(addr), addr) +class TestConvertProtocol(base.BaseTestCase): + def test_convert_numeric_protocol(self): + assert(isinstance(ext_sg.convert_protocol('2'), str)) + + def test_convert_bad_protocol(self): + for val in ['bad', '256', '-1']: + self.assertRaises(ext_sg.SecurityGroupRuleInvalidProtocol, + ext_sg.convert_protocol, val) + + class TestSecurityGroupsXML(TestSecurityGroups): fmt = 'xml' diff -Nru neutron-2014.1.2/neutron/tests/unit/test_iptables_manager.py neutron-2014.1.3/neutron/tests/unit/test_iptables_manager.py --- neutron-2014.1.2/neutron/tests/unit/test_iptables_manager.py 2014-08-07 22:56:02.000000000 +0000 +++ 
neutron-2014.1.3/neutron/tests/unit/test_iptables_manager.py 2014-10-02 23:25:23.000000000 +0000 @@ -69,8 +69,8 @@ def setUp(self): super(IptablesManagerStateFulTestCase, self).setUp() self.root_helper = 'sudo' - self.iptables = (iptables_manager. - IptablesManager(root_helper=self.root_helper)) + self.iptables = iptables_manager.IptablesManager( + root_helper=self.root_helper) self.execute = mock.patch.object(self.iptables, "execute").start() def test_binary_name(self): @@ -87,12 +87,32 @@ self.assertEqual(iptables_manager.get_chain_name(name, wrap=True), name[:11]) - def test_add_and_remove_chain_custom_binary_name(self): + def _extend_with_ip6tables_filter(self, expected_calls, filter_dump): + expected_calls.insert(2, ( + mock.call(['ip6tables-save', '-c'], + root_helper=self.root_helper), + '')) + expected_calls.insert(3, ( + mock.call(['ip6tables-restore', '-c'], + process_input=filter_dump, + root_helper=self.root_helper), + None)) + expected_calls.extend([ + (mock.call(['ip6tables-save', '-c'], + root_helper=self.root_helper), + ''), + (mock.call(['ip6tables-restore', '-c'], + process_input=filter_dump, + root_helper=self.root_helper), + None)]) + + def _test_add_and_remove_chain_custom_binary_name_helper(self, use_ipv6): bn = ("abcdef" * 5) - self.iptables = (iptables_manager. - IptablesManager(root_helper=self.root_helper, - binary_name=bn)) + self.iptables = iptables_manager.IptablesManager( + root_helper=self.root_helper, + binary_name=bn, + use_ipv6=use_ipv6) self.execute = mock.patch.object(self.iptables, "execute").start() iptables_args = {'bn': bn[:16]} @@ -114,6 +134,23 @@ 'COMMIT\n' '# Completed by iptables_manager\n' % iptables_args) + filter_dump_ipv6 = ('# Generated by iptables_manager\n' + '*filter\n' + ':neutron-filter-top - [0:0]\n' + ':%(bn)s-FORWARD - [0:0]\n' + ':%(bn)s-INPUT - [0:0]\n' + ':%(bn)s-local - [0:0]\n' + ':%(bn)s-OUTPUT - [0:0]\n' + '[0:0] -A FORWARD -j neutron-filter-top\n' + '[0:0] -A OUTPUT -j neutron-filter-top\n' + '[0:0] -A neutron-filter-top -j %(bn)s-local\n' + '[0:0] -A INPUT -j %(bn)s-INPUT\n' + '[0:0] -A OUTPUT -j %(bn)s-OUTPUT\n' + '[0:0] -A FORWARD -j %(bn)s-FORWARD\n' + 'COMMIT\n' + '# Completed by iptables_manager\n' % + iptables_args) + filter_dump_mod = ('# Generated by iptables_manager\n' '*filter\n' ':neutron-filter-top - [0:0]\n' @@ -166,6 +203,10 @@ root_helper=self.root_helper), None), ] + if use_ipv6: + self._extend_with_ip6tables_filter(expected_calls_and_values, + filter_dump_ipv6) + tools.setup_mock_calls(self.execute, expected_calls_and_values) self.iptables.ipv4['filter'].add_chain('filter') @@ -176,12 +217,19 @@ tools.verify_mock_calls(self.execute, expected_calls_and_values) - def test_empty_chain_custom_binary_name(self): + def test_add_and_remove_chain_custom_binary_name(self): + self._test_add_and_remove_chain_custom_binary_name_helper(False) + + def test_add_and_remove_chain_custom_binary_name_with_ipv6(self): + self._test_add_and_remove_chain_custom_binary_name_helper(True) + + def _test_empty_chain_custom_binary_name_helper(self, use_ipv6): bn = ("abcdef" * 5)[:16] - self.iptables = (iptables_manager. 
- IptablesManager(root_helper=self.root_helper, - binary_name=bn)) + self.iptables = iptables_manager.IptablesManager( + root_helper=self.root_helper, + binary_name=bn, + use_ipv6=use_ipv6) self.execute = mock.patch.object(self.iptables, "execute").start() iptables_args = {'bn': bn} @@ -255,6 +303,10 @@ root_helper=self.root_helper), None), ] + if use_ipv6: + self._extend_with_ip6tables_filter(expected_calls_and_values, + filter_dump) + tools.setup_mock_calls(self.execute, expected_calls_and_values) self.iptables.ipv4['filter'].add_chain('filter') @@ -267,7 +319,18 @@ tools.verify_mock_calls(self.execute, expected_calls_and_values) - def test_add_and_remove_chain(self): + def test_empty_chain_custom_binary_name(self): + self._test_empty_chain_custom_binary_name_helper(False) + + def test_empty_chain_custom_binary_name_with_ipv6(self): + self._test_empty_chain_custom_binary_name_helper(True) + + def _test_add_and_remove_chain_helper(self, use_ipv6): + self.iptables = iptables_manager.IptablesManager( + root_helper=self.root_helper, + use_ipv6=use_ipv6) + self.execute = mock.patch.object(self.iptables, "execute").start() + filter_dump_mod = ('# Generated by iptables_manager\n' '*filter\n' ':neutron-filter-top - [0:0]\n' @@ -302,6 +365,10 @@ root_helper=self.root_helper), None), ] + if use_ipv6: + self._extend_with_ip6tables_filter(expected_calls_and_values, + FILTER_DUMP) + tools.setup_mock_calls(self.execute, expected_calls_and_values) self.iptables.ipv4['filter'].add_chain('filter') @@ -312,7 +379,18 @@ tools.verify_mock_calls(self.execute, expected_calls_and_values) - def test_add_filter_rule(self): + def test_add_and_remove_chain(self): + self._test_add_and_remove_chain_helper(False) + + def test_add_and_remove_chain_with_ipv6(self): + self._test_add_and_remove_chain_helper(True) + + def _test_add_filter_rule_helper(self, use_ipv6): + self.iptables = iptables_manager.IptablesManager( + root_helper=self.root_helper, + use_ipv6=use_ipv6) + self.execute = mock.patch.object(self.iptables, "execute").start() + filter_dump_mod = ('# Generated by iptables_manager\n' '*filter\n' ':neutron-filter-top - [0:0]\n' @@ -351,6 +429,10 @@ ), None), ] + if use_ipv6: + self._extend_with_ip6tables_filter(expected_calls_and_values, + FILTER_DUMP) + tools.setup_mock_calls(self.execute, expected_calls_and_values) self.iptables.ipv4['filter'].add_chain('filter') @@ -371,7 +453,18 @@ tools.verify_mock_calls(self.execute, expected_calls_and_values) - def test_rule_with_wrap_target(self): + def test_add_filter_rule(self): + self._test_add_filter_rule_helper(False) + + def test_add_filter_rule_with_ipv6(self): + self._test_add_filter_rule_helper(True) + + def _test_rule_with_wrap_target_helper(self, use_ipv6): + self.iptables = iptables_manager.IptablesManager( + root_helper=self.root_helper, + use_ipv6=use_ipv6) + self.execute = mock.patch.object(self.iptables, "execute").start() + name = '0123456789' * 5 wrap = "%s-%s" % (iptables_manager.binary_name, iptables_manager.get_chain_name(name)) @@ -415,6 +508,10 @@ root_helper=self.root_helper), None), ] + if use_ipv6: + self._extend_with_ip6tables_filter(expected_calls_and_values, + FILTER_DUMP) + tools.setup_mock_calls(self.execute, expected_calls_and_values) self.iptables.ipv4['filter'].add_chain(name) @@ -432,7 +529,18 @@ tools.verify_mock_calls(self.execute, expected_calls_and_values) - def test_add_nat_rule(self): + def test_rule_with_wrap_target(self): + self._test_rule_with_wrap_target_helper(False) + + def test_rule_with_wrap_target_with_ipv6(self): + 
self._test_rule_with_wrap_target_helper(True) + + def _test_add_nat_rule_helper(self, use_ipv6): + self.iptables = iptables_manager.IptablesManager( + root_helper=self.root_helper, + use_ipv6=use_ipv6) + self.execute = mock.patch.object(self.iptables, "execute").start() + nat_dump = ('# Generated by iptables_manager\n' '*nat\n' ':neutron-postrouting-bottom - [0:0]\n' @@ -490,6 +598,10 @@ root_helper=self.root_helper), None), ] + if use_ipv6: + self._extend_with_ip6tables_filter(expected_calls_and_values, + FILTER_DUMP) + tools.setup_mock_calls(self.execute, expected_calls_and_values) self.iptables.ipv4['nat'].add_chain('nat') @@ -514,6 +626,12 @@ tools.verify_mock_calls(self.execute, expected_calls_and_values) + def test_add_nat_rule(self): + self._test_add_nat_rule_helper(False) + + def test_add_nat_rule_with_ipv6(self): + self._test_add_nat_rule_helper(True) + def test_add_rule_to_a_nonexistent_chain(self): self.assertRaises(LookupError, self.iptables.ipv4['filter'].add_rule, 'nonexistent', '-j DROP') @@ -543,7 +661,14 @@ 'Attempted to get traffic counters of chain %s which ' 'does not exist', 'chain1') - def test_get_traffic_counters(self): + def _test_get_traffic_counters_helper(self, use_ipv6): + self.iptables = iptables_manager.IptablesManager( + root_helper=self.root_helper, + use_ipv6=use_ipv6) + self.execute = mock.patch.object(self.iptables, "execute").start() + exp_packets = 800 + exp_bytes = 131802 + iptables_dump = ( 'Chain OUTPUT (policy ACCEPT 400 packets, 65901 bytes)\n' ' pkts bytes target prot opt in out source' @@ -562,20 +687,38 @@ '-v', '-x'], root_helper=self.root_helper), ''), - (mock.call(['ip6tables', '-t', 'filter', '-L', 'OUTPUT', - '-n', '-v', '-x'], - root_helper=self.root_helper), - iptables_dump), ] + if use_ipv6: + expected_calls_and_values.append( + (mock.call(['ip6tables', '-t', 'filter', '-L', 'OUTPUT', + '-n', '-v', '-x'], + root_helper=self.root_helper), + iptables_dump)) + exp_packets *= 2 + exp_bytes *= 2 + tools.setup_mock_calls(self.execute, expected_calls_and_values) acc = self.iptables.get_traffic_counters('OUTPUT') - self.assertEqual(acc['pkts'], 1600) - self.assertEqual(acc['bytes'], 263604) + self.assertEqual(acc['pkts'], exp_packets) + self.assertEqual(acc['bytes'], exp_bytes) tools.verify_mock_calls(self.execute, expected_calls_and_values) - def test_get_traffic_counters_with_zero(self): + def test_get_traffic_counters(self): + self._test_get_traffic_counters_helper(False) + + def test_get_traffic_counters_with_ipv6(self): + self._test_get_traffic_counters_helper(True) + + def _test_get_traffic_counters_with_zero_helper(self, use_ipv6): + self.iptables = iptables_manager.IptablesManager( + root_helper=self.root_helper, + use_ipv6=use_ipv6) + self.execute = mock.patch.object(self.iptables, "execute").start() + exp_packets = 800 + exp_bytes = 131802 + iptables_dump = ( 'Chain OUTPUT (policy ACCEPT 400 packets, 65901 bytes)\n' ' pkts bytes target prot opt in out source' @@ -593,20 +736,31 @@ (mock.call(['iptables', '-t', 'nat', '-L', 'OUTPUT', '-n', '-v', '-x', '-Z'], root_helper=self.root_helper), - ''), - (mock.call(['ip6tables', '-t', 'filter', '-L', 'OUTPUT', - '-n', '-v', '-x', '-Z'], - root_helper=self.root_helper), - iptables_dump), + '') ] + if use_ipv6: + expected_calls_and_values.append( + (mock.call(['ip6tables', '-t', 'filter', '-L', 'OUTPUT', + '-n', '-v', '-x', '-Z'], + root_helper=self.root_helper), + iptables_dump)) + exp_packets *= 2 + exp_bytes *= 2 + tools.setup_mock_calls(self.execute, expected_calls_and_values) acc = 
self.iptables.get_traffic_counters('OUTPUT', zero=True) - self.assertEqual(acc['pkts'], 1600) - self.assertEqual(acc['bytes'], 263604) + self.assertEqual(acc['pkts'], exp_packets) + self.assertEqual(acc['bytes'], exp_bytes) tools.verify_mock_calls(self.execute, expected_calls_and_values) + def test_get_traffic_counters_with_zero(self): + self._test_get_traffic_counters_with_zero_helper(False) + + def test_get_traffic_counters_with_zero_with_ipv6(self): + self._test_get_traffic_counters_with_zero_helper(True) + def _test_find_last_entry(self, find_str): filter_list = [':neutron-filter-top - [0:0]', ':%(bn)s-FORWARD - [0:0]', diff -Nru neutron-2014.1.2/neutron/tests/unit/test_ipv6.py neutron-2014.1.3/neutron/tests/unit/test_ipv6.py --- neutron-2014.1.2/neutron/tests/unit/test_ipv6.py 2014-08-07 22:55:55.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_ipv6.py 2014-10-02 23:25:23.000000000 +0000 @@ -13,6 +13,8 @@ # License for the specific language governing permissions and limitations # under the License. +import mock + from neutron.common import ipv6_utils from neutron.tests import base @@ -48,3 +50,29 @@ prefix = 123 self.assertRaises(TypeError, lambda: ipv6_utils.get_ipv6_addr_by_EUI64(prefix, mac)) + + +class TestIsEnabled(base.BaseTestCase): + + def setUp(self): + super(TestIsEnabled, self).setUp() + ipv6_utils._IS_IPV6_ENABLED = None + mock_open = mock.patch("__builtin__.open").start() + self.mock_read = mock_open.return_value.__enter__.return_value.read + + def test_enabled(self): + self.mock_read.return_value = "0" + enabled = ipv6_utils.is_enabled() + self.assertTrue(enabled) + + def test_disabled(self): + self.mock_read.return_value = "1" + enabled = ipv6_utils.is_enabled() + self.assertFalse(enabled) + + def test_memoize(self): + self.mock_read.return_value = "0" + ipv6_utils.is_enabled() + enabled = ipv6_utils.is_enabled() + self.assertTrue(enabled) + self.mock_read.assert_called_once_with() diff -Nru neutron-2014.1.2/neutron/tests/unit/test_l3_agent.py neutron-2014.1.3/neutron/tests/unit/test_l3_agent.py --- neutron-2014.1.2/neutron/tests/unit/test_l3_agent.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_l3_agent.py 2014-10-02 23:25:23.000000000 +0000 @@ -495,6 +495,10 @@ ri, {'id': _uuid()}) self.assertEqual({}, fip_statuses) device.addr.delete.assert_called_once_with(4, '15.1.2.3/32') + self.mock_driver.delete_conntrack_state.assert_called_once_with( + root_helper=self.conf.root_helper, + namespace=ri.ns_name, + ip='15.1.2.3/32') def test_process_router_floating_ip_nat_rules_remove(self): ri = mock.MagicMock() diff -Nru neutron-2014.1.2/neutron/tests/unit/test_linux_dhcp.py neutron-2014.1.3/neutron/tests/unit/test_linux_dhcp.py --- neutron-2014.1.2/neutron/tests/unit/test_linux_dhcp.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_linux_dhcp.py 2014-10-02 23:25:23.000000000 +0000 @@ -706,7 +706,8 @@ self.assertTrue(mocks['_output_opts_file'].called) self.execute.assert_called_once_with(expected, root_helper='sudo', - check_exit_code=True) + check_exit_code=True, + extra_ok_codes=None) def test_spawn(self): self._test_spawn(['--conf-file=', '--domain=openstacklocal']) diff -Nru neutron-2014.1.2/neutron/tests/unit/test_linux_external_process.py neutron-2014.1.3/neutron/tests/unit/test_linux_external_process.py --- neutron-2014.1.2/neutron/tests/unit/test_linux_external_process.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_linux_external_process.py 2014-10-02 
23:25:23.000000000 +0000 @@ -46,7 +46,8 @@ name.assert_called_once_with(ensure_pids_dir=True) self.execute.assert_called_once_with(['the', 'cmd'], root_helper='sudo', - check_exit_code=True) + check_exit_code=True, + extra_ok_codes=None) def test_enable_with_namespace(self): callback = mock.Mock() diff -Nru neutron-2014.1.2/neutron/tests/unit/test_linux_ip_lib.py neutron-2014.1.3/neutron/tests/unit/test_linux_ip_lib.py --- neutron-2014.1.2/neutron/tests/unit/test_linux_ip_lib.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_linux_ip_lib.py 2014-10-02 23:25:23.000000000 +0000 @@ -766,7 +766,8 @@ execute.assert_called_once_with(['ip', 'netns', 'exec', 'ns', 'ip', 'link', 'list'], root_helper='sudo', - check_exit_code=True) + check_exit_code=True, + extra_ok_codes=None) def test_execute_env_var_prepend(self): self.parent.namespace = 'ns' @@ -776,7 +777,8 @@ execute.assert_called_once_with( ['ip', 'netns', 'exec', 'ns', 'env', 'FOO=1', 'BAR=2', 'ip', 'link', 'list'], - root_helper='sudo', check_exit_code=True) + root_helper='sudo', check_exit_code=True, + extra_ok_codes=None) class TestDeviceExists(base.BaseTestCase): diff -Nru neutron-2014.1.2/neutron/tests/unit/test_metadata_agent.py neutron-2014.1.3/neutron/tests/unit/test_metadata_agent.py --- neutron-2014.1.2/neutron/tests/unit/test_metadata_agent.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_metadata_agent.py 2014-10-02 23:25:23.000000000 +0000 @@ -210,6 +210,8 @@ return {'ports': list_ports_retval.pop(0)} self.qclient.return_value.list_ports.side_effect = mock_list_ports + self.qclient.return_value.get_auth_info.return_value = { + 'auth_token': None, 'endpoint_url': None} instance_id, tenant_id = self.handler._get_instance_and_tenant_id(req) new_qclient_call = mock.call( username=FakeConf.admin_user, @@ -223,7 +225,8 @@ ca_cert=FakeConf.auth_ca_cert, endpoint_url=None, endpoint_type=FakeConf.endpoint_type) - expected = [new_qclient_call] + + expected = [] if router_id: expected.extend([ @@ -231,14 +234,16 @@ mock.call().list_ports( device_id=router_id, device_owner=constants.DEVICE_OWNER_ROUTER_INTF - ) + ), + mock.call().get_auth_info() ]) expected.extend([ new_qclient_call, mock.call().list_ports( network_id=networks or tuple(), - fixed_ips=['ip_address=192.168.1.1']) + fixed_ips=['ip_address=192.168.1.1']), + mock.call().get_auth_info() ]) self.qclient.assert_has_calls(expected) @@ -313,6 +318,67 @@ (None, None) ) + def test_auth_info_cache(self): + router_id = 'the_id' + networks = ('net1',) + list_ports = [ + [{'network_id': 'net1'}], + [{'device_id': 'did', 'tenant_id': 'tid', 'network_id': 'net1'}]] + + def update_get_auth_info(*args, **kwargs): + self.qclient.return_value.get_auth_info.return_value = { + 'auth_token': 'token', 'endpoint_url': 'uri'} + return {'ports': list_ports.pop(0)} + + self.qclient.return_value.list_ports.side_effect = update_get_auth_info + + new_qclient_call = mock.call( + username=FakeConf.admin_user, + tenant_name=FakeConf.admin_tenant_name, + region_name=FakeConf.auth_region, + auth_url=FakeConf.auth_url, + password=FakeConf.admin_password, + auth_strategy=FakeConf.auth_strategy, + token=None, + insecure=FakeConf.auth_insecure, + ca_cert=FakeConf.auth_ca_cert, + endpoint_url=None, + endpoint_type=FakeConf.endpoint_type) + + cached_qclient_call = mock.call( + username=FakeConf.admin_user, + tenant_name=FakeConf.admin_tenant_name, + region_name=FakeConf.auth_region, + auth_url=FakeConf.auth_url, + password=FakeConf.admin_password, + 
auth_strategy=FakeConf.auth_strategy, + token='token', + insecure=FakeConf.auth_insecure, + ca_cert=FakeConf.auth_ca_cert, + endpoint_url='uri', + endpoint_type=FakeConf.endpoint_type) + + headers = {'X-Forwarded-For': '192.168.1.10', + 'X-Neutron-Router-ID': router_id} + req = mock.Mock(headers=headers) + self.handler._get_instance_and_tenant_id(req) + + expected = [ + new_qclient_call, + mock.call().list_ports( + device_id=router_id, + device_owner=constants.DEVICE_OWNER_ROUTER_INTF + ), + mock.call().get_auth_info(), + cached_qclient_call, + mock.call().list_ports( + network_id=networks or tuple(), + fixed_ips=['ip_address=192.168.1.10']), + mock.call().get_auth_info(), + ] + + self.qclient.assert_has_calls(expected) + def _proxy_request_test_helper(self, response_code=200, method='GET'): hdrs = {'X-Forwarded-For': '8.8.8.8'} body = 'body' diff -Nru neutron-2014.1.2/neutron/tests/unit/test_policy.py neutron-2014.1.3/neutron/tests/unit/test_policy.py --- neutron-2014.1.2/neutron/tests/unit/test_policy.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_policy.py 2014-10-02 23:25:23.000000000 +0000 @@ -24,6 +24,7 @@ import neutron from neutron.api.v2 import attributes +from neutron.common import constants as const from neutron.common import exceptions from neutron import context from neutron import manager @@ -53,12 +54,14 @@ action = "example:test" with open(tmpfilename, "w") as policyfile: policyfile.write("""{"example:test": ""}""") + policy.init() policy.enforce(self.context, action, self.target) with open(tmpfilename, "w") as policyfile: policyfile.write("""{"example:test": "!"}""") # NOTE(vish): reset stored policy cache so we don't have to # sleep(1) policy._POLICY_CACHE = {} + policy.init() self.assertRaises(exceptions.PolicyNotAuthorized, policy.enforce, self.context, @@ -106,11 +109,13 @@ result = policy.check(self.context, action, self.target) self.assertEqual(result, False) - def test_check_if_exists_non_existent_action_raises(self): + def test_check_non_existent_action(self): action = "example:idonotexist" - self.assertRaises(exceptions.PolicyRuleNotFound, - policy.check_if_exists, - self.context, action, self.target) + result_1 = policy.check(self.context, action, self.target) + self.assertFalse(result_1) + result_2 = policy.check(self.context, action, self.target, + might_not_exist=True) + self.assertTrue(result_2) def test_enforce_good_action(self): action = "example:allowed" @@ -280,9 +285,11 @@ self.addCleanup(self.manager_patcher.stop) def _test_action_on_attr(self, context, action, attr, value, - exception=None): + exception=None, **kwargs): action = "%s_network" % action target = {'tenant_id': 'the_owner', attr: value} + if kwargs: + target.update(kwargs) if exception: self.assertRaises(exception, policy.enforce, context, action, target) @@ -291,10 +298,10 @@ self.assertEqual(result, True) def _test_nonadmin_action_on_attr(self, action, attr, value, - exception=None): + exception=None, **kwargs): user_context = context.Context('', "user", roles=['user']) self._test_action_on_attr(user_context, action, attr, - value, exception) + value, exception, **kwargs) def test_nonadmin_write_on_private_fails(self): self._test_nonadmin_action_on_attr('create', 'shared', False, @@ -311,9 +318,11 @@ def test_nonadmin_read_on_shared_succeeds(self): self._test_nonadmin_action_on_attr('get', 'shared', True) - def _test_enforce_adminonly_attribute(self, action): + def _test_enforce_adminonly_attribute(self, action, **kwargs): admin_context = 
context.get_admin_context() target = {'shared': True} + if kwargs: + target.update(kwargs) result = policy.enforce(admin_context, action, target) self.assertEqual(result, True) @@ -321,7 +330,14 @@ self._test_enforce_adminonly_attribute('create_network') def test_enforce_adminonly_attribute_update(self): - self._test_enforce_adminonly_attribute('update_network') + kwargs = {const.ATTRIBUTES_TO_UPDATE: ['shared']} + self._test_enforce_adminonly_attribute('update_network', **kwargs) + + def test_reset_adminonly_attr_to_default_fails(self): + kwargs = {const.ATTRIBUTES_TO_UPDATE: ['shared']} + self._test_nonadmin_action_on_attr('update', 'shared', False, + exceptions.PolicyNotAuthorized, + **kwargs) def test_enforce_adminonly_attribute_no_context_is_admin_policy(self): del self.rules[policy.ADMIN_CTX_POLICY] @@ -471,6 +487,7 @@ # Trigger a policy with rule admin_or_owner action = "create_network" target = {'tenant_id': 'fake'} + policy.init() self.assertRaises(exceptions.PolicyCheckError, policy.enforce, self.context, action, target) diff -Nru neutron-2014.1.2/neutron/tests/unit/test_security_groups_rpc.py neutron-2014.1.3/neutron/tests/unit/test_security_groups_rpc.py --- neutron-2014.1.2/neutron/tests/unit/test_security_groups_rpc.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/test_security_groups_rpc.py 2014-10-02 23:25:23.000000000 +0000 @@ -1467,6 +1467,9 @@ self.agent.init_firewall(defer_refresh_firewall=defer_refresh_firewall) self.iptables = self.agent.firewall.iptables + # TODO(jlibosva) Get rid of mocking iptables execute and mock out + # firewall instead + self.iptables.use_ipv6 = True self.iptables_execute = mock.patch.object(self.iptables, "execute").start() self.iptables_execute_return_values = [] diff -Nru neutron-2014.1.2/neutron/tests/unit/vmware/extensions/test_addresspairs.py neutron-2014.1.3/neutron/tests/unit/vmware/extensions/test_addresspairs.py --- neutron-2014.1.2/neutron/tests/unit/vmware/extensions/test_addresspairs.py 2014-08-07 22:55:55.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/vmware/extensions/test_addresspairs.py 2014-10-02 23:25:18.000000000 +0000 @@ -13,10 +13,19 @@ # License for the specific language governing permissions and limitations # under the License. +from neutron.extensions import allowedaddresspairs as addr_pair from neutron.tests.unit import test_extension_allowedaddresspairs as ext_pairs from neutron.tests.unit.vmware import test_nsx_plugin class TestAllowedAddressPairs(test_nsx_plugin.NsxPluginV2TestCase, ext_pairs.TestAllowedAddressPairs): - pass + + # TODO(arosen): move to ext_pairs.TestAllowedAddressPairs once all + # plugins do this correctly. 
+ def test_create_port_no_allowed_address_pairs(self): + with self.network() as net: + res = self._create_port(self.fmt, net['network']['id']) + port = self.deserialize(self.fmt, res) + self.assertEqual(port['port'][addr_pair.ADDRESS_PAIRS], []) + self._delete('ports', port['port']['id']) diff -Nru neutron-2014.1.2/neutron/tests/unit/vmware/nsxlib/test_router.py neutron-2014.1.3/neutron/tests/unit/vmware/nsxlib/test_router.py --- neutron-2014.1.2/neutron/tests/unit/vmware/nsxlib/test_router.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/vmware/nsxlib/test_router.py 2014-10-02 23:25:23.000000000 +0000 @@ -21,6 +21,7 @@ from neutron.common import exceptions from neutron.openstack.common import uuidutils from neutron.plugins.vmware.api_client import exception as api_exc +from neutron.plugins.vmware.api_client import version as version_module from neutron.plugins.vmware.api_client.version import Version from neutron.plugins.vmware.common import exceptions as nsx_exc from neutron.plugins.vmware.common import utils @@ -920,3 +921,29 @@ routerlib.delete_nat_rules_by_match, self.fake_cluster, lrouter['uuid'], 'SomeWeirdType', 1, 1) + + def test_delete_nat_rules_by_match_len_mismatch_does_not_raise(self): + lrouter = self._prepare_nat_rules_for_delete_tests() + rules = routerlib.query_nat_rules(self.fake_cluster, lrouter['uuid']) + self.assertEqual(len(rules), 3) + deleted_rules = routerlib.delete_nat_rules_by_match( + self.fake_cluster, lrouter['uuid'], + 'DestinationNatRule', + max_num_expected=1, min_num_expected=1, + raise_on_len_mismatch=False, + destination_ip_addresses='99.99.99.99') + self.assertEqual(0, deleted_rules) + # add an extra rule to emulate a duplicate one + with mock.patch.object(self.fake_cluster.api_client, + 'get_version', + new=lambda: version_module.Version('2.0')): + routerlib.create_lrouter_snat_rule( + self.fake_cluster, lrouter['uuid'], + '10.0.0.2', '10.0.0.2', order=220, + match_criteria={'source_ip_addresses': '192.168.0.0/24'}) + deleted_rules_2 = routerlib.delete_nat_rules_by_match( + self.fake_cluster, lrouter['uuid'], 'SourceNatRule', + min_num_expected=1, max_num_expected=1, + raise_on_len_mismatch=False, + source_ip_addresses='192.168.0.0/24') + self.assertEqual(2, deleted_rules_2) diff -Nru neutron-2014.1.2/neutron/tests/unit/vmware/test_nsx_plugin.py neutron-2014.1.3/neutron/tests/unit/vmware/test_nsx_plugin.py --- neutron-2014.1.2/neutron/tests/unit/vmware/test_nsx_plugin.py 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/neutron/tests/unit/vmware/test_nsx_plugin.py 2014-10-02 23:25:23.000000000 +0000 @@ -14,6 +14,7 @@ # limitations under the License. 
import contextlib +import uuid import mock import netaddr @@ -943,17 +944,22 @@ with self.port() as p: private_sub = {'subnet': {'id': p['port']['fixed_ips'][0]['subnet_id']}} - with self.floatingip_no_assoc(private_sub) as fip: - port_id = p['port']['id'] - body = self._update('floatingips', fip['floatingip']['id'], - {'floatingip': {'port_id': port_id}}) - self.assertEqual(body['floatingip']['port_id'], port_id) - # Disassociate - body = self._update('floatingips', fip['floatingip']['id'], - {'floatingip': {'port_id': None}}) - body = self._show('floatingips', fip['floatingip']['id']) - self.assertIsNone(body['floatingip']['port_id']) - self.assertIsNone(body['floatingip']['fixed_ip_address']) + plugin = manager.NeutronManager.get_plugin() + with mock.patch.object(plugin, 'notify_routers_updated') as notify: + with self.floatingip_no_assoc(private_sub) as fip: + port_id = p['port']['id'] + body = self._update('floatingips', fip['floatingip']['id'], + {'floatingip': {'port_id': port_id}}) + self.assertEqual(body['floatingip']['port_id'], port_id) + # Disassociate + body = self._update('floatingips', fip['floatingip']['id'], + {'floatingip': {'port_id': None}}) + body = self._show('floatingips', fip['floatingip']['id']) + self.assertIsNone(body['floatingip']['port_id']) + self.assertIsNone(body['floatingip']['fixed_ip_address']) + + # check that notification was not requested + self.assertFalse(notify.called) def test_create_router_maintenance_returns_503(self): with self._create_l3_ext_network() as net: @@ -1145,12 +1151,21 @@ res = req.get_response(self.ext_api) self.assertEqual(res.status_int, 200) - def test_remove_router_interface_not_in_nsx(self): + def _test_remove_router_interface_nsx_out_of_sync(self, unsync_action): + # Create external network and subnet + ext_net_id = self._create_network_and_subnet('1.1.1.0/24', True)[0] # Create internal network and subnet int_sub_id = self._create_network_and_subnet('10.0.0.0/24')[1] res = self._create_router('json', 'tenant') router = self.deserialize('json', res) - # Add interface to router (needed to generate NAT rule) + # Set gateway and add interface to router (needed to generate NAT rule) + req = self.new_update_request( + 'routers', + {'router': {'external_gateway_info': + {'network_id': ext_net_id}}}, + router['router']['id']) + res = req.get_response(self.ext_api) + self.assertEqual(res.status_int, 200) req = self.new_action_request( 'routers', {'subnet_id': int_sub_id}, @@ -1158,7 +1173,7 @@ "add_router_interface") res = req.get_response(self.ext_api) self.assertEqual(res.status_int, 200) - self.fc._fake_lrouter_dict.clear() + unsync_action() req = self.new_action_request( 'routers', {'subnet_id': int_sub_id}, @@ -1167,6 +1182,27 @@ res = req.get_response(self.ext_api) self.assertEqual(res.status_int, 200) + def test_remove_router_interface_not_in_nsx(self): + + def unsync_action(): + self.fc._fake_lrouter_dict.clear() + self.fc._fake_lrouter_nat_dict.clear() + + self._test_remove_router_interface_nsx_out_of_sync(unsync_action) + + def test_remove_router_interface_nat_rule_not_in_nsx(self): + self._test_remove_router_interface_nsx_out_of_sync( + self.fc._fake_lrouter_nat_dict.clear) + + def test_remove_router_interface_duplicate_nat_rules_in_nsx(self): + + def unsync_action(): + # duplicate every entry in the nat rule dict + for (_rule_id, rule) in self.fc._fake_lrouter_nat_dict.items(): + self.fc._fake_lrouter_nat_dict[uuid.uuid4()] = rule + + self._test_remove_router_interface_nsx_out_of_sync(unsync_action) + def 
test_update_router_not_in_nsx(self): res = self._create_router('json', 'tenant') router = self.deserialize('json', res) diff -Nru neutron-2014.1.2/neutron.egg-info/PKG-INFO neutron-2014.1.3/neutron.egg-info/PKG-INFO --- neutron-2014.1.2/neutron.egg-info/PKG-INFO 2014-08-07 22:59:03.000000000 +0000 +++ neutron-2014.1.3/neutron.egg-info/PKG-INFO 2014-10-02 23:26:26.000000000 +0000 @@ -1,6 +1,6 @@ Metadata-Version: 1.1 Name: neutron -Version: 2014.1.2 +Version: 2014.1.3 Summary: OpenStack Networking Home-page: http://www.openstack.org/ Author: OpenStack diff -Nru neutron-2014.1.2/neutron.egg-info/requires.txt neutron-2014.1.3/neutron.egg-info/requires.txt --- neutron-2014.1.2/neutron.egg-info/requires.txt 2014-08-07 22:59:03.000000000 +0000 +++ neutron-2014.1.3/neutron.egg-info/requires.txt 2014-10-02 23:26:26.000000000 +0000 @@ -2,7 +2,6 @@ Paste PasteDeploy>=1.5.0 Routes>=1.12.3,!=2.0 -amqplib>=0.6.1 anyjson>=0.3.3 argparse Babel>=1.3 diff -Nru neutron-2014.1.2/neutron.egg-info/SOURCES.txt neutron-2014.1.3/neutron.egg-info/SOURCES.txt --- neutron-2014.1.2/neutron.egg-info/SOURCES.txt 2014-08-07 22:59:03.000000000 +0000 +++ neutron-2014.1.3/neutron.egg-info/SOURCES.txt 2014-10-02 23:26:26.000000000 +0000 @@ -1083,6 +1083,10 @@ neutron/tests/functional/agent/linux/__init__.py neutron/tests/functional/agent/linux/test_async_process.py neutron/tests/functional/agent/linux/test_ovsdb_monitor.py +neutron/tests/functional/contrib/README +neutron/tests/functional/contrib/filters.template +neutron/tests/functional/contrib/gate_hook.sh +neutron/tests/functional/contrib/post_test_hook.sh neutron/tests/unit/__init__.py neutron/tests/unit/_test_extension_portbindings.py neutron/tests/unit/_test_rootwrap_exec.py @@ -1106,6 +1110,7 @@ neutron/tests/unit/test_config.py neutron/tests/unit/test_db_migration.py neutron/tests/unit/test_db_plugin.py +neutron/tests/unit/test_db_plugin_level.py neutron/tests/unit/test_db_rpc_base.py neutron/tests/unit/test_debug_commands.py neutron/tests/unit/test_dhcp_agent.py diff -Nru neutron-2014.1.2/neutron.egg-info/top_level.txt neutron-2014.1.3/neutron.egg-info/top_level.txt --- neutron-2014.1.2/neutron.egg-info/top_level.txt 2014-08-07 22:59:03.000000000 +0000 +++ neutron-2014.1.3/neutron.egg-info/top_level.txt 2014-10-02 23:26:26.000000000 +0000 @@ -1,2 +1,2 @@ -quantum neutron +quantum diff -Nru neutron-2014.1.2/PKG-INFO neutron-2014.1.3/PKG-INFO --- neutron-2014.1.2/PKG-INFO 2014-08-07 22:59:03.000000000 +0000 +++ neutron-2014.1.3/PKG-INFO 2014-10-02 23:26:26.000000000 +0000 @@ -1,6 +1,6 @@ Metadata-Version: 1.1 Name: neutron -Version: 2014.1.2 +Version: 2014.1.3 Summary: OpenStack Networking Home-page: http://www.openstack.org/ Author: OpenStack diff -Nru neutron-2014.1.2/requirements.txt neutron-2014.1.3/requirements.txt --- neutron-2014.1.2/requirements.txt 2014-08-07 22:56:02.000000000 +0000 +++ neutron-2014.1.3/requirements.txt 2014-10-02 23:25:23.000000000 +0000 @@ -3,7 +3,6 @@ Paste PasteDeploy>=1.5.0 Routes>=1.12.3,!=2.0 -amqplib>=0.6.1 anyjson>=0.3.3 argparse Babel>=1.3 diff -Nru neutron-2014.1.2/setup.cfg neutron-2014.1.3/setup.cfg --- neutron-2014.1.2/setup.cfg 2014-08-07 22:59:03.000000000 +0000 +++ neutron-2014.1.3/setup.cfg 2014-10-02 23:26:26.000000000 +0000 @@ -1,6 +1,6 @@ [metadata] name = neutron -version = 2014.1.2 +version = 2014.1.3 summary = OpenStack Networking description-file = README.rst
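
Editorial note: the TestIsEnabled cases added to test_ipv6.py above pin down the behaviour expected of ipv6_utils.is_enabled(): the kernel flag is read once, a value of "0" means IPv6 is available, and the result is memoized in _IS_IPV6_ENABLED so later callers skip the file read. A minimal sketch of that pattern, assuming the conventional /proc/sys/net/ipv6/conf/default/disable_ipv6 path (the real module may read a different file), could look like this:

    import os

    _IS_IPV6_ENABLED = None


    def is_enabled():
        """Report whether the kernel exposes IPv6, caching the first answer."""
        global _IS_IPV6_ENABLED
        if _IS_IPV6_ENABLED is None:
            # Assumed path; "0" in this file means IPv6 is *not* disabled.
            disable_path = '/proc/sys/net/ipv6/conf/default/disable_ipv6'
            if os.path.exists(disable_path):
                with open(disable_path) as f:
                    _IS_IPV6_ENABLED = f.read().strip() == '0'
            else:
                _IS_IPV6_ENABLED = False
        return _IS_IPV6_ENABLED

A check of this shape is presumably what feeds the use_ipv6 switch exercised by the new iptables_manager test helpers above, where the ip6tables-save/ip6tables-restore expectations are only added when the flag is set.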