diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/AUTHORS neutron-taas-2.0.0/AUTHORS --- neutron-taas-1.0.1~git20170522.e15cbf3/AUTHORS 2017-05-22 09:31:38.000000000 +0000 +++ neutron-taas-2.0.0/AUTHORS 1970-01-01 00:00:00.000000000 +0000 @@ -1,28 +0,0 @@ -Abhishek Raut -Andreas Jaeger -Anh Tran -Anil Rao -Dongcan Ye -Fawad Khaliq -Gary Kotton -Henry Gessau -Henry Gessau -James Page -Jeremy Stanley -Kazuhiro Suzuki -Monty Taylor -Ondřej Nový -Reedip -Reedip -Reedip Banerjee -Rui Zang -Trevor McCasland -YAMAMOTO Takashi -Yatin Kumbhare -Yoichiro Iura -cheng -ghanshyam -reedip -ricolin -venkatamahesh -vnyyad diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/ChangeLog neutron-taas-2.0.0/ChangeLog --- neutron-taas-1.0.1~git20170522.e15cbf3/ChangeLog 2017-05-22 09:31:38.000000000 +0000 +++ neutron-taas-2.0.0/ChangeLog 1970-01-01 00:00:00.000000000 +0000 @@ -1,131 +0,0 @@ -CHANGES -======= - -* devstack: Update after systemd change -* Fix breakage due to neutron-lib update -* Use get_random_mac from neutron-lib -* Updated from global requirements -* [Fix gate]Update test requirement -* Use neutron-lib's context module -* Remove unused logging import -* Updated from global requirements - -1.0.0 ------ - -* Prepare for using standard python tests -* devstack: Stop using Q_PLUGIN_EXTRA_CONF_FILES -* Added link for modindex -* Add DB migration milestone for Ocata -* Enable placement on gate -* Switch to decorators.idempotent_id -* Add tap-as-a-service spec -* Remove unnecessary executable mode of some source files -* Updated from global requirements -* Updated from global requirements -* Use ExtensionDescriptor from neutron-lib -* Use tox_install convention -* Remove PLURALS -* Use the new neutron manager -* Import model_base from neutron-lib rather than neutron -* Include alembic migrations in module -* Updated from global requirements -* Updated from global requirements -* Add status field in the TaaS API -* Use neutron-lib modules for neutron_taas -* Disable VLAN id checks 
in create_tap_flow and delete_tap_flow -* Fix i18n translation imports -* Tag the alembic migration revisions for Newton -* Fix Bug 1623457 -* Switch to internal _i18n pattern, as per oslo_i18n guidelines -* Updated from global requirements -* Rename DB columns: tenant -> project -* Enable DeprecationWarning in test environments -* Bring models in sync with migrations, add test -* Fix a few db vs migration mismatch -* UT: Fix "RuntimeError: stop called on unstarted patcher" -* presentations.rst: Add another TaaS related Austin presentation -* Fix can't delete tap service on agent site -* Add Python 3.5 classifier and venv -* Add more unit tests to test DB mixin class for TaaS -* Add testresources to test-requirements -* Make get_plugin_type classmethod -* Provide support for tap-*-update -* Updated from global requirements -* Fix validation of direction on TapFlow -* Add unit tests to test DB mixin class for TaaS -* Fix typo in class name -* Updated from global requirements -* Support service framework to separate plugin logic -* Remove network-id from tap-service -* Use call_and_ignore_notfound_exc directly -* doc: reference OpenStack Austin presentation -* CLI: Turn comments into docstring -* Fix tap-flow CLI to take service parameter -* Fix help text for tap flow command -* Remove new= argument from create_connection -* Initialize alembic branches for TaaS repo -* devstackgaterc: Remove stale reference to n-obj -* Dont pass tenant-id in request unless given -* Improve exception messages -* Fix API end point for tapservice/tapflow -* Updated from global requirements -* devstackgaterc: Disable unrelated services -* Update tempest plugin after tempest-lib deprecation -* Fix Gate tests -* Avoid using _get_tenant_id_for_create -* Add report to coverage -* Deprecated tox -downloadcache option removed -* Change stackforge to openstack -* Fix in install and api guide -* Remove old TaaS CLI repo, update API reference -* Re-implement TaaS CLI as a NeutronClient extension 
-* Fix db error when running python34 unit tests -* Fix TempestPlugin function signature -* Delete python bytecode before every test run -* Create specs directory -* devstackgaterc: Specify tests to run on gate -* Add a few dumb api tests -* Add TaaS services client for tempest -* Add a skeleton tempest plugin -* Remove old install instructions and script -* Add a minimum documentation for devstack plugin -* Add devstack plugin -* unit tests: Use the correct base class -* Add alembic migration -* Add an entry point for the service plugin -* Add entry points for cli commands -* Fix taas_init_ops -* INSTALL.rst: Tweak the description of ML2 extension driver -* Updated from global requirements -* Remove repos.py replacement -* Move constants.TAAS to our local module for now -* Update .gitreview for new namespace -* Add tests for the rest of plugin methods -* Add some more tests -* tox.ini: Fix cover by giving the source directory explicitly -* tox.ini: Fix a typo in cover env -* Add a few more tests -* setup.cfg: Drop python 2.6 and 3.3 -* Start writing tests -* Move topics definitions to our own module for now -* Python3 support improvement -* test-requirements: Remove discover -* Remove some commented out code -* Misc documentation improvements -* Change ignore-errors to ignore_errors -* Improved install script and updated the install readme file -* Removing the reference shell script as now its part of the taas linux ovs driver -* Deleting neutron configuration file as its no longer needed in the dependencies folder -* Edited the READEM.rst file to refer API Reference document -* Adding the API Reference guide for TaaS -* Bug fix in the __init__.py file -* Fixed misssing path prefix in taas_plugin.py -* Changing all oslo.config to oslo_config -* This is the initial commit of TaaS sources code -* Changed tox.ini file to reflect only py27 and pep8 -* Adding an install script to install TaaS -* Changing the README.rst file for tap-as-a-service -* Changed the Readme 
file and Added INSTALL.rst file -* Initial Cookiecutter Commit -* Added .gitreview diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/debian/changelog neutron-taas-2.0.0/debian/changelog --- neutron-taas-1.0.1~git20170522.e15cbf3/debian/changelog 2017-05-23 11:16:17.000000000 +0000 +++ neutron-taas-2.0.0/debian/changelog 2017-08-31 02:44:49.000000000 +0000 @@ -1,8 +1,18 @@ -neutron-taas (1.0.1~git20170522.e15cbf3-0ubuntu1~cloud0) xenial-pike; urgency=medium +neutron-taas (2.0.0-0ubuntu1~cloud0) xenial-pike; urgency=medium * New upstream release for the Ubuntu Cloud Archive. - -- Openstack Ubuntu Testing Bot Tue, 23 May 2017 11:16:17 +0000 + -- Openstack Ubuntu Testing Bot Thu, 31 Aug 2017 02:44:49 +0000 + +neutron-taas (2.0.0-0ubuntu1) artful; urgency=medium + + * New upstream release for OpenStack Pike. + * d/control: Align (Build-)Depends with upstream. + * d/neutron-taas-openvswitch-agent.init.in, d/python-neutron-taas.install: + Dropped init file and install of /usr/bin as neutron-taas-openvswitch-agent + has been dropped in switch to supporting L2 agent extensions. 
+ + -- Corey Bryant Wed, 30 Aug 2017 17:08:37 -0400 neutron-taas (1.0.1~git20170522.e15cbf3-0ubuntu1) artful; urgency=medium diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/debian/control neutron-taas-2.0.0/debian/control --- neutron-taas-1.0.1~git20170522.e15cbf3/debian/control 2017-05-23 09:18:20.000000000 +0000 +++ neutron-taas-2.0.0/debian/control 2017-08-30 21:08:37.000000000 +0000 @@ -8,11 +8,11 @@ dh-python, openstack-pkg-tools, python-all, - python-pbr, + python-pbr (>= 2.0.0), python-setuptools, python-sphinx, Build-Depends-Indep: python-babel (>= 2.3.4), - python-coverage (>= 3.6), + python-coverage (>= 4.0), python-hacking, python-neutron (>= 2:9.0.0~), python-os-testr (>= 0.8.0), diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/debian/neutron-taas-openvswitch-agent.init.in neutron-taas-2.0.0/debian/neutron-taas-openvswitch-agent.init.in --- neutron-taas-1.0.1~git20170522.e15cbf3/debian/neutron-taas-openvswitch-agent.init.in 2017-05-23 09:18:20.000000000 +0000 +++ neutron-taas-2.0.0/debian/neutron-taas-openvswitch-agent.init.in 1970-01-01 00:00:00.000000000 +0000 @@ -1,19 +0,0 @@ -#!/bin/sh -### BEGIN INIT INFO -# Provides: neutron-taas-openvswitch-agent -# Required-Start: $network $local_fs $remote_fs $syslog -# Required-Stop: $remote_fs openvswitch-switch -# Should-Start: mysql postgresql rabbitmq-server keystone neutron-openvswitch-agent -# Should-Stop: mysql postgresql rabbitmq-server keystone -# Default-Start: 2 3 4 5 -# Default-Stop: 0 1 6 -# Short-Description: Neutron Tap-as-a-Service Open vSwitch Agent -# Description: Open vSwitch agent for OpenStack Neutron TaaS extension -### END INIT INFO - - -DESC="Openstack Neutron - TaaS Open vSwitch Agent" -PROJECT_NAME=neutron -NAME=${PROJECT_NAME}-taas-openvswitch-agent -DAEMON=/usr/bin/neutron-taas-openvswitch-agent -DAEMON_ARGS="--config-file=/etc/neutron/taas.ini" diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/debian/python-neutron-taas.install neutron-taas-2.0.0/debian/python-neutron-taas.install 
--- neutron-taas-1.0.1~git20170522.e15cbf3/debian/python-neutron-taas.install 2017-05-23 09:18:20.000000000 +0000 +++ neutron-taas-2.0.0/debian/python-neutron-taas.install 2017-08-30 21:08:37.000000000 +0000 @@ -1,3 +1,2 @@ -/usr/bin /usr/lib/python2* etc/taas_plugin.ini /etc/neutron diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/devstack/devstackgaterc neutron-taas-2.0.0/devstack/devstackgaterc --- neutron-taas-1.0.1~git20170522.e15cbf3/devstack/devstackgaterc 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/devstack/devstackgaterc 2017-08-10 19:02:30.000000000 +0000 @@ -22,6 +22,7 @@ OVERRIDE_ENABLED_SERVICES=key,mysql,rabbit OVERRIDE_ENABLED_SERVICES+=,g-api,g-reg OVERRIDE_ENABLED_SERVICES+=,n-api,n-cond,n-cpu,n-crt,n-sch,placement-api +OVERRIDE_ENABLED_SERVICES+=,n-api-meta OVERRIDE_ENABLED_SERVICES+=,q-agt,q-dhcp,q-l3,q-meta,q-metering,q-svc,quantum OVERRIDE_ENABLED_SERVICES+=,taas,taas_openvswitch_agent OVERRIDE_ENABLED_SERVICES+=,tempest,dstat diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/devstack/plugin.sh neutron-taas-2.0.0/devstack/plugin.sh --- neutron-taas-1.0.1~git20170522.e15cbf3/devstack/plugin.sh 2017-05-22 09:22:02.000000000 +0000 +++ neutron-taas-2.0.0/devstack/plugin.sh 2017-08-10 19:02:30.000000000 +0000 @@ -31,20 +31,6 @@ _neutron_service_plugin_class_add taas } -function configure_taas_openvswitch_agent { - local conf=$TAAS_OVS_AGENT_CONF_FILE - - cp $TAAS_PLUGIN_PATH/etc/taas.ini $conf - iniset $conf taas driver neutron_taas.services.taas.drivers.linux.ovs_taas.OvsTaasDriver - iniset $conf taas enabled True - iniset $conf taas vlan_range_start 3000 - iniset $conf taas vlan_range_end 3500 -} - -function start_taas_openvswitch_agent { - run_process taas_openvswitch_agent "$TAAS_OVS_AGENT_BINARY --config-file $NEUTRON_CONF --config-file $TAAS_OVS_AGENT_CONF_FILE" -} - if is_service_enabled taas; then if [[ "$1" == "stack" ]]; then if [[ "$2" == "pre-install" ]]; then @@ -67,18 +53,20 @@ fi fi -if is_service_enabled 
taas_openvswitch_agent; then +if is_service_enabled q-agt neutron-agent; then if [[ "$1" == "stack" ]]; then if [[ "$2" == "pre-install" ]]; then : elif [[ "$2" == "install" ]]; then install_taas elif [[ "$2" == "post-config" ]]; then - configure_taas_openvswitch_agent + if is_service_enabled q-agt neutron-agent; then + source $NEUTRON_DIR/devstack/lib/l2_agent + plugin_agent_add_l2_agent_extension taas + configure_l2_agent + fi elif [[ "$2" == "extra" ]]; then - # NOTE(yamamoto): This agent should be run after ovs-agent - # sets up its bridges. (bug 1515104) - start_taas_openvswitch_agent + : fi elif [[ "$1" == "unstack" ]]; then : diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/devstack/README.rst neutron-taas-2.0.0/devstack/README.rst --- neutron-taas-1.0.1~git20170522.e15cbf3/devstack/README.rst 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/devstack/README.rst 2017-08-10 19:02:30.000000000 +0000 @@ -7,5 +7,4 @@ [[local|localrc]] enable_plugin tap-as-a-service https://github.com/openstack/tap-as-a-service enable_service taas - enable_service taas_openvswitch_agent TAAS_SERVICE_DRIVER=TAAS:TAAS:neutron_taas.services.taas.service_drivers.taas_rpc.TaasRpcDriver:default diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/doc/source/specs/index.rst neutron-taas-2.0.0/doc/source/specs/index.rst --- neutron-taas-1.0.1~git20170522.e15cbf3/doc/source/specs/index.rst 1970-01-01 00:00:00.000000000 +0000 +++ neutron-taas-2.0.0/doc/source/specs/index.rst 2017-08-10 19:02:30.000000000 +0000 @@ -0,0 +1,14 @@ +.. tap-as-a-service specs documentation index + +============== +Specifications +============== + +Mitaka specs +============ + +.. 
toctree:: + :glob: + :maxdepth: 1 + + mitaka/* diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/doc/source/specs/mitaka/tap-as-a-service.rst neutron-taas-2.0.0/doc/source/specs/mitaka/tap-as-a-service.rst --- neutron-taas-1.0.1~git20170522.e15cbf3/doc/source/specs/mitaka/tap-as-a-service.rst 1970-01-01 00:00:00.000000000 +0000 +++ neutron-taas-2.0.0/doc/source/specs/mitaka/tap-as-a-service.rst 2017-08-10 19:02:30.000000000 +0000 @@ -0,0 +1,474 @@ +.. + This work is licensed under a Creative Commons Attribution 3.0 Unported + License. + + http://creativecommons.org/licenses/by/3.0/legalcode + +============================ +Tap-as-a-Service for Neutron +============================ + + +Launchpad blueprint: + + https://blueprints.launchpad.net/neutron/+spec/tap-as-a-service + +This spec explains an extension for the port mirroring functionality. Port +mirroring involves sending a copy of packets ingressing and/or egressing one +port (where ingress means entering a VM and egress means leaving a VM) to +another port, (usually different from the packet's original destination). +A port could be attached to a VM or a networking resource like router. + +While the blueprint describes the functionality of mirroring Neutron ports as +an extension to the port object, the spec proposes to offer port mirroring as a +service, which will enable more advanced use-cases (e.g. intrusion detection) +to be deployed. + +The proposed port mirroring capability shall be introduced in Neutron as a +service called "Tap-as-a-Service". + +Problem description +=================== + +Neutron currently does not support the functionality of port mirroring for +tenant networks. This feature will greatly benefit tenants and admins, who +want to debug their virtual networks and gain visibility into their VMs by +monitoring and analyzing the network traffic associated with them (e.g. IDS). 
+ +This spec focuses on mirroring traffic from one Neutron port to another; +future versions may address mirroring from a Neutron port to an arbitrary +interface (not managed by Neutron) on a compute host or the network controller. + +Different usage scenarios for the service are listed below: + + 1. Tapping/mirroring network traffic ingressing and/or egressing a particular + Neutron port. + 2. Tapping/mirroring all network traffic on a tenant network. + 3. Tenant or admin will be able to do tap/traffic mirroring based on a + policy rule and set destination as a Neutron port, which can be linked + to a virtual machine as normal Nova operations or to a physical machine + via l2-gateway functionality. + 4. Admin will be able to do packet level network debugging for the virtual + network. + 5. Provide a way for real time analytics based on different criteria, like + tenants, ports, traffic types (policy) etc. + +Note that some of the above use-cases are not covered by this proposal, at +least for the first step. + + +Proposed change +=============== + +The proposal is to introduce a new Neutron service plugin, called +"Tap-as-a-Service", +which provides tapping (port-mirroring) capability for Neutron networks; +tenant or provider networks. This service will be modeled similar to other +Neutron services such as the firewall, load-balancer, L3-router etc. + +The proposed service will allow the tenants to create a tap service instance +to which they can add Neutron ports that need to be mirrored by creating tap +flows. The tap service itself will be a Neutron port, which will be the +destination port for the mirrored traffic. + +The destination Tap-as-a-Service Neutron port should be created beforehand on +a network owned by the tenant who is requesting the service. The ports to be +mirrored that are added to the service must be owned by the same tenant who +created the tap service instance. 
Even on a shared network, a tenant will only
+be allowed to mirror the traffic from ports that they own on the shared
+network and not traffic from ports that they do not own on the shared network.
+
+The ports owned by the tenant that are mirrored can be on networks other
+than the network on which the tap service port is created. This allows the
+tenant to mirror traffic from any port it owns on a network on to the same
+Tap-as-a-Service Neutron port.
+
+The tenant can launch a VM specifying the tap destination port for the VM
+interface (--nic port-id=tap_port_uuid), thus receiving mirrored traffic for
+further processing (dependent on use case) on that VM.
+
+The following would be the work flow for using this service from a tenant's
+point of view:
+
+ 0. Create a Neutron port which will be used as the destination port.
+    This can be a part of an ordinary VM launch.
+
+ 1. Create a tap service instance, specifying the Neutron port.
+
+ 2. If you haven't yet, launch a monitoring or traffic analysis VM and
+    connect it to the destination port for the tap service instance.
+
+ 3. Associate Neutron ports with a tap service instance if/when they need
+    to be monitored.
+
+ 4. Disassociate Neutron ports from a tap service instance if/when they no
+    longer need to be monitored.
+
+ 5. Destroy a tap-service instance when it is no longer needed.
+
+ 6. Delete the destination port when it is no longer needed.
+
+Please note that the normal work flow of launching a VM is not affected while
+using TaaS.
+
+
+Alternatives
+------------
+
+As an alternative to introducing port mirroring functionality under Neutron
+services, it could be added as an extension to the existing Neutron v2 APIs.
+
+
+Data model impact
+-----------------
+
+Tap-as-a-Service introduces the following data models into Neutron as database
+schemas.
+
+1. tap_service
+
++-------------+--------+----------+-----------+---------------+-------------------------+
+| Attribute   | Type   | Access   | Default   | Validation/   | Description             |
+| Name        |        | (CRUD)   | Value     | Conversion    |                         |
++=============+========+==========+===========+===============+=========================+
+| id          | UUID   | R, all   | generated | N/A           | UUID of the tap         |
+|             |        |          |           |               | service instance.       |
++-------------+--------+----------+-----------+---------------+-------------------------+
+| project_id  | String | CR, all  | Requester | N/A           | ID of the               |
+|             |        |          |           |               | project creating        |
+|             |        |          |           |               | the service             |
++-------------+--------+----------+-----------+---------------+-------------------------+
+| name        | String | CRU, all | Empty     | N/A           | Name for the service    |
+|             |        |          |           |               | instance.               |
++-------------+--------+----------+-----------+---------------+-------------------------+
+| description | String | CRU, all | Empty     | N/A           | Description of the      |
+|             |        |          |           |               | service instance.       |
++-------------+--------+----------+-----------+---------------+-------------------------+
+| port_id     | UUID   | CR, all  | N/A       | UUID of a     | An existing Neutron port|
+|             |        |          |           | valid Neutron | to which traffic will   |
+|             |        |          |           | port          | be mirrored             |
++-------------+--------+----------+-----------+---------------+-------------------------+
+| status      | String | R, all   | N/A       | N/A           | The operation status of |
+|             |        |          |           |               | the resource            |
+|             |        |          |           |               | (ACTIVE, PENDING_foo,   |
+|             |        |          |           |               | ERROR, ...)             |
++-------------+--------+----------+-----------+---------------+-------------------------+
+
+2. tap_flow
+
++----------------+--------+----------+-----------+---------------+-------------------------+
+| Attribute      | Type   | Access   | Default   | Validation/   | Description             |
+| Name           |        | (CRUD)   | Value     | Conversion    |                         |
++================+========+==========+===========+===============+=========================+
+| id             | UUID   | R, all   | generated | N/A           | UUID of the             |
+|                |        |          |           |               | tap flow instance.      |
++----------------+--------+----------+-----------+---------------+-------------------------+
+| name           | String | CRU, all | Empty     | N/A           | Name for the tap flow   |
+|                |        |          |           |               | instance.               |
++----------------+--------+----------+-----------+---------------+-------------------------+
+| description    | String | CRU, all | Empty     | N/A           | Description of the      |
+|                |        |          |           |               | tap flow instance.      |
++----------------+--------+----------+-----------+---------------+-------------------------+
+| tap_service_id | UUID   | CR, all  | N/A       | Valid tap     | UUID of the tap         |
+|                |        |          |           | service UUID  | service instance.       |
++----------------+--------+----------+-----------+---------------+-------------------------+
+| source_port    | UUID   | CR, all  | N/A       | UUID of a     | UUID of the Neutron     |
+|                |        |          |           | valid Neutron | port that needs to be   |
+|                |        |          |           | port          | mirrored                |
++----------------+--------+----------+-----------+---------------+-------------------------+
+| direction      | ENUM   | CR, all  | BOTH      |               | Whether to mirror the   |
+|                | (IN,   |          |           |               | traffic leaving or      |
+|                | OUT,   |          |           |               | arriving at the         |
+|                | BOTH)  |          |           |               | source port             |
+|                |        |          |           |               | IN: Network -> VM       |
+|                |        |          |           |               | OUT: VM -> Network      |
++----------------+--------+----------+-----------+---------------+-------------------------+
+| status         | String | R, all   | N/A       | N/A           | The operation status of |
+|                |        |          |           |               | the resource            |
+|                |        |          |           |               | (ACTIVE, PENDING_foo,   |
+|                |        |          |           |               | ERROR, ...)             |
++----------------+--------+----------+-----------+---------------+-------------------------+
+
+
+REST API impact
+---------------
+
+Tap-as-a-Service shall be offered over the RESTful API interface under
+the following namespace:
+
+http://wiki.openstack.org/Neutron/TaaS/API_1.0
+
+The resource attribute map for TaaS is provided below:
+
+.. code-block:: python
+
+    direction_enum = ['IN', 'OUT', 'BOTH']
+
+    RESOURCE_ATTRIBUTE_MAP = {
+        'tap_service': {
+            'id': {'allow_post': False, 'allow_put': False,
+                   'validate': {'type:uuid': None}, 'is_visible': True,
+                   'primary_key': True},
+            'project_id': {'allow_post': True, 'allow_put': False,
+                           'validate': {'type:string': None},
+                           'required_by_policy': True, 'is_visible': True},
+            'name': {'allow_post': True, 'allow_put': True,
+                     'validate': {'type:string': None},
+                     'is_visible': True, 'default': ''},
+            'description': {'allow_post': True, 'allow_put': True,
+                            'validate': {'type:string': None},
+                            'is_visible': True, 'default': ''},
+            'port_id': {'allow_post': True, 'allow_put': False,
+                        'validate': {'type:uuid': None},
+                        'is_visible': True},
+            'status': {'allow_post': False, 'allow_put': False,
+                       'is_visible': True},
+        },
+        'tap_flow': {
+            'id': {'allow_post': False, 'allow_put': False,
+                   'validate': {'type:uuid': None}, 'is_visible': True,
+                   'primary_key': True},
+            'name': {'allow_post': True, 'allow_put': True,
+                     'validate': {'type:string': None},
+                     'is_visible': True, 'default': ''},
+            'description': {'allow_post': True, 'allow_put': True,
+                            'validate': {'type:string': None},
+                            'is_visible': True, 'default': ''},
+            'tap_service_id': {'allow_post': True, 'allow_put': False,
+                               'validate': {'type:uuid': None},
+                               'required_by_policy': True, 'is_visible': True},
+            'source_port': {'allow_post': True, 'allow_put': False,
+                            'validate': {'type:uuid': None},
+                            'required_by_policy': True, 'is_visible': True},
+            'direction': {'allow_post': True, 'allow_put': False,
+                          'validate': {'type:string': direction_enum},
+                          'is_visible': True},
+            'status': {'allow_post': False, 'allow_put': False,
+                       'is_visible': True},
+        }
+    }
+
+
+Security impact
+---------------
+
+A TaaS instance comprises a collection of source Neutron ports (whose
+ingress and/or egress traffic are being mirrored) and a destination Neutron
+port (where the mirrored traffic is received).
Security Groups will be
+handled differently for these two classes of ports, as described below:
+
+Destination Side:
+
+Ingress Security Group filters, including the filter that prevents MAC-address
+spoofing, will be disabled for the destination Neutron port. This will ensure
+that all of the mirrored packets received at this port are able to reach the
+monitoring VM attached to it.
+
+Source Side:
+
+Ideally it would be nice to mirror all packets entering and/or leaving the
+virtual NICs associated with the VMs that are being monitored. This means
+capturing ingress traffic after it passes the inbound Security Group filters
+and capturing egress traffic before it passes the outbound Security Group
+filters.
+
+However, due to the manner in which Security Groups are currently implemented
+in OpenStack (i.e. north of the Open vSwitch ports, using Linux iptables), this
+is not possible because port mirroring support resides inside Open vSwitch.
+Therefore, in the first version of TaaS, Security Groups will be ignored for
+the source Neutron ports; this effectively translates into capturing ingress
+traffic before it passes the inbound Security Group filters and capturing
+egress traffic after it passes the outbound Security Group filters. In other
+words, port mirroring will be implemented for all packets entering and/or
+leaving the Open vSwitch ports associated with the respective virtual NICs of
+the VMs that are being monitored.
+
+There is a separate effort that has been initiated to implement Security Groups
+within Open vSwitch. A later version of TaaS may make use of this feature, if
+and when it is available, so that we can realize the ideal behavior described
+above. It should be noted that such an enhancement should not require a change
+to the TaaS data model.
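The v1 capture points described above can be summarized in a tiny executable model (a conceptual sketch only, not TaaS code; the function and variable names are invented for illustration). It shows the practical consequence for operators: an ingress packet that the inbound Security Group filter drops still shows up in the mirror, while an egress packet dropped by the outbound filter never does.

```python
# Conceptual model of where the first TaaS version taps traffic relative to
# Security Group (SG) filtering, per the text above. Not actual TaaS code.

def process_ingress(packet, sg_allows, mirror):
    """Ingress (network -> VM): the mirror copy is taken at the OVS port,
    i.e. BEFORE the inbound SG filter is applied."""
    mirror.append(packet)                          # mirrored unconditionally
    return packet if sg_allows(packet) else None   # SG may still drop it

def process_egress(packet, sg_allows, mirror):
    """Egress (VM -> network): the outbound SG filter runs north of OVS,
    so only packets that pass it reach the OVS port and get mirrored."""
    if not sg_allows(packet):
        return None                                # dropped before the tap
    mirror.append(packet)                          # mirrored AFTER filtering
    return packet

mirror = []
deny_all = lambda pkt: False

# An ingress packet blocked by SGs is still mirrored in v1:
delivered = process_ingress("probe-from-outside", deny_all, mirror)
assert delivered is None and mirror == ["probe-from-outside"]

# An egress packet blocked by SGs is never mirrored:
assert process_egress("blocked-reply", deny_all, mirror) is None
assert "blocked-reply" not in mirror
```

Under the "ideal" behavior sketched in the spec (SGs inside Open vSwitch), the ingress tap would move to after the `sg_allows` check instead.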
+
+Keeping data privacy aspects in mind, and to prevent the data center admin
+from snooping on tenants' network traffic without their knowledge, the admin
+shall not be allowed to mirror traffic from any ports that belong to tenants.
+Hence creation of a 'Tap_Flow' is only permitted on ports that are owned by
+the creating tenant.
+
+If an admin wants to monitor a tenant's traffic, the admin will have to join
+that tenant as a member. This will ensure that the tenant is aware that the
+admin might be monitoring their traffic.
+
+
+Notifications impact
+--------------------
+
+A set of new RPC calls for communication between the TaaS server and agents
+is required and will be put in place as part of the reference implementation.
+
+
+IPv6 impact
+-----------
+
+None
+
+
+Other end user impact
+---------------------
+
+Users will be able to invoke and access the TaaS APIs through
+python-neutronclient.
+
+
+Performance Impact
+------------------
+
+The performance impact of mirroring traffic needs to be examined and
+quantified. The impact of a tenant potentially mirroring all traffic from
+all ports could be large and needs more examination.
+
+Some alternatives to reduce the amount of mirrored traffic are listed below:
+
+ 1. Rate limiting on the ports being mirrored.
+ 2. Filters to select certain flows ingressing/egressing a port to be
+    mirrored.
+ 3. Having a quota on the number of TaaS Flows that can be defined by the
+    tenant.
+
+
+Other deployer impact
+---------------------
+
+Configurations for the service plugin will be added later.
+
+A new bridge (br-tap), mentioned in the Implementation section, will be added.
+
+
+Developer impact
+----------------
+
+This will be a new extension API, and will not affect the existing API.
+
+
+Community impact
+----------------
+
+None
+
+
+Follow up work
+--------------
+
+Going forward, TaaS would be incorporated with Service Insertion [2]_ similar
+to other existing services like FWaaS, LBaaS, and VPNaaS.
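Until the Service Insertion rework mentioned above, the end-user surface is the tap_service/tap_flow REST resources defined earlier in this spec. For illustration, request bodies a client (such as python-neutronclient) might send could be assembled like this. This is a hedged sketch: the attribute names come from the spec's resource attribute map, but the request envelope keys and endpoint paths named in the comments are assumptions, not confirmed API details.

```python
# Illustrative only: builds JSON bodies matching the tap_service/tap_flow
# attributes defined in this spec. The 'tap_service'/'tap_flow' wrapper keys
# and the endpoint paths mentioned below are assumptions for illustration.
import json

DIRECTIONS = ('IN', 'OUT', 'BOTH')  # direction_enum from the attribute map

def tap_service_body(project_id, port_id, name='', description=''):
    """Body for a hypothetical POST creating a tap service."""
    return {'tap_service': {'project_id': project_id, 'port_id': port_id,
                            'name': name, 'description': description}}

def tap_flow_body(tap_service_id, source_port, direction='BOTH',
                  name='', description=''):
    """Body for a hypothetical POST creating a tap flow."""
    if direction not in DIRECTIONS:
        raise ValueError('direction must be one of %s' % (DIRECTIONS,))
    return {'tap_flow': {'tap_service_id': tap_service_id,
                         'source_port': source_port, 'direction': direction,
                         'name': name, 'description': description}}

body = tap_flow_body('ts-uuid', 'port-uuid', direction='IN', name='tf1')
print(json.dumps(body, sort_keys=True))
```

The `id` and `status` attributes are omitted on purpose: the attribute map marks them read-only (`allow_post: False`), so they appear only in responses.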
+
+While integrating Tap-as-a-Service with Service Insertion, the key change
+needed in the data model would be the removal of 'network_id' and 'port_id'
+from the 'Tap_Service' data model.
+
+Some policy-based filtering rules would help alleviate the potential
+performance issues.
+
+We might want to ensure exclusive use of the destination port.
+
+We might want to create the destination port automatically on tap-service
+creation, rather than specifying an existing port. In that case, network_id
+should be taken as a parameter for tap-service creation, instead of port_id.
+
+We might want to allow the destination port to be used for purposes other than
+just launching a VM on it; for example, the port could be used as an
+'external-port' [1]_ to get the mirrored data out from the tenant virtual
+network on a device or network not managed by OpenStack.
+
+We might want to introduce a way to tap all of the traffic for a specified
+network.
+
+We need a mechanism to coordinate usage of various resources (e.g. OVS flows,
+tunnel IDs, VLAN IDs) with other agent extensions.
+
+
+Implementation
+==============
+
+The reference implementation for TaaS will be based on Open vSwitch. In
+addition to the existing integration (br-int) and tunnel (br-tun) bridges, a
+separate tap bridge (br-tap) will be used. The tap bridge provides nice
+isolation for supporting more complex TaaS features (e.g. filtering mirrored
+packets) in the future.
+
+The tapping operation will be realized by adding higher-priority flows in
+br-int, which duplicate the ingress and/or egress packets associated with
+specific ports (belonging to the VMs being monitored) and send the copies to
+br-tap. Packets sent to br-tap will also be tagged with an appropriate VLAN id
+corresponding to the associated TaaS instance (in the initial release these
+VLAN ids may be reserved from highest to lowest; in later releases it should be
+coordinated with the Neutron service).
The original packets will continue to be
+processed normally, so as not to affect the traffic patterns of the VMs being
+monitored.
+
+Flows will be placed in br-tap to determine if the mirrored traffic should be
+sent to br-tun or not. If the destination port of a TaaS instance happens to
+reside on the same host as a source port, packets from that source port will be
+returned to br-int; otherwise they will be forwarded to br-tun for delivery to
+a remote node.
+
+Packets arriving at br-tun from br-tap will get routed to the destination ports
+of the appropriate TaaS instances using the same GRE or VXLAN tunnel network
+that is used to pass regular traffic between hosts. Separate tunnel IDs will be
+used to isolate different TaaS instances from one another and from the normal
+(non-mirrored) traffic passing through the bridge. This will ensure that the
+proper action can be taken on the receiving end of a tunnel so that mirrored
+traffic is sent to br-tap instead of br-int. Special flows will be used in
+br-tun to automatically learn the location of the destination ports of TaaS
+instances.
+
+Packets entering br-tap from br-tun will be forwarded to br-int only if the
+destination port of the corresponding TaaS instance resides on the same host.
+Finally, packets entering br-int from br-tap will be delivered to the
+appropriate destination port after the TaaS instance VLAN id is replaced with
+the VLAN id for the port.
+
+
+Assignee(s)
+-----------
+
+* Vinay Yadhav
+
+
+Work Items
+----------
+
+* TaaS API and data model implementation.
+* TaaS OVS driver.
+* OVS agent changes for port mirroring.
+
+
+Dependencies
+============
+
+None
+
+
+Testing
+=======
+
+* Unit tests to be added.
+* Functional tests in Tempest to be added.
+* API tests in Tempest to be added.
+
+
+Documentation Impact
+====================
+
+* User documentation needs to be updated.
+* Developer documentation needs to be updated.
+
+
+References
+==========
+
+..
[1] External port + https://review.openstack.org/#/c/87825 + +.. [2] Service base and insertion + https://review.openstack.org/#/c/93128 + +.. [3] NFV unaddressed interfaces + https://review.openstack.org/#/c/97715/ diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/.gitignore neutron-taas-2.0.0/.gitignore --- neutron-taas-1.0.1~git20170522.e15cbf3/.gitignore 1970-01-01 00:00:00.000000000 +0000 +++ neutron-taas-2.0.0/.gitignore 2017-08-10 19:02:30.000000000 +0000 @@ -0,0 +1,54 @@ +*.py[cod] + +# C extensions +*.so + +# Packages +*.egg +*.egg-info +dist +build +.eggs +eggs +parts +bin +var +sdist +develop-eggs +.installed.cfg +lib +lib64 + +# Installer logs +pip-log.txt + +# Unit test / coverage reports +.coverage +.tox +nosetests.xml +.testrepository +.venv + +# Translations +*.mo + +# Mr Developer +.mr.developer.cfg +.project +.pydevproject + +# Complexity +output/*.html +output/*/index.html + +# Sphinx +doc/build + +# pbr generates these +AUTHORS +ChangeLog + +# Editors +*~ +.*.swp +.*sw? 
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/.gitreview neutron-taas-2.0.0/.gitreview --- neutron-taas-1.0.1~git20170522.e15cbf3/.gitreview 1970-01-01 00:00:00.000000000 +0000 +++ neutron-taas-2.0.0/.gitreview 2017-08-10 19:02:30.000000000 +0000 @@ -0,0 +1,4 @@ +[gerrit] +host=review.openstack.org +port=29418 +project=openstack/tap-as-a-service.git diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/db/migration/alembic_migration/versions/CONTRACT_HEAD neutron-taas-2.0.0/neutron_taas/db/migration/alembic_migration/versions/CONTRACT_HEAD --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/db/migration/alembic_migration/versions/CONTRACT_HEAD 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/db/migration/alembic_migration/versions/CONTRACT_HEAD 2017-08-10 19:02:30.000000000 +0000 @@ -1 +1 @@ -4086b3cffc01 +bac61f603e39 diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/db/migration/alembic_migration/versions/pike/contract/bac61f603e39_alter_tap_id_associations_to_support_tap_id_reuse.py neutron-taas-2.0.0/neutron_taas/db/migration/alembic_migration/versions/pike/contract/bac61f603e39_alter_tap_id_associations_to_support_tap_id_reuse.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/db/migration/alembic_migration/versions/pike/contract/bac61f603e39_alter_tap_id_associations_to_support_tap_id_reuse.py 1970-01-01 00:00:00.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/db/migration/alembic_migration/versions/pike/contract/bac61f603e39_alter_tap_id_associations_to_support_tap_id_reuse.py 2017-08-10 19:02:30.000000000 +0000 @@ -0,0 +1,50 @@ +# Copyright 2016-17 +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +"""Alter TapIdAssociations to support tap id reuse + +Revision ID: bac61f603e39 +Revises: 4086b3cffc01 +Create Date: 2016-07-27 09:31:54.200165 + +""" + +# revision identifiers, used by Alembic. +revision = 'bac61f603e39' +down_revision = '4086b3cffc01' + +from alembic import op +from sqlalchemy.engine import reflection + +import sqlalchemy as sa + +TABLE_NAME = 'tap_id_associations' + + +def upgrade(): + inspector = reflection.Inspector.from_engine(op.get_bind()) + fk_constraints = inspector.get_foreign_keys(TABLE_NAME) + for fk in fk_constraints: + op.drop_constraint(fk['name'], TABLE_NAME, type_='foreignkey') + + op.create_foreign_key('fk_tap_id_assoc_tap_service', TABLE_NAME, + 'tap_services', ['tap_service_id'], ['id'], + ondelete='SET NULL') + + op.alter_column(TABLE_NAME, 'taas_id', autoincrement=False, + existing_type=sa.INTEGER, nullable=False) + op.alter_column(TABLE_NAME, 'tap_service_id', + existing_type=sa.String(36), nullable=True) + op.create_unique_constraint('unique_taas_id', TABLE_NAME, + ['taas_id']) diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/db/taas_db.py neutron-taas-2.0.0/neutron_taas/db/taas_db.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/db/taas_db.py 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/db/taas_db.py 2017-08-10 19:02:30.000000000 +0000 @@ -23,10 +23,10 @@ from neutron_lib.db import model_base from neutron_lib.plugins import directory from neutron_taas.extensions import taas +from oslo_config import cfg from oslo_log import log as logging from oslo_utils import 
uuidutils - LOG = logging.getLogger(__name__) @@ -68,12 +68,13 @@ __tablename__ = 'tap_id_associations' tap_service_id = sa.Column(sa.String(36), sa.ForeignKey("tap_services.id", - ondelete='CASCADE')) - taas_id = sa.Column(sa.Integer, primary_key=True, autoincrement=True) + ondelete='SET NULL'), + nullable=True) + taas_id = sa.Column(sa.Integer, primary_key=True, unique=True) tap_service = orm.relationship( TapService, backref=orm.backref("tap_service_id", - lazy="joined", cascade="delete"), + lazy="joined"), primaryjoin='TapService.id==TapIdAssociation.tap_service_id') @@ -147,14 +148,47 @@ return self._make_tap_service_dict(tap_service_db) + def _rebuild_taas_id_allocation_range(self, context): + query = context.session.query( + TapIdAssociation).all() + + allocated_taas_id_list = [_q.taas_id for _q in query] + first_taas_id = cfg.CONF.taas.vlan_range_start + # range() excludes its end value, matching vlan_range_end + last_taas_id = cfg.CONF.taas.vlan_range_end + all_taas_id_set = set(range(first_taas_id, last_taas_id)) + valid_taas_id_set = all_taas_id_set - set(allocated_taas_id_list) + + for _id in valid_taas_id_set: + # new taas id + context.session.add(TapIdAssociation( + taas_id=_id)) + + def _allocate_taas_id_with_tap_service_id(self, context, tap_service_id): + query = context.session.query(TapIdAssociation).filter_by( + tap_service_id=None).first() + if not query: + self._rebuild_taas_id_allocation_range(context) + # try again + query = context.session.query(TapIdAssociation).filter_by( + tap_service_id=None).first() + + if query: + query.update({"tap_service_id": tap_service_id}) + return query + # not found + raise taas.TapServiceLimitReached() + def create_tap_id_association(self, context, tap_service_id): LOG.debug("create_tap_id_association() called") # create the TapIdAssociation object with context.session.begin(subtransactions=True): - tap_id_association_db = TapIdAssociation( - tap_service_id=tap_service_id - ) - context.session.add(tap_id_association_db) + # allocate TaaS id.
+ # if a conflict happens, db.DBDuplicateEntry will be raised and + # the request will be retried by the neutron controller framework, + # so we just make sure the TapIdAssociation taas_id field is unique + tap_id_association_db = self._allocate_taas_id_with_tap_service_id( + context, tap_service_id) return self._make_tap_id_association_dict(tap_id_association_db) diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/_i18n.py neutron-taas-2.0.0/neutron_taas/_i18n.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/_i18n.py 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/_i18n.py 2017-08-10 19:02:30.000000000 +0000 @@ -26,16 +26,6 @@ # The plural translation function using the name "_P" _P = _translators.plural_form -# Translators for log levels. -# -# The abbreviated names are meant to reflect the usual use of a short -# name like '_'. The "L" is for "log" and the other letter comes from -# the level. -_LI = _translators.log_info -_LW = _translators.log_warning -_LE = _translators.log_error -_LC = _translators.log_critical - def get_available_languages(): return oslo_i18n.get_available_languages(DOMAIN) diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/__init__.py neutron-taas-2.0.0/neutron_taas/__init__.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/__init__.py 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/__init__.py 2017-08-10 19:02:30.000000000 +0000 @@ -1,23 +0,0 @@ -# Copyright 2011 OpenStack Foundation -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the -# License for the specific language governing permissions and limitations -# under the License. - -import gettext -import six - - -if six.PY2: - gettext.install('neutron', unicode=1) -else: - gettext.install('neutron') diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/agents/extensions/taas.py neutron-taas-2.0.0/neutron_taas/services/taas/agents/extensions/taas.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/agents/extensions/taas.py 1970-01-01 00:00:00.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/services/taas/agents/extensions/taas.py 2017-08-10 19:02:30.000000000 +0000 @@ -0,0 +1,91 @@ +# Copyright 2017 FUJITSU LABORATORIES LTD. +# Copyright 2016 NEC Technologies India Pvt. Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import abc +import six + +from neutron.agent.l2 import l2_agent_extension + +from neutron_taas.services.taas.agents.ovs import taas_ovs_agent + +from oslo_config import cfg +from oslo_log import log as logging + +LOG = logging.getLogger(__name__) + + +OPTS = [ + cfg.IntOpt( + 'taas_agent_periodic_interval', + default=5, + help=_('Seconds between periodic task runs') + ) +] +cfg.CONF.register_opts(OPTS) + + +@six.add_metaclass(abc.ABCMeta) +class TaasAgentDriver(object): + """Defines stable abstract interface for TaaS Agent Driver.""" + + @abc.abstractmethod + def initialize(self): + """Perform TaaS agent driver initialization.""" + + def consume_api(self, agent_api): + """Consume the AgentAPI instance from the TaasAgentExtension class + + :param agent_api: An instance of an agent specific API + """ + + @abc.abstractmethod + def create_tap_service(self, tap_service): + """Create a Tap Service request in driver.""" + + @abc.abstractmethod + def create_tap_flow(self, tap_flow): + """Create a tap flow request in driver.""" + + @abc.abstractmethod + def delete_tap_service(self, tap_service): + """Delete a Tap Service request in driver.""" + + @abc.abstractmethod + def delete_tap_flow(self, tap_flow): + """Delete a tap flow request in driver.""" + + +class TaasAgentExtension(l2_agent_extension.L2AgentExtension): + + def initialize(self, connection, driver_type): + """Initialize agent extension.""" + self.taas_agent = taas_ovs_agent.TaasOvsAgentRpcCallback( + cfg.CONF, driver_type) + self.taas_agent.consume_api(self.agent_api) + self.taas_agent.initialize() + + def consume_api(self, agent_api): + """Receive neutron agent API object + + Allows an extension to gain access to resources internal to the + neutron agent and otherwise unavailable to the extension.
+ """ + self.agent_api = agent_api + + def handle_port(self, context, port): + pass + + def delete_port(self, context, port): + pass diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/agents/ovs/agent.py neutron-taas-2.0.0/neutron_taas/services/taas/agents/ovs/agent.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/agents/ovs/agent.py 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/services/taas/agents/ovs/agent.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,75 +0,0 @@ -# Copyright (C) 2015 Ericsson AB -# Copyright (c) 2015 Gigamon -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import sys - -import eventlet -eventlet.monkey_patch() - -from oslo_config import cfg -from oslo_service import service - -from neutron.agent.common import config -from neutron.common import config as common_config -from neutron.common import rpc as n_rpc - -from neutron_taas._i18n import _ -from neutron_taas.common import topics -from neutron_taas.services.taas.agents.ovs import taas_ovs_agent - - -OPTS = [ - cfg.IntOpt( - 'taas_agent_periodic_interval', - default=5, - help=_('Seconds between periodic task runs') - ) -] - - -class TaaSOVSAgentService(n_rpc.Service): - def start(self): - super(TaaSOVSAgentService, self).start() - self.tg.add_timer( - cfg.CONF.taas_agent_periodic_interval, - self.manager.periodic_tasks, - None, - None - ) - - -def main(): - # Load the configuration parameters. 
- cfg.CONF.register_opts(OPTS) - config.register_root_helper(cfg.CONF) - common_config.init(sys.argv[1:]) - config.setup_logging() - - # Set up RPC - mgr = taas_ovs_agent.TaasOvsAgentRpcCallback(cfg.CONF) - endpoints = [mgr] - conn = n_rpc.create_connection() - conn.create_consumer(topics.TAAS_AGENT, endpoints, fanout=False) - conn.consume_in_threads() - - svc = TaaSOVSAgentService( - host=cfg.CONF.host, - topic=topics.TAAS_PLUGIN, - manager=mgr - ) - service.launch(cfg.CONF, svc).wait() - -if __name__ == '__main__': - main() diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/agents/ovs/taas_ovs_agent.py neutron-taas-2.0.0/neutron_taas/services/taas/agents/ovs/taas_ovs_agent.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/agents/ovs/taas_ovs_agent.py 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/services/taas/agents/ovs/taas_ovs_agent.py 2017-08-10 19:02:30.000000000 +0000 @@ -14,15 +14,15 @@ # under the License. 
-from neutron.agent.common import config +from neutron.common import rpc as n_rpc +from neutron import manager -from neutron_taas._i18n import _ from neutron_taas.common import topics from neutron_taas.services.taas.agents import taas_agent_api as api from oslo_config import cfg from oslo_log import log as logging -from oslo_utils import importutils +from oslo_service import service LOG = logging.getLogger(__name__) @@ -37,30 +37,25 @@ class TaasOvsAgentRpcCallback(api.TaasAgentRpcCallbackMixin): - def __init__(self, conf): - + def __init__(self, conf, driver_type): LOG.debug("TaaS OVS Agent initialize called") self.conf = conf - taas_driver_class_path = cfg.CONF.taas.driver - self.taas_enabled = cfg.CONF.taas.enabled + self.driver_type = driver_type - self.root_helper = config.get_root_helper(conf) + super(TaasOvsAgentRpcCallback, self).__init__() - try: - self.taas_driver = importutils.import_object( - taas_driver_class_path, self.root_helper) - LOG.debug("TaaS Driver Loaded: '%s'", taas_driver_class_path) - except ImportError: - msg = _('Error importing TaaS device driver: %s') - raise ImportError(msg % taas_driver_class_path) + def initialize(self): + self.taas_driver = manager.NeutronManager.load_class_for_provider( + 'neutron_taas.taas.agent_drivers', self.driver_type)() + self.taas_driver.consume_api(self.agent_api) + self.taas_driver.initialize() - # setup RPC to msg taas plugin - self.taas_plugin_rpc = TaasOvsPluginApi(topics.TAAS_PLUGIN, - conf.host) - super(TaasOvsAgentRpcCallback, self).__init__() + self._taas_rpc_setup() + TaasOvsAgentService(self).start() - return + def consume_api(self, agent_api): + self.agent_api = agent_api def _invoke_driver_for_plugin_api(self, context, args, func_name): LOG.debug("Invoking Driver for %(func_name)s from agent", @@ -117,10 +112,33 @@ tap_flow_msg, 'delete_tap_flow') - def periodic_tasks(self, argv): + def _taas_rpc_setup(self): + # setup RPC to msg taas plugin + self.taas_plugin_rpc = TaasOvsPluginApi( + 
topics.TAAS_PLUGIN, self.conf.host) + + endpoints = [self] + conn = n_rpc.create_connection() + conn.create_consumer(topics.TAAS_AGENT, endpoints, fanout=False) + conn.consume_in_threads() + + def periodic_tasks(self): # # Regenerate the flow in br-tun's TAAS_SEND_FLOOD table # to ensure all existing tunnel ports are included. # self.taas_driver.update_tunnel_flood_flow() - pass + + +class TaasOvsAgentService(service.Service): + def __init__(self, driver): + super(TaasOvsAgentService, self).__init__() + self.driver = driver + + def start(self): + super(TaasOvsAgentService, self).start() + self.tg.add_timer( + int(cfg.CONF.taas_agent_periodic_interval), + self.driver.periodic_tasks, + None + ) diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/agents/taas_agent_api.py neutron-taas-2.0.0/neutron_taas/services/taas/agents/taas_agent_api.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/agents/taas_agent_api.py 2017-05-22 09:22:02.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/services/taas/agents/taas_agent_api.py 2017-08-10 19:02:30.000000000 +0000 @@ -13,6 +13,7 @@ # License for the specific language governing permissions and limitations # under the License. +from neutron_taas._i18n import _ from oslo_config import cfg import oslo_messaging as messaging @@ -49,6 +50,14 @@ def __init__(self): super(TaasAgentRpcCallbackMixin, self).__init__() + def consume_api(self, agent_api): + """Receive neutron agent API object + + Allows an extension to gain access to resources internal to the + neutron agent and otherwise unavailable to the extension. 
+ """ + self.agent_api = agent_api + def create_tap_service(self, context, tap_service, host): """Handle RPC cast from plugin to create a tap service.""" pass diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/drivers/linux/ovs_taas.py neutron-taas-2.0.0/neutron_taas/services/taas/drivers/linux/ovs_taas.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/drivers/linux/ovs_taas.py 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/services/taas/drivers/linux/ovs_taas.py 2017-08-10 19:02:30.000000000 +0000 @@ -16,10 +16,12 @@ from neutron.agent.common import ovs_lib from neutron.agent.linux import utils +from neutron.conf.agent import common # from neutron.plugins.openvswitch.common import constants as ovs_consts from neutron.plugins.ml2.drivers.openvswitch.agent.common import constants \ as ovs_consts -from neutron_taas.services.taas.drivers import taas_base +from neutron_taas.services.taas.agents.extensions import taas as taas_base +from oslo_config import cfg from oslo_log import log as logging import ovs_constants as taas_ovs_consts import ovs_utils as taas_ovs_utils @@ -34,14 +36,16 @@ super(OVSBridge_tap_extension, self).__init__(br_name) -class OvsTaasDriver(taas_base.TaasDriverBase): - def __init__(self, root_helper): +class OvsTaasDriver(taas_base.TaasAgentDriver): + def __init__(self): + super(OvsTaasDriver, self).__init__() LOG.debug("Initializing Taas OVS Driver") + self.agent_api = None + self.root_helper = common.get_root_helper(cfg.CONF) - self.root_helper = root_helper - - self.int_br = OVSBridge_tap_extension('br-int', self.root_helper) - self.tun_br = OVSBridge_tap_extension('br-tun', self.root_helper) + def initialize(self): + self.int_br = self.agent_api.request_int_br() + self.tun_br = self.agent_api.request_tun_br() self.tap_br = OVSBridge_tap_extension('br-tap', self.root_helper) # Prepare OVS bridges for TaaS @@ -185,6 +189,9 @@ return + def consume_api(self, agent_api): + 
self.agent_api = agent_api + def create_tap_service(self, tap_service): taas_id = tap_service['taas_id'] port = tap_service['port'] diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/drivers/taas_base.py neutron-taas-2.0.0/neutron_taas/services/taas/drivers/taas_base.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/drivers/taas_base.py 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/services/taas/drivers/taas_base.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,37 +0,0 @@ -# Copyright (C) 2015 Ericsson AB -# Copyright (c) 2015 Gigamon -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
- -import abc - -import six - - -@six.add_metaclass(abc.ABCMeta) -class TaasDriverBase(object): - @abc.abstractmethod - def create_tap_service(self, tap_service): - pass - - @abc.abstractmethod - def delete_tap_service(self, tap_service): - pass - - @abc.abstractmethod - def create_tap_flow(self, tap_flow): - pass - - @abc.abstractmethod - def delete_tap_flow(self, tap_flow): - pass diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/service_drivers/taas_rpc.py neutron-taas-2.0.0/neutron_taas/services/taas/service_drivers/taas_rpc.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/services/taas/service_drivers/taas_rpc.py 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/services/taas/service_drivers/taas_rpc.py 2017-08-10 19:02:30.000000000 +0000 @@ -17,7 +17,6 @@ from neutron.common import rpc as n_rpc from neutron_lib import exceptions as n_exc from neutron_taas.common import topics -from neutron_taas.extensions import taas as taas_ex from neutron_taas.services.taas import service_drivers from neutron_taas.services.taas.service_drivers import taas_agent_api @@ -52,8 +51,7 @@ tf['tap_service_id']) taas_id = (self.service_plugin.get_tap_id_association( context, - tap_service_id=ts['id'])['taas_id'] + - cfg.CONF.taas.vlan_range_start) + tap_service_id=ts['id']))['taas_id'] return taas_id def create_tap_service_precommit(self, context): @@ -71,15 +69,11 @@ # Get taas id associated with the Tap Service ts = context.tap_service tap_id_association = context.tap_id_association - taas_vlan_id = (tap_id_association['taas_id'] + - cfg.CONF.taas.vlan_range_start) + taas_vlan_id = tap_id_association['taas_id'] port = self.service_plugin._get_port_details(context._plugin_context, ts['port_id']) host = port['binding:host_id'] - if taas_vlan_id > cfg.CONF.taas.vlan_range_end: - raise taas_ex.TapServiceLimitReached() - rpc_msg = {'tap_service': ts, 'taas_id': taas_vlan_id, 'port': port} @@ -99,8 +93,8 @@ """ ts = 
context.tap_service tap_id_association = context.tap_id_association - taas_vlan_id = (tap_id_association['taas_id'] + - cfg.CONF.taas.vlan_range_start) + taas_vlan_id = tap_id_association['taas_id'] + try: port = self.service_plugin._get_port_details( context._plugin_context, diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/tests/tempest_plugin/tests/scenario/base.py neutron-taas-2.0.0/neutron_taas/tests/tempest_plugin/tests/scenario/base.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/tests/tempest_plugin/tests/scenario/base.py 2017-03-09 16:04:04.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/tests/tempest_plugin/tests/scenario/base.py 2017-08-10 19:02:30.000000000 +0000 @@ -13,7 +13,7 @@ # License for the specific language governing permissions and limitations # under the License. -from tempest.scenario import manager +from neutron_taas.tests.tempest_plugin.tests.scenario import manager class TaaSScenarioTest(manager.NetworkScenarioTest): diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/tests/tempest_plugin/tests/scenario/manager.py neutron-taas-2.0.0/neutron_taas/tests/tempest_plugin/tests/scenario/manager.py --- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/tests/tempest_plugin/tests/scenario/manager.py 1970-01-01 00:00:00.000000000 +0000 +++ neutron-taas-2.0.0/neutron_taas/tests/tempest_plugin/tests/scenario/manager.py 2017-08-10 19:02:30.000000000 +0000 @@ -0,0 +1,1350 @@ +# Copyright 2012 OpenStack Foundation +# Copyright 2013 IBM Corp. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the +# License for the specific language governing permissions and limitations +# under the License. + +import subprocess + +import netaddr +from oslo_log import log +from oslo_serialization import jsonutils as json +from oslo_utils import netutils + +from tempest.common import compute +from tempest.common import image as common_image +from tempest.common.utils.linux import remote_client +from tempest.common.utils import net_utils +from tempest.common import waiters +from tempest import config +from tempest import exceptions +from tempest.lib.common.utils import data_utils +from tempest.lib.common.utils import test_utils +from tempest.lib import exceptions as lib_exc +import tempest.test + +CONF = config.CONF + +LOG = log.getLogger(__name__) + + +class ScenarioTest(tempest.test.BaseTestCase): + """Base class for scenario tests. Uses tempest own clients. """ + + credentials = ['primary'] + + @classmethod + def setup_clients(cls): + super(ScenarioTest, cls).setup_clients() + # Clients (in alphabetical order) + cls.flavors_client = cls.manager.flavors_client + cls.compute_floating_ips_client = ( + cls.manager.compute_floating_ips_client) + if CONF.service_available.glance: + # Check if glance v1 is available to determine which client to use. 
+ if CONF.image_feature_enabled.api_v1: + cls.image_client = cls.manager.image_client + elif CONF.image_feature_enabled.api_v2: + cls.image_client = cls.manager.image_client_v2 + else: + raise lib_exc.InvalidConfiguration( + 'Either api_v1 or api_v2 must be True in ' + '[image-feature-enabled].') + # Compute image client + cls.compute_images_client = cls.manager.compute_images_client + cls.keypairs_client = cls.manager.keypairs_client + # Nova security groups client + cls.compute_security_groups_client = ( + cls.manager.compute_security_groups_client) + cls.compute_security_group_rules_client = ( + cls.manager.compute_security_group_rules_client) + cls.servers_client = cls.manager.servers_client + cls.interface_client = cls.manager.interfaces_client + # Neutron network client + cls.networks_client = cls.manager.networks_client + cls.ports_client = cls.manager.ports_client + cls.routers_client = cls.manager.routers_client + cls.subnets_client = cls.manager.subnets_client + cls.floating_ips_client = cls.manager.floating_ips_client + cls.security_groups_client = cls.manager.security_groups_client + cls.security_group_rules_client = ( + cls.manager.security_group_rules_client) + + if CONF.volume_feature_enabled.api_v2: + cls.volumes_client = cls.manager.volumes_v2_client + cls.snapshots_client = cls.manager.snapshots_v2_client + else: + cls.volumes_client = cls.manager.volumes_client + cls.snapshots_client = cls.manager.snapshots_client + + # ## Test functions library + # + # The create_[resource] functions only return body and discard the + # resp part which is not used in scenario tests + + def _create_port(self, network_id, client=None, namestart='port-quotatest', + **kwargs): + if not client: + client = self.ports_client + name = data_utils.rand_name(namestart) + result = client.create_port( + name=name, + network_id=network_id, + **kwargs) + self.assertIsNotNone(result, 'Unable to allocate port') + port = result['port'] + 
self.addCleanup(test_utils.call_and_ignore_notfound_exc, + client.delete_port, port['id']) + return port + + def create_keypair(self, client=None): + if not client: + client = self.keypairs_client + name = data_utils.rand_name(self.__class__.__name__) + # We don't need to create a keypair by pubkey in scenario + body = client.create_keypair(name=name) + self.addCleanup(client.delete_keypair, name) + return body['keypair'] + + def create_server(self, name=None, image_id=None, flavor=None, + validatable=False, wait_until='ACTIVE', + clients=None, **kwargs): + """Wrapper utility that returns a test server. + + This wrapper utility calls the common create test server and + returns a test server. The purpose of this wrapper is to minimize + the impact on the code of the tests already using this + function. + """ + + # NOTE(jlanoux): As a first step, ssh checks in the scenario + # tests need to be run regardless of the run_validation and + # validatable parameters and thus until the ssh validation job + # becomes voting in CI. The test resources management and IP + # association are taken care of in the scenario tests. + # Therefore, the validatable parameter is set to false in all + # those tests. In this way create_server just return a standard + # server and the scenario tests always perform ssh checks. 
+ + # Needed for the cross_tenant_traffic test: + if clients is None: + clients = self.manager + + if name is None: + name = data_utils.rand_name(self.__class__.__name__ + "-server") + + vnic_type = CONF.network.port_vnic_type + + # If vnic_type is configured create port for + # every network + if vnic_type: + ports = [] + + create_port_body = {'binding:vnic_type': vnic_type, + 'namestart': 'port-smoke'} + if kwargs: + # Convert security group names to security group ids + # to pass to create_port + if 'security_groups' in kwargs: + security_groups = \ + clients.security_groups_client.list_security_groups( + ).get('security_groups') + sec_dict = dict([(s['name'], s['id']) + for s in security_groups]) + + sec_groups_names = [s['name'] for s in kwargs.pop( + 'security_groups')] + security_groups_ids = [sec_dict[s] + for s in sec_groups_names] + + if security_groups_ids: + create_port_body[ + 'security_groups'] = security_groups_ids + networks = kwargs.pop('networks', []) + else: + networks = [] + + # If there are no networks passed to us we look up + # for the project's private networks and create a port. 
+ # The same behaviour as we would expect when passing + # the call to the clients with no networks + if not networks: + networks = clients.networks_client.list_networks( + **{'router:external': False, 'fields': 'id'})['networks'] + + # It's net['uuid'] if networks come from kwargs + # and net['id'] if they come from + # clients.networks_client.list_networks + for net in networks: + net_id = net.get('uuid', net.get('id')) + if 'port' not in net: + port = self._create_port(network_id=net_id, + client=clients.ports_client, + **create_port_body) + ports.append({'port': port['id']}) + else: + ports.append({'port': net['port']}) + if ports: + kwargs['networks'] = ports + self.ports = ports + + tenant_network = self.get_tenant_network() + + body, servers = compute.create_test_server( + clients, + tenant_network=tenant_network, + wait_until=wait_until, + name=name, flavor=flavor, + image_id=image_id, **kwargs) + + self.addCleanup(waiters.wait_for_server_termination, + clients.servers_client, body['id']) + self.addCleanup(test_utils.call_and_ignore_notfound_exc, + clients.servers_client.delete_server, body['id']) + server = clients.servers_client.show_server(body['id'])['server'] + return server + + def create_volume(self, size=None, name=None, snapshot_id=None, + imageRef=None, volume_type=None): + if size is None: + size = CONF.volume.volume_size + if imageRef: + image = self.compute_images_client.show_image(imageRef)['image'] + min_disk = image.get('minDisk') + size = max(size, min_disk) + if name is None: + name = data_utils.rand_name(self.__class__.__name__ + "-volume") + kwargs = {'display_name': name, + 'snapshot_id': snapshot_id, + 'imageRef': imageRef, + 'volume_type': volume_type, + 'size': size} + volume = self.volumes_client.create_volume(**kwargs)['volume'] + + self.addCleanup(self.volumes_client.wait_for_resource_deletion, + volume['id']) + self.addCleanup(test_utils.call_and_ignore_notfound_exc, + self.volumes_client.delete_volume, volume['id']) + + # 
NOTE(e0ne): Cinder API v2 uses name instead of display_name + if 'display_name' in volume: + self.assertEqual(name, volume['display_name']) + else: + self.assertEqual(name, volume['name']) + waiters.wait_for_volume_resource_status(self.volumes_client, + volume['id'], 'available') + # The volume retrieved on creation has a non-up-to-date status. + # Retrieval after it becomes active ensures correct details. + volume = self.volumes_client.show_volume(volume['id'])['volume'] + return volume + + def create_volume_type(self, client=None, name=None, backend_name=None): + if not client: + client = self.admin_volume_types_client + if not name: + class_name = self.__class__.__name__ + name = data_utils.rand_name(class_name + '-volume-type') + randomized_name = data_utils.rand_name('scenario-type-' + name) + + LOG.debug("Creating a volume type: %s on backend %s", + randomized_name, backend_name) + extra_specs = {} + if backend_name: + extra_specs = {"volume_backend_name": backend_name} + + body = client.create_volume_type(name=randomized_name, + extra_specs=extra_specs) + volume_type = body['volume_type'] + self.assertIn('id', volume_type) + self.addCleanup(client.delete_volume_type, volume_type['id']) + return volume_type + + def _create_loginable_secgroup_rule(self, secgroup_id=None): + _client = self.compute_security_groups_client + _client_rules = self.compute_security_group_rules_client + if secgroup_id is None: + sgs = _client.list_security_groups()['security_groups'] + for sg in sgs: + if sg['name'] == 'default': + secgroup_id = sg['id'] + + # These rules are intended to permit inbound ssh and icmp + # traffic from all sources, so no group_id is provided. + # Setting a group_id would only permit traffic from ports + # belonging to the same security group. 
+ rulesets = [ + { + # ssh + 'ip_protocol': 'tcp', + 'from_port': 22, + 'to_port': 22, + 'cidr': '0.0.0.0/0', + }, + { + # ping + 'ip_protocol': 'icmp', + 'from_port': -1, + 'to_port': -1, + 'cidr': '0.0.0.0/0', + } + ] + rules = list() + for ruleset in rulesets: + sg_rule = _client_rules.create_security_group_rule( + parent_group_id=secgroup_id, **ruleset)['security_group_rule'] + rules.append(sg_rule) + return rules + + def _create_security_group(self): + # Create security group + sg_name = data_utils.rand_name(self.__class__.__name__) + sg_desc = sg_name + " description" + secgroup = self.compute_security_groups_client.create_security_group( + name=sg_name, description=sg_desc)['security_group'] + self.assertEqual(secgroup['name'], sg_name) + self.assertEqual(secgroup['description'], sg_desc) + self.addCleanup( + test_utils.call_and_ignore_notfound_exc, + self.compute_security_groups_client.delete_security_group, + secgroup['id']) + + # Add rules to the security group + self._create_loginable_secgroup_rule(secgroup['id']) + + return secgroup + + def get_remote_client(self, ip_address, username=None, private_key=None): + """Get a SSH client to a remote server + + @param ip_address the server floating or fixed IP address to use + for ssh validation + @param username name of the Linux account on the remote server + @param private_key the SSH private key to use + @return a RemoteClient object + """ + + if username is None: + username = CONF.validation.image_ssh_user + # Set this with 'keypair' or others to log in with keypair or + # username/password. 
+ if CONF.validation.auth_method == 'keypair': + password = None + if private_key is None: + private_key = self.keypair['private_key'] + else: + password = CONF.validation.image_ssh_password + private_key = None + linux_client = remote_client.RemoteClient(ip_address, username, + pkey=private_key, + password=password) + try: + linux_client.validate_authentication() + except Exception as e: + message = ('Initializing SSH connection to %(ip)s failed. ' + 'Error: %(error)s' % {'ip': ip_address, + 'error': e}) + caller = test_utils.find_test_caller() + if caller: + message = '(%s) %s' % (caller, message) + LOG.exception(message) + self._log_console_output() + raise + + return linux_client + + def _image_create(self, name, fmt, path, + disk_format=None, properties=None): + if properties is None: + properties = {} + name = data_utils.rand_name('%s-' % name) + params = { + 'name': name, + 'container_format': fmt, + 'disk_format': disk_format or fmt, + } + if CONF.image_feature_enabled.api_v1: + params['is_public'] = 'False' + params['properties'] = properties + params = {'headers': common_image.image_meta_to_headers(**params)} + else: + params['visibility'] = 'private' + # Additional properties are flattened out in the v2 API. 
+ params.update(properties) + body = self.image_client.create_image(**params) + image = body['image'] if 'image' in body else body + self.addCleanup(self.image_client.delete_image, image['id']) + self.assertEqual("queued", image['status']) + with open(path, 'rb') as image_file: + if CONF.image_feature_enabled.api_v1: + self.image_client.update_image(image['id'], data=image_file) + else: + self.image_client.store_image_file(image['id'], image_file) + return image['id'] + + def glance_image_create(self): + img_path = CONF.scenario.img_dir + "/" + CONF.scenario.img_file + aki_img_path = CONF.scenario.img_dir + "/" + CONF.scenario.aki_img_file + ari_img_path = CONF.scenario.img_dir + "/" + CONF.scenario.ari_img_file + ami_img_path = CONF.scenario.img_dir + "/" + CONF.scenario.ami_img_file + img_container_format = CONF.scenario.img_container_format + img_disk_format = CONF.scenario.img_disk_format + img_properties = CONF.scenario.img_properties + LOG.debug("paths: img: %s, container_format: %s, disk_format: %s, " + "properties: %s, ami: %s, ari: %s, aki: %s", + img_path, img_container_format, img_disk_format, + img_properties, ami_img_path, ari_img_path, aki_img_path) + try: + image = self._image_create('scenario-img', + img_container_format, + img_path, + disk_format=img_disk_format, + properties=img_properties) + except IOError: + LOG.debug("A qcow2 image was not found. 
Try to get a uec image.") + kernel = self._image_create('scenario-aki', 'aki', aki_img_path) + ramdisk = self._image_create('scenario-ari', 'ari', ari_img_path) + properties = {'kernel_id': kernel, 'ramdisk_id': ramdisk} + image = self._image_create('scenario-ami', 'ami', + path=ami_img_path, + properties=properties) + LOG.debug("image:%s", image) + + return image + + def _log_console_output(self, servers=None): + if not CONF.compute_feature_enabled.console_output: + LOG.debug('Console output not supported, cannot log') + return + if not servers: + servers = self.servers_client.list_servers() + servers = servers['servers'] + for server in servers: + try: + console_output = self.servers_client.get_console_output( + server['id'])['output'] + LOG.debug('Console output for %s\nbody=\n%s', + server['id'], console_output) + except lib_exc.NotFound: + LOG.debug("Server %s disappeared(deleted) while looking " + "for the console log", server['id']) + + def _log_net_info(self, exc): + # network debug is called as part of ssh init + if not isinstance(exc, lib_exc.SSHTimeout): + LOG.debug('Network information on a devstack host') + + def create_server_snapshot(self, server, name=None): + # Glance client + _image_client = self.image_client + # Compute client + _images_client = self.compute_images_client + if name is None: + name = data_utils.rand_name(self.__class__.__name__ + 'snapshot') + LOG.debug("Creating a snapshot image for server: %s", server['name']) + image = _images_client.create_image(server['id'], name=name) + image_id = image.response['location'].split('images/')[1] + waiters.wait_for_image_status(_image_client, image_id, 'active') + + self.addCleanup(_image_client.wait_for_resource_deletion, + image_id) + self.addCleanup(test_utils.call_and_ignore_notfound_exc, + _image_client.delete_image, image_id) + + if CONF.image_feature_enabled.api_v1: + # In glance v1 the additional properties are stored in the headers. 
+ resp = _image_client.check_image(image_id) + snapshot_image = common_image.get_image_meta_from_headers(resp) + image_props = snapshot_image.get('properties', {}) + else: + # In glance v2 the additional properties are flattened. + snapshot_image = _image_client.show_image(image_id) + image_props = snapshot_image + + bdm = image_props.get('block_device_mapping') + if bdm: + bdm = json.loads(bdm) + if bdm and 'snapshot_id' in bdm[0]: + snapshot_id = bdm[0]['snapshot_id'] + self.addCleanup( + self.snapshots_client.wait_for_resource_deletion, + snapshot_id) + self.addCleanup(test_utils.call_and_ignore_notfound_exc, + self.snapshots_client.delete_snapshot, + snapshot_id) + waiters.wait_for_volume_resource_status(self.snapshots_client, + snapshot_id, + 'available') + image_name = snapshot_image['name'] + self.assertEqual(name, image_name) + LOG.debug("Created snapshot image %s for server %s", + image_name, server['name']) + return snapshot_image + + def nova_volume_attach(self, server, volume_to_attach): + volume = self.servers_client.attach_volume( + server['id'], volumeId=volume_to_attach['id'], device='/dev/%s' + % CONF.compute.volume_device_name)['volumeAttachment'] + self.assertEqual(volume_to_attach['id'], volume['id']) + waiters.wait_for_volume_resource_status(self.volumes_client, + volume['id'], 'in-use') + + # Return the updated volume after the attachment + return self.volumes_client.show_volume(volume['id'])['volume'] + + def nova_volume_detach(self, server, volume): + self.servers_client.detach_volume(server['id'], volume['id']) + waiters.wait_for_volume_resource_status(self.volumes_client, + volume['id'], 'available') + + volume = self.volumes_client.show_volume(volume['id'])['volume'] + self.assertEqual('available', volume['status']) + + def rebuild_server(self, server_id, image=None, + preserve_ephemeral=False, wait=True, + rebuild_kwargs=None): + if image is None: + image = CONF.compute.image_ref + + rebuild_kwargs = rebuild_kwargs or {} + + 
LOG.debug("Rebuilding server (id: %s, image: %s, preserve eph: %s)", + server_id, image, preserve_ephemeral) + self.servers_client.rebuild_server( + server_id=server_id, image_ref=image, + preserve_ephemeral=preserve_ephemeral, + **rebuild_kwargs) + if wait: + waiters.wait_for_server_status(self.servers_client, + server_id, 'ACTIVE') + + def ping_ip_address(self, ip_address, should_succeed=True, + ping_timeout=None, mtu=None): + timeout = ping_timeout or CONF.validation.ping_timeout + cmd = ['ping', '-c1', '-w1'] + + if mtu: + cmd += [ + # don't fragment + '-M', 'do', + # ping receives just the size of ICMP payload + '-s', str(net_utils.get_ping_payload_size(mtu, 4)) + ] + cmd.append(ip_address) + + def ping(): + proc = subprocess.Popen(cmd, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE) + proc.communicate() + + return (proc.returncode == 0) == should_succeed + + caller = test_utils.find_test_caller() + LOG.debug('%(caller)s begins to ping %(ip)s in %(timeout)s sec and the' + ' expected result is %(should_succeed)s', { + 'caller': caller, 'ip': ip_address, 'timeout': timeout, + 'should_succeed': + 'reachable' if should_succeed else 'unreachable' + }) + result = test_utils.call_until_true(ping, timeout, 1) + LOG.debug('%(caller)s finishes ping %(ip)s in %(timeout)s sec and the ' + 'ping result is %(result)s', { + 'caller': caller, 'ip': ip_address, 'timeout': timeout, + 'result': 'expected' if result else 'unexpected' + }) + return result + + def check_vm_connectivity(self, ip_address, + username=None, + private_key=None, + should_connect=True, + mtu=None): + """Check server connectivity + + :param ip_address: server to test against + :param username: server's ssh username + :param private_key: server's ssh private key to be used + :param should_connect: True/False indicates positive/negative test + positive - attempt ping and ssh + negative - attempt ping and fail if succeed + :param mtu: network MTU to use for connectivity validation + + :raises: AssertError 
if the result of the connectivity check does + not match the value of the should_connect param + """ + if should_connect: + msg = "Timed out waiting for %s to become reachable" % ip_address + else: + msg = "ip address %s is reachable" % ip_address + self.assertTrue(self.ping_ip_address(ip_address, + should_succeed=should_connect, + mtu=mtu), + msg=msg) + if should_connect: + # no need to check ssh for negative connectivity + self.get_remote_client(ip_address, username, private_key) + + def check_public_network_connectivity(self, ip_address, username, + private_key, should_connect=True, + msg=None, servers=None, mtu=None): + # The target login is assumed to have been configured for + # key-based authentication by cloud-init. + LOG.debug('checking network connections to IP %s with user: %s', + ip_address, username) + try: + self.check_vm_connectivity(ip_address, + username, + private_key, + should_connect=should_connect, + mtu=mtu) + except Exception: + ex_msg = 'Public network connectivity check failed' + if msg: + ex_msg += ": " + msg + LOG.exception(ex_msg) + self._log_console_output(servers) + raise + + def create_floating_ip(self, thing, pool_name=None): + """Create a floating IP and associates to a server on Nova""" + + if not pool_name: + pool_name = CONF.network.floating_network_name + floating_ip = (self.compute_floating_ips_client. 
+ create_floating_ip(pool=pool_name)['floating_ip']) + self.addCleanup(test_utils.call_and_ignore_notfound_exc, + self.compute_floating_ips_client.delete_floating_ip, + floating_ip['id']) + self.compute_floating_ips_client.associate_floating_ip_to_server( + floating_ip['ip'], thing['id']) + return floating_ip + + def create_timestamp(self, ip_address, dev_name=None, mount_path='/mnt', + private_key=None): + ssh_client = self.get_remote_client(ip_address, + private_key=private_key) + if dev_name is not None: + ssh_client.make_fs(dev_name) + ssh_client.mount(dev_name, mount_path) + cmd_timestamp = 'sudo sh -c "date > %s/timestamp; sync"' % mount_path + ssh_client.exec_command(cmd_timestamp) + timestamp = ssh_client.exec_command('sudo cat %s/timestamp' + % mount_path) + if dev_name is not None: + ssh_client.umount(mount_path) + return timestamp + + def get_timestamp(self, ip_address, dev_name=None, mount_path='/mnt', + private_key=None): + ssh_client = self.get_remote_client(ip_address, + private_key=private_key) + if dev_name is not None: + ssh_client.mount(dev_name, mount_path) + timestamp = ssh_client.exec_command('sudo cat %s/timestamp' + % mount_path) + if dev_name is not None: + ssh_client.umount(mount_path) + return timestamp + + def get_server_ip(self, server): + """Get the server fixed or floating IP. + + Based on the configuration we're in, return a correct ip + address for validating that a guest is up. + """ + if CONF.validation.connect_method == 'floating': + # The tests calling this method don't have a floating IP + # and can't make use of the validation resources. So the + # method is creating the floating IP there. + return self.create_floating_ip(server)['ip'] + elif CONF.validation.connect_method == 'fixed': + # Determine the network name to look for based on config or creds + # provider network resources. 
+ if CONF.validation.network_for_ssh: + addresses = server['addresses'][ + CONF.validation.network_for_ssh] + else: + creds_provider = self._get_credentials_provider() + net_creds = creds_provider.get_primary_creds() + network = getattr(net_creds, 'network', None) + addresses = (server['addresses'][network['name']] + if network else []) + for address in addresses: + if (address['version'] == CONF.validation.ip_version_for_ssh + and address['OS-EXT-IPS:type'] == 'fixed'): + return address['addr'] + raise exceptions.ServerUnreachable(server_id=server['id']) + else: + raise lib_exc.InvalidConfiguration() + + +class NetworkScenarioTest(ScenarioTest): + """Base class for network scenario tests. + + This class provide helpers for network scenario tests, using the neutron + API. Helpers from ancestor which use the nova network API are overridden + with the neutron API. + + This Class also enforces using Neutron instead of novanetwork. + Subclassed tests will be skipped if Neutron is not enabled + + """ + + credentials = ['primary', 'admin'] + + @classmethod + def skip_checks(cls): + super(NetworkScenarioTest, cls).skip_checks() + if not CONF.service_available.neutron: + raise cls.skipException('Neutron not available') + + def _create_network(self, networks_client=None, + tenant_id=None, + namestart='network-smoke-', + port_security_enabled=True): + if not networks_client: + networks_client = self.networks_client + if not tenant_id: + tenant_id = networks_client.tenant_id + name = data_utils.rand_name(namestart) + network_kwargs = dict(name=name, tenant_id=tenant_id) + # Neutron disables port security by default so we have to check the + # config before trying to create the network with port_security_enabled + if CONF.network_feature_enabled.port_security: + network_kwargs['port_security_enabled'] = port_security_enabled + result = networks_client.create_network(**network_kwargs) + network = result['network'] + + self.assertEqual(network['name'], name) + 
self.addCleanup(test_utils.call_and_ignore_notfound_exc, + networks_client.delete_network, + network['id']) + return network + + def _create_subnet(self, network, subnets_client=None, + routers_client=None, namestart='subnet-smoke', + **kwargs): + """Create a subnet for the given network + + within the cidr block configured for tenant networks. + """ + if not subnets_client: + subnets_client = self.subnets_client + if not routers_client: + routers_client = self.routers_client + + def cidr_in_use(cidr, tenant_id): + """Check cidr existence + + :returns: True if subnet with cidr already exist in tenant + False else + """ + cidr_in_use = self.admin_manager.subnets_client.list_subnets( + tenant_id=tenant_id, cidr=cidr)['subnets'] + return len(cidr_in_use) != 0 + + ip_version = kwargs.pop('ip_version', 4) + + if ip_version == 6: + tenant_cidr = netaddr.IPNetwork( + CONF.network.project_network_v6_cidr) + num_bits = CONF.network.project_network_v6_mask_bits + else: + tenant_cidr = netaddr.IPNetwork(CONF.network.project_network_cidr) + num_bits = CONF.network.project_network_mask_bits + + result = None + str_cidr = None + # Repeatedly attempt subnet creation with sequential cidr + # blocks until an unallocated block is found. 
+ for subnet_cidr in tenant_cidr.subnet(num_bits): + str_cidr = str(subnet_cidr) + if cidr_in_use(str_cidr, tenant_id=network['tenant_id']): + continue + + subnet = dict( + name=data_utils.rand_name(namestart), + network_id=network['id'], + tenant_id=network['tenant_id'], + cidr=str_cidr, + ip_version=ip_version, + **kwargs + ) + try: + result = subnets_client.create_subnet(**subnet) + break + except lib_exc.Conflict as e: + is_overlapping_cidr = 'overlaps with another subnet' in str(e) + if not is_overlapping_cidr: + raise + self.assertIsNotNone(result, 'Unable to allocate tenant network') + + subnet = result['subnet'] + self.assertEqual(subnet['cidr'], str_cidr) + + self.addCleanup(test_utils.call_and_ignore_notfound_exc, + subnets_client.delete_subnet, subnet['id']) + + return subnet + + def _get_server_port_id_and_ip4(self, server, ip_addr=None): + ports = self.admin_manager.ports_client.list_ports( + device_id=server['id'], fixed_ip=ip_addr)['ports'] + # A port can have more than one IP address in some cases. + # If the network is dual-stack (IPv4 + IPv6), this port is associated + # with 2 subnets + p_status = ['ACTIVE'] + # NOTE(vsaienko) With Ironic, instances live on separate hardware + # servers. Neutron does not bind ports for Ironic instances, as a + # result the port remains in the DOWN state. + # TODO(vsaienko) remove once bug: #1599836 is resolved. + if getattr(CONF.service_available, 'ironic', False): + p_status.append('DOWN') + port_map = [(p["id"], fxip["ip_address"]) + for p in ports + for fxip in p["fixed_ips"] + if netutils.is_valid_ipv4(fxip["ip_address"]) + and p['status'] in p_status] + inactive = [p for p in ports if p['status'] != 'ACTIVE'] + if inactive: + LOG.warning("Instance has ports that are not ACTIVE: %s", inactive) + + self.assertNotEqual(0, len(port_map), + "No IPv4 addresses found in: %s" % ports) + self.assertEqual(len(port_map), 1, + "Found multiple IPv4 addresses: %s. " + "Unable to determine which port to target." 
+ % port_map) + return port_map[0] + + def _get_network_by_name(self, network_name): + net = self.admin_manager.networks_client.list_networks( + name=network_name)['networks'] + self.assertNotEqual(len(net), 0, + "Unable to get network by name: %s" % network_name) + return net[0] + + def create_floating_ip(self, thing, external_network_id=None, + port_id=None, client=None): + """Create a floating IP and associates to a resource/port on Neutron""" + if not external_network_id: + external_network_id = CONF.network.public_network_id + if not client: + client = self.floating_ips_client + if not port_id: + port_id, ip4 = self._get_server_port_id_and_ip4(thing) + else: + ip4 = None + result = client.create_floatingip( + floating_network_id=external_network_id, + port_id=port_id, + tenant_id=thing['tenant_id'], + fixed_ip_address=ip4 + ) + floating_ip = result['floatingip'] + self.addCleanup(test_utils.call_and_ignore_notfound_exc, + client.delete_floatingip, + floating_ip['id']) + return floating_ip + + def _associate_floating_ip(self, floating_ip, server): + port_id, _ = self._get_server_port_id_and_ip4(server) + kwargs = dict(port_id=port_id) + floating_ip = self.floating_ips_client.update_floatingip( + floating_ip['id'], **kwargs)['floatingip'] + self.assertEqual(port_id, floating_ip['port_id']) + return floating_ip + + def _disassociate_floating_ip(self, floating_ip): + """:param floating_ip: floating_ips_client.create_floatingip""" + kwargs = dict(port_id=None) + floating_ip = self.floating_ips_client.update_floatingip( + floating_ip['id'], **kwargs)['floatingip'] + self.assertIsNone(floating_ip['port_id']) + return floating_ip + + def check_floating_ip_status(self, floating_ip, status): + """Verifies floatingip reaches the given status + + :param dict floating_ip: floating IP dict to check status + :param status: target status + :raises: AssertionError if status doesn't match + """ + floatingip_id = floating_ip['id'] + + def refresh(): + result = 
(self.floating_ips_client. + show_floatingip(floatingip_id)['floatingip']) + return status == result['status'] + + test_utils.call_until_true(refresh, + CONF.network.build_timeout, + CONF.network.build_interval) + floating_ip = self.floating_ips_client.show_floatingip( + floatingip_id)['floatingip'] + self.assertEqual(status, floating_ip['status'], + message="FloatingIP: {fp} is at status: {cst}. " + "failed to reach status: {st}" + .format(fp=floating_ip, cst=floating_ip['status'], + st=status)) + LOG.info("FloatingIP: {fp} is at status: {st}" + .format(fp=floating_ip, st=status)) + + def _check_tenant_network_connectivity(self, server, + username, + private_key, + should_connect=True, + servers_for_debug=None): + if not CONF.network.project_networks_reachable: + msg = 'Tenant networks not configured to be reachable.' + LOG.info(msg) + return + # The target login is assumed to have been configured for + # key-based authentication by cloud-init. + try: + for net_name, ip_addresses in server['addresses'].items(): + for ip_address in ip_addresses: + self.check_vm_connectivity(ip_address['addr'], + username, + private_key, + should_connect=should_connect) + except Exception as e: + LOG.exception('Tenant network connectivity check failed') + self._log_console_output(servers_for_debug) + self._log_net_info(e) + raise + + def _check_remote_connectivity(self, source, dest, should_succeed=True, + nic=None): + """check ping server via source ssh connection + + :param source: RemoteClient: an ssh connection from which to ping + :param dest: and IP to ping against + :param should_succeed: boolean should ping succeed or not + :param nic: specific network interface to ping from + :returns: boolean -- should_succeed == ping + :returns: ping is false if ping failed + """ + def ping_remote(): + try: + source.ping_host(dest, nic=nic) + except lib_exc.SSHExecCommandFailed: + LOG.warning('Failed to ping IP: %s via a ssh connection ' + 'from: %s.', dest, source.ssh_client.host) + 
return not should_succeed + return should_succeed + + return test_utils.call_until_true(ping_remote, + CONF.validation.ping_timeout, + 1) + + def _create_security_group(self, security_group_rules_client=None, + tenant_id=None, + namestart='secgroup-smoke', + security_groups_client=None): + if security_group_rules_client is None: + security_group_rules_client = self.security_group_rules_client + if security_groups_client is None: + security_groups_client = self.security_groups_client + if tenant_id is None: + tenant_id = security_groups_client.tenant_id + secgroup = self._create_empty_security_group( + namestart=namestart, client=security_groups_client, + tenant_id=tenant_id) + + # Add rules to the security group + rules = self._create_loginable_secgroup_rule( + security_group_rules_client=security_group_rules_client, + secgroup=secgroup, + security_groups_client=security_groups_client) + for rule in rules: + self.assertEqual(tenant_id, rule['tenant_id']) + self.assertEqual(secgroup['id'], rule['security_group_id']) + return secgroup + + def _create_empty_security_group(self, client=None, tenant_id=None, + namestart='secgroup-smoke'): + """Create a security group without rules. 
+ + Default rules will be created: + - IPv4 egress to any + - IPv6 egress to any + + :param tenant_id: secgroup will be created in this tenant + :returns: the created security group + """ + if client is None: + client = self.security_groups_client + if not tenant_id: + tenant_id = client.tenant_id + sg_name = data_utils.rand_name(namestart) + sg_desc = sg_name + " description" + sg_dict = dict(name=sg_name, + description=sg_desc) + sg_dict['tenant_id'] = tenant_id + result = client.create_security_group(**sg_dict) + + secgroup = result['security_group'] + self.assertEqual(secgroup['name'], sg_name) + self.assertEqual(tenant_id, secgroup['tenant_id']) + self.assertEqual(secgroup['description'], sg_desc) + + self.addCleanup(test_utils.call_and_ignore_notfound_exc, + client.delete_security_group, secgroup['id']) + return secgroup + + def _default_security_group(self, client=None, tenant_id=None): + """Get default secgroup for given tenant_id. + + :returns: default secgroup for given tenant + """ + if client is None: + client = self.security_groups_client + if not tenant_id: + tenant_id = client.tenant_id + sgs = [ + sg for sg in list(client.list_security_groups().values())[0] + if sg['tenant_id'] == tenant_id and sg['name'] == 'default' + ] + msg = "No default security group for tenant %s." % (tenant_id) + self.assertGreater(len(sgs), 0, msg) + return sgs[0] + + def _create_security_group_rule(self, secgroup=None, + sec_group_rules_client=None, + tenant_id=None, + security_groups_client=None, **kwargs): + """Create a rule from a dictionary of rule parameters. + + Create a rule in a secgroup. if secgroup not defined will search for + default secgroup in tenant_id. + + :param secgroup: the security group. 
+ :param tenant_id: if secgroup not passed -- the tenant in which to + search for default secgroup + :param kwargs: a dictionary containing rule parameters: + for example, to allow incoming ssh: + rule = { + direction: 'ingress' + protocol:'tcp', + port_range_min: 22, + port_range_max: 22 + } + """ + if sec_group_rules_client is None: + sec_group_rules_client = self.security_group_rules_client + if security_groups_client is None: + security_groups_client = self.security_groups_client + if not tenant_id: + tenant_id = security_groups_client.tenant_id + if secgroup is None: + secgroup = self._default_security_group( + client=security_groups_client, tenant_id=tenant_id) + + ruleset = dict(security_group_id=secgroup['id'], + tenant_id=secgroup['tenant_id']) + ruleset.update(kwargs) + + sg_rule = sec_group_rules_client.create_security_group_rule(**ruleset) + sg_rule = sg_rule['security_group_rule'] + + self.assertEqual(secgroup['tenant_id'], sg_rule['tenant_id']) + self.assertEqual(secgroup['id'], sg_rule['security_group_id']) + + return sg_rule + + def _create_loginable_secgroup_rule(self, security_group_rules_client=None, + secgroup=None, + security_groups_client=None): + """Create loginable security group rule + + This function will create: + 1. egress and ingress tcp port 22 allow rule in order to allow ssh + access for ipv4. + 2. egress and ingress ipv6 icmp allow rule, in order to allow icmpv6. + 3. egress and ingress ipv4 icmp allow rule, in order to allow icmpv4. 
+ """ + + if security_group_rules_client is None: + security_group_rules_client = self.security_group_rules_client + if security_groups_client is None: + security_groups_client = self.security_groups_client + rules = [] + rulesets = [ + dict( + # ssh + protocol='tcp', + port_range_min=22, + port_range_max=22, + ), + dict( + # ping + protocol='icmp', + ), + dict( + # ipv6-icmp for ping6 + protocol='icmp', + ethertype='IPv6', + ) + ] + sec_group_rules_client = security_group_rules_client + for ruleset in rulesets: + for r_direction in ['ingress', 'egress']: + ruleset['direction'] = r_direction + try: + sg_rule = self._create_security_group_rule( + sec_group_rules_client=sec_group_rules_client, + secgroup=secgroup, + security_groups_client=security_groups_client, + **ruleset) + except lib_exc.Conflict as ex: + # if rule already exist - skip rule and continue + msg = 'Security group rule already exists' + if msg not in ex._error_string: + raise ex + else: + self.assertEqual(r_direction, sg_rule['direction']) + rules.append(sg_rule) + + return rules + + def _get_router(self, client=None, tenant_id=None): + """Retrieve a router for the given tenant id. + + If a public router has been configured, it will be returned. + + If a public router has not been configured, but a public + network has, a tenant router will be created and returned that + routes traffic to the public network. 
+        """
+        if not client:
+            client = self.routers_client
+        if not tenant_id:
+            tenant_id = client.tenant_id
+        router_id = CONF.network.public_router_id
+        network_id = CONF.network.public_network_id
+        if router_id:
+            body = client.show_router(router_id)
+            return body['router']
+        elif network_id:
+            router = self._create_router(client, tenant_id)
+            kwargs = {'external_gateway_info': dict(network_id=network_id)}
+            router = client.update_router(router['id'], **kwargs)['router']
+            return router
+        else:
+            raise Exception("Neither of 'public_router_id' or "
+                            "'public_network_id' has been defined.")
+
+    def _create_router(self, client=None, tenant_id=None,
+                       namestart='router-smoke'):
+        if not client:
+            client = self.routers_client
+        if not tenant_id:
+            tenant_id = client.tenant_id
+        name = data_utils.rand_name(namestart)
+        result = client.create_router(name=name,
+                                      admin_state_up=True,
+                                      tenant_id=tenant_id)
+        router = result['router']
+        self.assertEqual(router['name'], name)
+        self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+                        client.delete_router,
+                        router['id'])
+        return router
+
+    def _update_router_admin_state(self, router, admin_state_up):
+        kwargs = dict(admin_state_up=admin_state_up)
+        router = self.routers_client.update_router(
+            router['id'], **kwargs)['router']
+        self.assertEqual(admin_state_up, router['admin_state_up'])
+
+    def create_networks(self, networks_client=None,
+                        routers_client=None, subnets_client=None,
+                        tenant_id=None, dns_nameservers=None,
+                        port_security_enabled=True):
+        """Create a network with a subnet connected to a router.
+
+        The baremetal driver is a special case since all nodes are
+        on the same shared network.
+
+        :param tenant_id: id of tenant to create resources in.
+        :param dns_nameservers: list of dns servers to send to subnet.
+        :returns: network, subnet, router
+        """
+        if CONF.network.shared_physical_network:
+            # NOTE(Shrews): This exception is for environments where tenant
+            # credential isolation is available, but network separation is
+            # not (the current baremetal case). Likely can be removed when
+            # test account mgmt is reworked:
+            # https://blueprints.launchpad.net/tempest/+spec/test-accounts
+            if not CONF.compute.fixed_network_name:
+                m = 'fixed_network_name must be specified in config'
+                raise lib_exc.InvalidConfiguration(m)
+            network = self._get_network_by_name(
+                CONF.compute.fixed_network_name)
+            router = None
+            subnet = None
+        else:
+            network = self._create_network(
+                networks_client=networks_client,
+                tenant_id=tenant_id,
+                port_security_enabled=port_security_enabled)
+            router = self._get_router(client=routers_client,
+                                      tenant_id=tenant_id)
+            subnet_kwargs = dict(network=network,
+                                 subnets_client=subnets_client,
+                                 routers_client=routers_client)
+            # use explicit check because empty list is a valid option
+            if dns_nameservers is not None:
+                subnet_kwargs['dns_nameservers'] = dns_nameservers
+            subnet = self._create_subnet(**subnet_kwargs)
+            if not routers_client:
+                routers_client = self.routers_client
+            router_id = router['id']
+            routers_client.add_router_interface(router_id,
+                                                subnet_id=subnet['id'])
+
+            # save a cleanup job to remove this association between
+            # router and subnet
+            self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+                            routers_client.remove_router_interface, router_id,
+                            subnet_id=subnet['id'])
+        return network, subnet, router
+
+
+class EncryptionScenarioTest(ScenarioTest):
+    """Base class for encryption scenario tests"""
+
+    credentials = ['primary', 'admin']
+
+    @classmethod
+    def setup_clients(cls):
+        super(EncryptionScenarioTest, cls).setup_clients()
+        if CONF.volume_feature_enabled.api_v2:
+            cls.admin_volume_types_client = cls.os_adm.volume_types_v2_client
+            cls.admin_encryption_types_client =\
+                cls.os_adm.encryption_types_v2_client
+        else:
+            cls.admin_volume_types_client = cls.os_adm.volume_types_client
+            cls.admin_encryption_types_client =\
+                cls.os_adm.encryption_types_client
+
+    def create_encryption_type(self, client=None, type_id=None, provider=None,
+                               key_size=None, cipher=None,
+                               control_location=None):
+        if not client:
+            client = self.admin_encryption_types_client
+        if not type_id:
+            volume_type = self.create_volume_type()
+            type_id = volume_type['id']
+        LOG.debug("Creating an encryption type for volume type: %s", type_id)
+        client.create_encryption_type(
+            type_id, provider=provider, key_size=key_size, cipher=cipher,
+            control_location=control_location)['encryption']
+
+
+class ObjectStorageScenarioTest(ScenarioTest):
+    """Provide harness to do Object Storage scenario tests.
+
+    Subclasses implement the tests that use the methods provided by this
+    class.
+    """
+
+    @classmethod
+    def skip_checks(cls):
+        super(ObjectStorageScenarioTest, cls).skip_checks()
+        if not CONF.service_available.swift:
+            skip_msg = ("%s skipped as swift is not available" %
+                        cls.__name__)
+            raise cls.skipException(skip_msg)
+
+    @classmethod
+    def setup_credentials(cls):
+        cls.set_network_resources()
+        super(ObjectStorageScenarioTest, cls).setup_credentials()
+        operator_role = CONF.object_storage.operator_role
+        cls.os_operator = cls.get_client_manager(roles=[operator_role])
+
+    @classmethod
+    def setup_clients(cls):
+        super(ObjectStorageScenarioTest, cls).setup_clients()
+        # Clients for Swift
+        cls.account_client = cls.os_operator.account_client
+        cls.container_client = cls.os_operator.container_client
+        cls.object_client = cls.os_operator.object_client
+
+    def get_swift_stat(self):
+        """get swift status for our user account."""
+        self.account_client.list_account_containers()
+        LOG.debug('Swift status information obtained successfully')
+
+    def create_container(self, container_name=None):
+        name = container_name or data_utils.rand_name(
+            'swift-scenario-container')
+        self.container_client.create_container(name)
+        # look for the container to assure it is created
+        self.list_and_check_container_objects(name)
+        LOG.debug('Container %s created', name)
+        self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+                        self.container_client.delete_container,
+                        name)
+        return name
+
+    def delete_container(self, container_name):
+        self.container_client.delete_container(container_name)
+        LOG.debug('Container %s deleted', container_name)
+
+    def upload_object_to_container(self, container_name, obj_name=None):
+        obj_name = obj_name or data_utils.rand_name('swift-scenario-object')
+        obj_data = data_utils.random_bytes()
+        self.object_client.create_object(container_name, obj_name, obj_data)
+        self.addCleanup(test_utils.call_and_ignore_notfound_exc,
+                        self.object_client.delete_object,
+                        container_name,
+                        obj_name)
+        return obj_name, obj_data
+
+    def delete_object(self, container_name, filename):
+        self.object_client.delete_object(container_name, filename)
+        self.list_and_check_container_objects(container_name,
+                                              not_present_obj=[filename])
+
+    def list_and_check_container_objects(self, container_name,
+                                         present_obj=None,
+                                         not_present_obj=None):
+        # List objects for a given container and assert which are present and
+        # which are not.
+        if present_obj is None:
+            present_obj = []
+        if not_present_obj is None:
+            not_present_obj = []
+        _, object_list = self.container_client.list_container_contents(
+            container_name)
+        if present_obj:
+            for obj in present_obj:
+                self.assertIn(obj, object_list)
+        if not_present_obj:
+            for obj in not_present_obj:
+                self.assertNotIn(obj, object_list)
+
+    def change_container_acl(self, container_name, acl):
+        metadata_param = {'metadata_prefix': 'x-container-',
+                          'metadata': {'read': acl}}
+        self.container_client.update_container_metadata(container_name,
+                                                        **metadata_param)
+        resp, _ = self.container_client.list_container_metadata(container_name)
+        self.assertEqual(resp['x-container-read'], acl)
+
+    def download_and_verify(self, container_name, obj_name, expected_data):
+        _, obj = self.object_client.get_object(container_name, obj_name)
+        self.assertEqual(obj, expected_data)
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/tests/unit/extensions/test_taas.py neutron-taas-2.0.0/neutron_taas/tests/unit/extensions/test_taas.py
--- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/tests/unit/extensions/test_taas.py 1970-01-01 00:00:00.000000000 +0000
+++ neutron-taas-2.0.0/neutron_taas/tests/unit/extensions/test_taas.py 2017-08-10 19:02:30.000000000 +0000
@@ -0,0 +1,105 @@
+# Copyright 2017 FUJITSU LABORATORIES LTD.
+
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+
+#     http://www.apache.org/licenses/LICENSE-2.0
+
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import copy
+import mock
+from webob import exc
+
+from oslo_utils import uuidutils
+
+from neutron.tests.unit.api.v2 import test_base as test_api_v2
+from neutron.tests.unit.extensions import base as test_api_v2_extension
+
+from neutron_taas.extensions import taas as taas_ext
+
+_uuid = uuidutils.generate_uuid
+_get_path = test_api_v2._get_path
+
+TAP_SERVICE_PATH = 'taas/tap_services'
+TAP_FLOW_PATH = 'taas/tap_flows'
+
+
+class TaasExtensionTestCase(test_api_v2_extension.ExtensionTestCase):
+    fmt = 'json'
+
+    def setUp(self):
+        super(TaasExtensionTestCase, self).setUp()
+        self._setUpExtension(
+            'neutron_taas.extensions.taas.TaasPluginBase',
+            'TAAS',
+            taas_ext.RESOURCE_ATTRIBUTE_MAP,
+            taas_ext.Taas,
+            'taas',
+            plural_mappings={}
+        )
+
+    def test_create_tap_service(self):
+        tenant_id = _uuid()
+        tap_service_data = {
+            'tenant_id': tenant_id,
+            'name': 'MyTap',
+            'description': 'This is my tap service',
+            'port_id': _uuid(),
+            'project_id': tenant_id,
+        }
+        data = {'tap_service': tap_service_data}
+        expected_ret_val = copy.copy(data['tap_service'])
+        expected_ret_val.update({'id': _uuid()})
+        instance = self.plugin.return_value
+        instance.create_tap_service.return_value = expected_ret_val
+
+        res = self.api.post(_get_path(TAP_SERVICE_PATH, fmt=self.fmt),
+                            self.serialize(data),
+                            content_type='application/%s' % self.fmt)
+        instance.create_tap_service.assert_called_with(
+            mock.ANY,
+            tap_service=data)
+        self.assertEqual(exc.HTTPCreated.code, res.status_int)
+        res = self.deserialize(res)
+        self.assertIn('tap_service', res)
+        self.assertEqual(expected_ret_val, res['tap_service'])
+
+    def test_delete_tap_service(self):
+        self._test_entity_delete('tap_service')
+
+    def test_create_tap_flow(self):
+        tenant_id = _uuid()
+        tap_flow_data = {
+            'tenant_id': tenant_id,
+            'name': 'MyTapFlow',
+            'description': 'This is my tap flow',
+            'direction': 'BOTH',
+            'tap_service_id': _uuid(),
+            'source_port': _uuid(),
+            'project_id': tenant_id,
+        }
+        data = {'tap_flow': tap_flow_data}
+        expected_ret_val = copy.copy(data['tap_flow'])
+        expected_ret_val.update({'id': _uuid()})
+        instance = self.plugin.return_value
+        instance.create_tap_flow.return_value = expected_ret_val
+
+        res = self.api.post(_get_path(TAP_FLOW_PATH, fmt=self.fmt),
+                            self.serialize(data),
+                            content_type='application/%s' % self.fmt)
+        instance.create_tap_flow.assert_called_with(
+            mock.ANY,
+            tap_flow=data)
+        self.assertEqual(exc.HTTPCreated.code, res.status_int)
+        res = self.deserialize(res)
+        self.assertIn('tap_flow', res)
+        self.assertEqual(expected_ret_val, res['tap_flow'])
+
+    def test_delete_tap_flow(self):
+        self._test_entity_delete('tap_flow')
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/tests/unit/services/taas/test_taas_plugin.py neutron-taas-2.0.0/neutron_taas/tests/unit/services/taas/test_taas_plugin.py
--- neutron-taas-1.0.1~git20170522.e15cbf3/neutron_taas/tests/unit/services/taas/test_taas_plugin.py 2017-05-22 09:22:02.000000000 +0000
+++ neutron-taas-2.0.0/neutron_taas/tests/unit/services/taas/test_taas_plugin.py 2017-08-10 19:02:30.000000000 +0000
@@ -20,6 +20,7 @@
 from neutron_lib import context
 from neutron_lib.utils import net as n_utils
+from oslo_config import cfg
 from oslo_utils import uuidutils
 
 import neutron.common.rpc as n_rpc
@@ -130,6 +131,33 @@
         with self.tap_service():
             pass
 
+    def test_verify_taas_id_reused(self):
+        # make small range id
+        cfg.CONF.set_override("vlan_range_start", 1, group="taas")
+        cfg.CONF.set_override("vlan_range_end", 3, group="taas")
+        with self.tap_service() as ts_1, self.tap_service() as ts_2, \
+                self.tap_service() as ts_3, self.tap_service() as ts_4:
+            ts_id_1 = ts_1['id']
+            ts_id_2 = ts_2['id']
+            ts_id_3 = ts_3['id']
+            tap_id_assoc_1 = self._plugin.create_tap_id_association(
+                self._context, ts_id_1)
+            tap_id_assoc_2 = self._plugin.create_tap_id_association(
+                self._context, ts_id_2)
+            self.assertEqual(set([1, 2]), set([tap_id_assoc_1['taas_id'],
+                                               tap_id_assoc_2['taas_id']]))
+            with testtools.ExpectedException(taas_ext.TapServiceLimitReached):
+                self._plugin.create_tap_id_association(
+                    self._context,
+                    ts_4['id']
+                )
+            # free an tap_id and verify could reallocate same taas id
+            self._plugin.delete_tap_service(self._context, ts_id_1)
+            tap_id_assoc_3 = self._plugin.create_tap_id_association(
+                self._context, ts_id_3)
+            self.assertEqual(set([1, 2]), set([tap_id_assoc_3['taas_id'],
+                                               tap_id_assoc_2['taas_id']]))
+
     def test_create_tap_service_wrong_tenant_id(self):
         self._port_details['tenant_id'] = 'other-tenant'
         with testtools.ExpectedException(taas_ext.PortDoesNotBelongToTenant), \
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/PKG-INFO neutron-taas-2.0.0/PKG-INFO
--- neutron-taas-1.0.1~git20170522.e15cbf3/PKG-INFO 2017-05-22 09:31:38.000000000 +0000
+++ neutron-taas-2.0.0/PKG-INFO 1970-01-01 00:00:00.000000000 +0000
@@ -1,46 +0,0 @@
-Metadata-Version: 1.1
-Name: tap-as-a-service
-Version: 1.0.1.dev15
-Summary: Tap-as-a-Service (TaaS) is an extension to the OpenStack network service (Neutron), it provides remote port mirroring capability for tenant virtual networks.
-Home-page: http://www.openstack.org/
-Author: OpenStack
-Author-email: openstack-dev@lists.openstack.org
-License: UNKNOWN
-Description: ================
-        Tap as a Service
-        ================
-        Tap-as-a-Service (TaaS) is an extension to the OpenStack network service (Neutron).
-        It provides remote port mirroring capability for tenant virtual networks.
-
-        Port mirroring involves sending a copy of packets entering and/or leaving one
-        port to another port, which is usually different from the original destinations
-        of the packets being mirrored.
-
-
-        This service has been primarily designed to help tenants (or the cloud administrator)
-        debug complex virtual networks and gain visibility into their VMs, by monitoring the
-        network traffic associated with them. TaaS honors tenant boundaries and its mirror
-        sessions are capable of spanning across multiple compute and network nodes. It serves
-        as an essential infrastructure component that can be utilized for supplying data to a
-        variety of network analytics and security applications (e.g. IDS).
-
-        * Free software: Apache license
-        * API Reference: https://github.com/openstack/tap-as-a-service/blob/master/API_REFERENCE.rst
-        * Source: https://git.openstack.org/cgit/openstack/tap-as-a-service
-        * Bugs: https://bugs.launchpad.net/tap-as-a-service
-
-        For installing Tap-as-a-Service with Devstack please read the INSTALL.rst file
-
-
-Platform: UNKNOWN
-Classifier: Environment :: OpenStack
-Classifier: Intended Audience :: Information Technology
-Classifier: Intended Audience :: System Administrators
-Classifier: License :: OSI Approved :: Apache Software License
-Classifier: Operating System :: POSIX :: Linux
-Classifier: Programming Language :: Python
-Classifier: Programming Language :: Python :: 2
-Classifier: Programming Language :: Python :: 2.7
-Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.4
-Classifier: Programming Language :: Python :: 3.5
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/requirements.txt neutron-taas-2.0.0/requirements.txt
--- neutron-taas-1.0.1~git20170522.e15cbf3/requirements.txt 2017-05-22 09:22:02.000000000 +0000
+++ neutron-taas-2.0.0/requirements.txt 2017-08-10 19:02:30.000000000 +0000
@@ -2,5 +2,5 @@
 # of appearance. Changing the order has an impact on the overall integration
 # process, which may cause wedges in the gate later.
 
-pbr>=2.0.0 # Apache-2.0
-Babel>=2.3.4 # BSD
+pbr!=2.1.0,>=2.0.0 # Apache-2.0
+Babel!=2.4.0,>=2.3.4 # BSD
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/setup.cfg neutron-taas-2.0.0/setup.cfg
--- neutron-taas-1.0.1~git20170522.e15cbf3/setup.cfg 2017-05-22 09:31:38.000000000 +0000
+++ neutron-taas-2.0.0/setup.cfg 2017-08-10 19:02:30.000000000 +0000
@@ -1,27 +1,26 @@
 [metadata]
 name = tap-as-a-service
 summary = Tap-as-a-Service (TaaS) is an extension to the OpenStack network service (Neutron), it provides remote port mirroring capability for tenant virtual networks.
-description-file =
-    README.rst
+description-file = 
+    README.rst
 author = OpenStack
 author-email = openstack-dev@lists.openstack.org
 home-page = http://www.openstack.org/
-classifier =
-    Environment :: OpenStack
-    Intended Audience :: Information Technology
-    Intended Audience :: System Administrators
-    License :: OSI Approved :: Apache Software License
-    Operating System :: POSIX :: Linux
-    Programming Language :: Python
-    Programming Language :: Python :: 2
-    Programming Language :: Python :: 2.7
-    Programming Language :: Python :: 3
-    Programming Language :: Python :: 3.4
-    Programming Language :: Python :: 3.5
+classifier = 
+    Environment :: OpenStack
+    Intended Audience :: Information Technology
+    Intended Audience :: System Administrators
+    License :: OSI Approved :: Apache Software License
+    Operating System :: POSIX :: Linux
+    Programming Language :: Python
+    Programming Language :: Python :: 2
+    Programming Language :: Python :: 2.7
+    Programming Language :: Python :: 3
+    Programming Language :: Python :: 3.5
 
 [files]
-packages =
-    neutron_taas
+packages = 
+    neutron_taas
 
 [build_sphinx]
 source-dir = doc/source
@@ -46,23 +45,21 @@
 output_file = neutron_taas/locale/neutron_taas.pot
 
 [entry_points]
-console_scripts =
-    neutron-taas-openvswitch-agent = neutron_taas.services.taas.agents.ovs.agent:main
-neutron.service_plugins =
-    taas = neutron_taas.services.taas.taas_plugin:TaasPlugin
-neutron.db.alembic_migrations =
-    tap-as-a-service = neutron_taas.db.migration:alembic_migration
-tempest.test_plugins =
-    tap-as-a-service = neutron_taas.tests.tempest_plugin.plugin:NeutronTaaSPlugin
-neutronclient.extension =
-    tap_service = neutron_taas.taas_client.tapservice
-    tap_flow = neutron_taas.taas_client.tapflow
+neutron.agent.l2.extensions = 
+    taas = neutron_taas.services.taas.agents.extensions.taas:TaasAgentExtension
+neutron_taas.taas.agent_drivers = 
+    ovs = neutron_taas.services.taas.drivers.linux.ovs_taas:OvsTaasDriver
+neutron.service_plugins = 
+    taas = neutron_taas.services.taas.taas_plugin:TaasPlugin
+neutron.db.alembic_migrations = 
+    tap-as-a-service = neutron_taas.db.migration:alembic_migration
+tempest.test_plugins = 
+    tap-as-a-service = neutron_taas.tests.tempest_plugin.plugin:NeutronTaaSPlugin
+neutronclient.extension = 
+    tap_service = neutron_taas.taas_client.tapservice
+    tap_flow = neutron_taas.taas_client.tapflow
 
 [pbr]
 autodoc_index_modules = True
 warnerrors = True
-
-[egg_info]
-tag_build = 
-tag_date = 0
-
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/dependency_links.txt neutron-taas-2.0.0/tap_as_a_service.egg-info/dependency_links.txt
--- neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/dependency_links.txt 2017-05-22 09:31:38.000000000 +0000
+++ neutron-taas-2.0.0/tap_as_a_service.egg-info/dependency_links.txt 1970-01-01 00:00:00.000000000 +0000
@@ -1 +0,0 @@
-
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/entry_points.txt neutron-taas-2.0.0/tap_as_a_service.egg-info/entry_points.txt
--- neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/entry_points.txt 2017-05-22 09:31:38.000000000 +0000
+++ neutron-taas-2.0.0/tap_as_a_service.egg-info/entry_points.txt 1970-01-01 00:00:00.000000000 +0000
@@ -1,16 +0,0 @@
-[console_scripts]
-neutron-taas-openvswitch-agent = neutron_taas.services.taas.agents.ovs.agent:main
-
-[neutron.db.alembic_migrations]
-tap-as-a-service = neutron_taas.db.migration:alembic_migration
-
-[neutron.service_plugins]
-taas = neutron_taas.services.taas.taas_plugin:TaasPlugin
-
-[neutronclient.extension]
-tap_flow = neutron_taas.taas_client.tapflow
-tap_service = neutron_taas.taas_client.tapservice
-
-[tempest.test_plugins]
-tap-as-a-service = neutron_taas.tests.tempest_plugin.plugin:NeutronTaaSPlugin
-
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/not-zip-safe neutron-taas-2.0.0/tap_as_a_service.egg-info/not-zip-safe
--- neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/not-zip-safe 2017-03-09 16:04:17.000000000 +0000
+++ neutron-taas-2.0.0/tap_as_a_service.egg-info/not-zip-safe 1970-01-01 00:00:00.000000000 +0000
@@ -1 +0,0 @@
-
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/pbr.json neutron-taas-2.0.0/tap_as_a_service.egg-info/pbr.json
--- neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/pbr.json 2017-05-22 09:31:38.000000000 +0000
+++ neutron-taas-2.0.0/tap_as_a_service.egg-info/pbr.json 1970-01-01 00:00:00.000000000 +0000
@@ -1 +0,0 @@
-{"git_version": "e15cbf3", "is_release": false}
\ No newline at end of file
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/PKG-INFO neutron-taas-2.0.0/tap_as_a_service.egg-info/PKG-INFO
--- neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/PKG-INFO 2017-05-22 09:31:38.000000000 +0000
+++ neutron-taas-2.0.0/tap_as_a_service.egg-info/PKG-INFO 1970-01-01 00:00:00.000000000 +0000
@@ -1,46 +0,0 @@
-Metadata-Version: 1.1
-Name: tap-as-a-service
-Version: 1.0.1.dev15
-Summary: Tap-as-a-Service (TaaS) is an extension to the OpenStack network service (Neutron), it provides remote port mirroring capability for tenant virtual networks.
-Home-page: http://www.openstack.org/
-Author: OpenStack
-Author-email: openstack-dev@lists.openstack.org
-License: UNKNOWN
-Description: ================
-        Tap as a Service
-        ================
-        Tap-as-a-Service (TaaS) is an extension to the OpenStack network service (Neutron).
-        It provides remote port mirroring capability for tenant virtual networks.
-
-        Port mirroring involves sending a copy of packets entering and/or leaving one
-        port to another port, which is usually different from the original destinations
-        of the packets being mirrored.
-
-
-        This service has been primarily designed to help tenants (or the cloud administrator)
-        debug complex virtual networks and gain visibility into their VMs, by monitoring the
-        network traffic associated with them. TaaS honors tenant boundaries and its mirror
-        sessions are capable of spanning across multiple compute and network nodes. It serves
-        as an essential infrastructure component that can be utilized for supplying data to a
-        variety of network analytics and security applications (e.g. IDS).
-
-        * Free software: Apache license
-        * API Reference: https://github.com/openstack/tap-as-a-service/blob/master/API_REFERENCE.rst
-        * Source: https://git.openstack.org/cgit/openstack/tap-as-a-service
-        * Bugs: https://bugs.launchpad.net/tap-as-a-service
-
-        For installing Tap-as-a-Service with Devstack please read the INSTALL.rst file
-
-
-Platform: UNKNOWN
-Classifier: Environment :: OpenStack
-Classifier: Intended Audience :: Information Technology
-Classifier: Intended Audience :: System Administrators
-Classifier: License :: OSI Approved :: Apache Software License
-Classifier: Operating System :: POSIX :: Linux
-Classifier: Programming Language :: Python
-Classifier: Programming Language :: Python :: 2
-Classifier: Programming Language :: Python :: 2.7
-Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.4
-Classifier: Programming Language :: Python :: 3.5
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/requires.txt neutron-taas-2.0.0/tap_as_a_service.egg-info/requires.txt
--- neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/requires.txt 2017-05-22 09:31:38.000000000 +0000
+++ neutron-taas-2.0.0/tap_as_a_service.egg-info/requires.txt 1970-01-01 00:00:00.000000000 +0000
@@ -1,2 +0,0 @@
-Babel>=2.3.4
-pbr>=2.0.0
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/SOURCES.txt neutron-taas-2.0.0/tap_as_a_service.egg-info/SOURCES.txt
--- neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/SOURCES.txt 2017-05-22 09:31:38.000000000 +0000
+++ neutron-taas-2.0.0/tap_as_a_service.egg-info/SOURCES.txt 1970-01-01 00:00:00.000000000 +0000
@@ -1,115 +0,0 @@
-.coveragerc
-.mailmap
-.testr.conf
-API_REFERENCE.rst
-AUTHORS
-CONTRIBUTING.rst
-ChangeLog
-HACKING.rst
-INSTALL.rst
-LICENSE
-MANIFEST.in
-README.rst
-babel.cfg
-openstack-common.conf
-requirements.txt
-setup.cfg
-setup.py
-test-requirements.txt
-tox.ini
-devstack/README.rst
-devstack/devstackgaterc
-devstack/plugin.sh
-devstack/settings
-doc/source/api_reference.rst
-doc/source/conf.py
-doc/source/contributing.rst
-doc/source/index.rst
-doc/source/installation.rst
-doc/source/presentations.rst
-doc/source/readme.rst
-doc/source/specs
-etc/taas.ini
-etc/taas_plugin.ini
-neutron_taas/__init__.py
-neutron_taas/_i18n.py
-neutron_taas/common/__init__.py
-neutron_taas/common/constants.py
-neutron_taas/common/topics.py
-neutron_taas/db/__init__.py
-neutron_taas/db/head.py
-neutron_taas/db/taas_db.py
-neutron_taas/db/migration/__init__.py
-neutron_taas/db/migration/taas_init_ops.py
-neutron_taas/db/migration/alembic_migration/README
-neutron_taas/db/migration/alembic_migration/__init__.py
-neutron_taas/db/migration/alembic_migration/env.py
-neutron_taas/db/migration/alembic_migration/script.py.mako
-neutron_taas/db/migration/alembic_migration/versions/CONTRACT_HEAD
-neutron_taas/db/migration/alembic_migration/versions/EXPAND_HEAD
-neutron_taas/db/migration/alembic_migration/versions/start_neutron_taas.py
-neutron_taas/db/migration/alembic_migration/versions/newton/contract/1817af933379_remove_network_id_from_tap_service.py
-neutron_taas/db/migration/alembic_migration/versions/newton/contract/2ecce0368a62_add_foreign_key_constraint_on_tap_id_association.py
-neutron_taas/db/migration/alembic_migration/versions/newton/contract/4086b3cffc01_rename_tenant_to_project.py
-neutron_taas/db/migration/alembic_migration/versions/newton/contract/80c85b675b6e_initial_newton_no_op_contract_script.py
-neutron_taas/db/migration/alembic_migration/versions/newton/expand/04625466c6fa_initial_newton_no_op_expand_script.py
-neutron_taas/db/migration/alembic_migration/versions/newton/expand/fddbdec8711a_add_status.py
-neutron_taas/extensions/__init__.py
-neutron_taas/extensions/taas.py
-neutron_taas/services/__init__.py
-neutron_taas/services/taas/__init__.py
-neutron_taas/services/taas/taas_plugin.py
-neutron_taas/services/taas/agents/__init__.py
-neutron_taas/services/taas/agents/taas_agent_api.py
-neutron_taas/services/taas/agents/ovs/__init__.py
-neutron_taas/services/taas/agents/ovs/agent.py
-neutron_taas/services/taas/agents/ovs/taas_ovs_agent.py
-neutron_taas/services/taas/drivers/__init__.py
-neutron_taas/services/taas/drivers/taas_base.py
-neutron_taas/services/taas/drivers/linux/__init__.py
-neutron_taas/services/taas/drivers/linux/ovs_constants.py
-neutron_taas/services/taas/drivers/linux/ovs_taas.py
-neutron_taas/services/taas/drivers/linux/ovs_utils.py
-neutron_taas/services/taas/service_drivers/__init__.py
-neutron_taas/services/taas/service_drivers/service_driver_context.py
-neutron_taas/services/taas/service_drivers/taas_agent_api.py
-neutron_taas/services/taas/service_drivers/taas_rpc.py
-neutron_taas/taas_client/__init__.py
-neutron_taas/taas_client/tapflow.py
-neutron_taas/taas_client/tapservice.py
-neutron_taas/tests/__init__.py
-neutron_taas/tests/tempest_plugin/__init__.py
-neutron_taas/tests/tempest_plugin/plugin.py
-neutron_taas/tests/tempest_plugin/services/__init__.py
-neutron_taas/tests/tempest_plugin/services/client.py
-neutron_taas/tests/tempest_plugin/tests/__init__.py
-neutron_taas/tests/tempest_plugin/tests/taas_client.py
-neutron_taas/tests/tempest_plugin/tests/api/__init__.py
-neutron_taas/tests/tempest_plugin/tests/api/base.py
-neutron_taas/tests/tempest_plugin/tests/api/test_taas.py
-neutron_taas/tests/tempest_plugin/tests/scenario/__init__.py
-neutron_taas/tests/tempest_plugin/tests/scenario/base.py
-neutron_taas/tests/tempest_plugin/tests/scenario/test_taas.py
-neutron_taas/tests/unit/__init__.py
-neutron_taas/tests/unit/db/__init__.py
-neutron_taas/tests/unit/db/test_migrations.py
-neutron_taas/tests/unit/db/test_taas_db.py
-neutron_taas/tests/unit/services/__init__.py
-neutron_taas/tests/unit/services/taas/__init__.py
-neutron_taas/tests/unit/services/taas/test_taas_plugin.py
-neutron_taas/tests/unit/taas_client/__init__.py
-neutron_taas/tests/unit/taas_client/test_cli20_tapflow.py
-neutron_taas/tests/unit/taas_client/test_cli20_tapservice.py
-specs/index.rst
-specs/mitaka/tap-as-a-service.rst
-tap_as_a_service.egg-info/PKG-INFO
-tap_as_a_service.egg-info/SOURCES.txt
-tap_as_a_service.egg-info/dependency_links.txt
-tap_as_a_service.egg-info/entry_points.txt
-tap_as_a_service.egg-info/not-zip-safe
-tap_as_a_service.egg-info/pbr.json
-tap_as_a_service.egg-info/requires.txt
-tap_as_a_service.egg-info/top_level.txt
-tools/test-setup.sh
-tools/tox_install.sh
-tools/tox_install_project.sh
\ No newline at end of file
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/top_level.txt neutron-taas-2.0.0/tap_as_a_service.egg-info/top_level.txt
--- neutron-taas-1.0.1~git20170522.e15cbf3/tap_as_a_service.egg-info/top_level.txt 2017-05-22 09:31:38.000000000 +0000
+++ neutron-taas-2.0.0/tap_as_a_service.egg-info/top_level.txt 1970-01-01 00:00:00.000000000 +0000
@@ -1 +0,0 @@
-neutron_taas
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/test-requirements.txt neutron-taas-2.0.0/test-requirements.txt
--- neutron-taas-1.0.1~git20170522.e15cbf3/test-requirements.txt 2017-05-22 09:22:02.000000000 +0000
+++ neutron-taas-2.0.0/test-requirements.txt 2017-08-10 19:02:30.000000000 +0000
@@ -4,9 +4,9 @@
 
 hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
 
-coverage>=4.0 # Apache-2.0
+coverage!=4.4,>=4.0 # Apache-2.0
 python-subunit>=0.0.18 # Apache-2.0/BSD
-sphinx>=1.5.1 # BSD
+sphinx>=1.6.2 # BSD
 psycopg2>=2.5 # LGPL/ZPL
 PyMySQL>=0.7.6 # MIT License
 oslosphinx>=4.7.0 # Apache-2.0
diff -Nru neutron-taas-1.0.1~git20170522.e15cbf3/tox.ini neutron-taas-2.0.0/tox.ini
--- neutron-taas-1.0.1~git20170522.e15cbf3/tox.ini 2017-03-09 16:04:04.000000000 +0000
+++ neutron-taas-2.0.0/tox.ini 2017-08-10 19:02:30.000000000 +0000
@@ -1,5 +1,5 @@
 [tox]
-envlist = docs,py35,py34,py27,pep8
+envlist = docs,py35,py27,pep8
 minversion = 1.8
 skipsdist = True