LBaaS TLS with non-admin tenant is user unfriendly

Bug #1592612 reported by Jiahao liang
This bug affects 11 people
Affects     Status      Importance   Assigned to            Milestone
Barbican    Won't Fix   Wishlist     Douglas Mendizábal
octavia     Invalid     High         Unassigned

Bug Description

I went through https://wiki.openstack.org/wiki/Network/LBaaS/docs/how-to-create-tls-loadbalancer with devstack, with all branches set to stable/mitaka.

With the user and tenant set to "admin admin", the workflow passed.
It failed when I set the user and tenant to "admin demo" and reran all the steps.

Steps to reproduce:
1. source ~/devstack/openrc admin demo
2. barbican secret store --payload-content-type='text/plain' --name='certificate' --payload="$(cat server.crt)"
3. barbican secret store --payload-content-type='text/plain' --name='private_key' --payload="$(cat server.key)"
4. barbican secret container create --name='tls_container' --type='certificate' --secret="certificate=$(barbican secret list | awk '/ certificate / {print $2}')" --secret="private_key=$(barbican secret list | awk '/ private_key / {print $2}')"
5. neutron lbaas-loadbalancer-create $(neutron subnet-list | awk '/ private-subnet / {print $2}') --name lb1
6. neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(barbican secret container list | awk '/ tls_container / {print $2}')
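As a quick sanity check before step 6, the demo-scoped credentials can be used to fetch the container and secrets directly (a rough sketch; exact subcommands may vary by barbican client version):

  # with the demo-scoped credentials still sourced
  barbican secret container get $(barbican secret container list | awk '/ tls_container / {print $2}')
  barbican secret get $(barbican secret list | awk '/ certificate / {print $2}')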

The error message I got is:
$ neutron lbaas-listener-create --loadbalancer 738689bd-b54e-485e-b742-57bd6e812270 --protocol-port 443 --protocol TERMINATED_HTTPS --name listener2 --default-tls-container=$(barbican secret container list | awk '/ tls_container / {print $2}')
WARNING:barbicanclient.barbican:This Barbican CLI interface has been deprecated and will be removed in the O release. Please use the openstack unified client instead.
DEBUG:stevedore.extension:found extension EntryPoint.parse('table = cliff.formatters.table:TableFormatter')
DEBUG:stevedore.extension:found extension EntryPoint.parse('json = cliff.formatters.json_format:JSONFormatter')
DEBUG:stevedore.extension:found extension EntryPoint.parse('csv = cliff.formatters.commaseparated:CSVLister')
DEBUG:stevedore.extension:found extension EntryPoint.parse('value = cliff.formatters.value:ValueFormatter')
DEBUG:stevedore.extension:found extension EntryPoint.parse('yaml = cliff.formatters.yaml_format:YAMLFormatter')
DEBUG:barbicanclient.client:Creating Client object
DEBUG:barbicanclient.containers:Listing containers - offset 0 limit 10 name None type None
DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://192.168.100.148:5000/v2.0/tokens
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 192.168.100.148
Starting new HTTP connection (1): 192.168.100.148
DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 3924
DEBUG:keystoneclient.session:REQ: curl -g -i -X GET http://192.168.100.148:9311 -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 192.168.100.148
Starting new HTTP connection (1): 192.168.100.148
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 300 353
DEBUG:keystoneclient.session:RESP: [300] Content-Length: 353 Content-Type: application/json; charset=UTF-8 Connection: close
RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2015-04-28T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.key-manager-v1+json"}], "id": "v1", "links": [{"href": "http://192.168.100.148:9311/v1/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}
DEBUG:keystoneclient.session:REQ: curl -g -i -X GET http://192.168.100.148:9311/v1/containers -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}203d7de65f6cfb1fb170437ae2da98fef35f0942"
INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: 192.168.100.148
Resetting dropped connection: 192.168.100.148
DEBUG:requests.packages.urllib3.connectionpool:"GET /v1/containers?limit=10&offset=0 HTTP/1.1" 200 585
DEBUG:keystoneclient.session:RESP: [200] Connection: close Content-Type: application/json; charset=UTF-8 Content-Length: 585 x-openstack-request-id: req-aa4bb861-3d1d-42c6-be3d-5d3935622043
RESP BODY: {"total": 1, "containers": [{"status": "ACTIVE", "updated": "2016-06-10T01:14:45", "name": "tls_container", "consumers": [], "created": "2016-06-10T01:14:45", "container_ref": "http://192.168.100.148:9311/v1/containers/4ca420a1-ed23-4e91-a08a-311dad3df801", "creator_id": "9ee7d4959bc74d2988d50e0e3a965c64", "secret_refs": [{"secret_ref": "http://192.168.100.148:9311/v1/secrets/c96944b3-174e-418f-8598-8979eafaa537", "name": "certificate"}, {"secret_ref": "http://192.168.100.148:9311/v1/secrets/2e25ad05-ecd6-43bd-95fa-046b9cbe2600", "name": "private_key"}], "type": "certificate"}]}
DEBUG:barbicanclient.client:Response status 200
DEBUG:barbicanclient.secrets:Getting secret - Secret href: http://192.168.100.148:9311/v1/secrets/2e25ad05-ecd6-43bd-95fa-046b9cbe2600
DEBUG:barbicanclient.secrets:Getting secret - Secret href: http://192.168.100.148:9311/v1/secrets/c96944b3-174e-418f-8598-8979eafaa537
TLS container http://192.168.100.148:9311/v1/containers/4ca420a1-ed23-4e91-a08a-311dad3df801 could not be found
Neutron server returns request_ids: ['req-82d53607-3596-4eeb-b4ac-b96d9f861dc0']

============================

The related barbican-svc log:
2016-06-10 12:25:26.135 INFO barbican.api.controllers.containers [req-e7b592d4-376a-4729-ad20-5dfe9e93b6a4 d2d0cb2842eb450ebe032d70bcaeeb3b 9b07426f96574e27a18e596fb15ee5ec] Retrieved container list for project: 9b07426f96574e27a18e596fb15ee5ec
2016-06-10 12:25:26.137 INFO barbican.api.middleware.context [req-e7b592d4-376a-4729-ad20-5dfe9e93b6a4 d2d0cb2842eb450ebe032d70bcaeeb3b 9b07426f96574e27a18e596fb15ee5ec] Processed request: 200 OK - GET http://192.168.100.149:9311/v1/containers?limit=10&offset=0
{address space usage: 215629824 bytes/205MB} {rss usage: 100933632 bytes/96MB} [pid: 4671|app: 0|req: 117/117] 192.168.100.149 () {30 vars in 465 bytes} [Fri Jun 10 12:25:25 2016] GET /v1/containers?limit=10&offset=0 => generated 585 bytes in 155 msecs (HTTP/1.1 200) 4 headers in 172 bytes (1 switches on core 0)
2016-06-10 12:25:28.183 ERROR barbican.model.repositories [req-4aebc499-b92d-4ab1-8b0e-52f12ddabdd2 d2d0cb2842eb450ebe032d70bcaeeb3b d24f00aff0b24f4ea7f37d193129d532] Not found for 8daec3a0-1582-4d59-ba04-be11d0c2d036
2016-06-10 12:25:28.183 TRACE barbican.model.repositories Traceback (most recent call last):
2016-06-10 12:25:28.183 TRACE barbican.model.repositories File "/opt/stack/barbican/barbican/model/repositories.py", line 358, in get
2016-06-10 12:25:28.183 TRACE barbican.model.repositories entity = query.one()
2016-06-10 12:25:28.183 TRACE barbican.model.repositories File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2699, in one
2016-06-10 12:25:28.183 TRACE barbican.model.repositories raise orm_exc.NoResultFound("No row was found for one()")
2016-06-10 12:25:28.183 TRACE barbican.model.repositories NoResultFound: No row was found for one()
2016-06-10 12:25:28.183 TRACE barbican.model.repositories
2016-06-10 12:25:28.184 ERROR barbican.api.controllers [req-4aebc499-b92d-4ab1-8b0e-52f12ddabdd2 d2d0cb2842eb450ebe032d70bcaeeb3b d24f00aff0b24f4ea7f37d193129d532] Webob error seen
2016-06-10 12:25:28.184 TRACE barbican.api.controllers Traceback (most recent call last):
2016-06-10 12:25:28.184 TRACE barbican.api.controllers File "/opt/stack/barbican/barbican/api/controllers/__init__.py", line 102, in handler
2016-06-10 12:25:28.184 TRACE barbican.api.controllers return fn(inst, *args, **kwargs)
2016-06-10 12:25:28.184 TRACE barbican.api.controllers File "/opt/stack/barbican/barbican/api/controllers/__init__.py", line 88, in enforcer
2016-06-10 12:25:28.184 TRACE barbican.api.controllers return fn(inst, *args, **kwargs)
2016-06-10 12:25:28.184 TRACE barbican.api.controllers File "/opt/stack/barbican/barbican/api/controllers/__init__.py", line 144, in content_types_enforcer
2016-06-10 12:25:28.184 TRACE barbican.api.controllers return fn(inst, *args, **kwargs)
2016-06-10 12:25:28.184 TRACE barbican.api.controllers File "/opt/stack/barbican/barbican/api/controllers/consumers.py", line 143, in on_post
2016-06-10 12:25:28.184 TRACE barbican.api.controllers controllers.containers.container_not_found()
2016-06-10 12:25:28.184 TRACE barbican.api.controllers File "/opt/stack/barbican/barbican/api/controllers/containers.py", line 36, in container_not_found
2016-06-10 12:25:28.184 TRACE barbican.api.controllers pecan.abort(404, u._('Not Found. Sorry but your container is in '
2016-06-10 12:25:28.184 TRACE barbican.api.controllers File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 141, in abort
2016-06-10 12:25:28.184 TRACE barbican.api.controllers exec('raise webob_exception, None, traceback')
2016-06-10 12:25:28.184 TRACE barbican.api.controllers File "/opt/stack/barbican/barbican/api/controllers/consumers.py", line 141, in on_post
2016-06-10 12:25:28.184 TRACE barbican.api.controllers external_project_id)
2016-06-10 12:25:28.184 TRACE barbican.api.controllers File "/opt/stack/barbican/barbican/model/repositories.py", line 364, in get
2016-06-10 12:25:28.184 TRACE barbican.api.controllers _raise_entity_not_found(self._do_entity_name(), entity_id)
2016-06-10 12:25:28.184 TRACE barbican.api.controllers File "/opt/stack/barbican/barbican/model/repositories.py", line 2250, in _raise_entity_not_found
2016-06-10 12:25:28.184 TRACE barbican.api.controllers id=entity_id))
2016-06-10 12:25:28.184 TRACE barbican.api.controllers HTTPNotFound: Not Found. Sorry but your container is in another castle.
2016-06-10 12:25:28.184 TRACE barbican.api.controllers
2016-06-10 12:25:28.187 INFO barbican.api.middleware.context [req-4aebc499-b92d-4ab1-8b0e-52f12ddabdd2 d2d0cb2842eb450ebe032d70bcaeeb3b d24f00aff0b24f4ea7f37d193129d532] Processed request: 404 Not Found - POST http://192.168.100.149:9311/v1/containers/8daec3a0-1582-4d59-ba04-be11d0c2d036/consumers/

Revision history for this message
Jiahao liang (jiahao.liang) wrote :

For your reference, my local.conf is as follows:

[[local|localrc]]

# The name of the RECLONE environment variable is a bit misleading. It doesn't actually
# reclone repositories, rather it uses git fetch to make sure the repos are current.

RECLONE=True

# Load the external LBaaS plugin.

enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas stable/mitaka
enable_plugin octavia https://git.openstack.org/openstack/octavia stable/mitaka
#enable_plugin octavia /opt/stack/octavia hot-fix
enable_plugin barbican https://git.openstack.org/openstack/barbican stable/mitaka
enable_plugin neutron-lbaas-dashboard https://git.openstack.org/openstack/neutron-lbaas-dashboard stable/mitaka

GLANCE_BRANCH=stable/mitaka
HORIZON_BRANCH=stable/mitaka
KEYSTONE_BRANCH=stable/mitaka
KEYSTONECLIENT_BRANCH=stable/mitaka
NOVA_BRANCH=stable/mitaka
NOVACLIENT_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
HEAT_BRANCH=stable/mitaka
CEILOMETER_BRANCH=stable/mitaka
SWIFT_BRANCH=stable/mitaka
CINDER_BRANCH=stable/mitaka

LIBS_FROM_GIT+=python-neutronclient
DATABASE_PASSWORD=password
ADMIN_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
RABBIT_PASSWORD=password

# Enable Logging
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=$DEST/logs

# Pre-requisites
enable_service rabbit
enable_service mysql
enable_service key

# Horizon
enable_service horizon

# Nova
enable_service n-api
enable_service n-crt
enable_service n-cpu
enable_service n-cond
enable_service n-sch

# Glance
enable_service g-api
enable_service g-reg

# Neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta

# Cinder
enable_service c-api
enable_service c-vol
enable_service c-sch

# LBaaS V2 and Octavia
enable_service q-lbaasv2
enable_service octavia
enable_service o-cw
enable_service o-hm
enable_service o-hk
enable_service o-api
OCTAVIA_MGMT_SUBNET="192.168.26.0/24"
OCTAVIA_MGMT_SUBNET_START="192.168.26.2"
OCTAVIA_MGMT_SUBNET_END="192.168.26.200"
# enable DVR

Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_DVR_MODE=dvr_snat

IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"

LOGFILE=$DEST/logs/stack.sh.log

# Old log files are automatically removed after 7 days to keep things neat. Change
# the number of days by setting ``LOGDAYS``.
LOGDAYS=2

Revision history for this message
Adam Harwell (adam-harwell) wrote :

This is actually caused by: https://bugs.launchpad.net/barbican/+bug/1519170
Marked as duplicate, reviving that CR (which should be somewhat close to done).

Revision history for this message
Jiahao liang (jiahao.liang) wrote :

Thank you, Adam, for confirming that.

Revision history for this message
Praveen Yalagandula (ypraveen-5) wrote :

I ran into this too and this looks like a show-stopper for using barbican. Is there a workaround?

Revision history for this message
Jiahao liang (jiahao.liang) wrote :

Hi guys,

This bug was previously marked as a duplicate of bug #1519170.

However, LBaaS TERMINATED_HTTPS is still not working for a non-admin tenant even with bug #1519170 fixed.

I am going to re-raise this bug, and I assume that bug #1497410 and bug #1612588 will also be affected by the same issue.

My test environment is barbican (master); all other components are from the stable/mitaka branch.

The error I got was:
# source /home/stack/devstack/openrc admin demo
# neutron lbaas-listener-create --loadbalancer 40e04e16-4d84-46d8-8dcd-6717a734d37e --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=http://192.168.200.43:9311/v1/containers/2817e144-9f11-4bed-a14e-0390edf89659
TLS container http://192.168.200.43:9311/v1/containers/2817e144-9f11-4bed-a14e-0390edf89659 is invalid. Forbidden
Neutron server returns request_ids: ['req-6116a004-c17e-4a50-9ad6-f17380ce011a']

Related q-svc.log and barbican-svc.log will be attached in the next comments.

I set a few breakpoints to trace the issue, and this is where I found the final error:
traceback neutron_lbaas/services/loadbalancer/plugin.py:730 self._validate_tls(listener)
traceback neutron_lbaas/services/loadbalancer/plugin.py:657 cert_parser.validate_cert(cert_container.get_certificate(),
traceback neutron_lbaas/common/cert_manager/barbican_cert_manager.py:45 return self._cert_container.certificate.payload
traceback python2.7/site-packages/barbicanclient/secrets.py:192 self._fetch_payload()
traceback python2.7/site-packages/barbicanclient/secrets.py:260 if not self.payload_content_type and not self.content_types:
traceback python2.7/site-packages/barbicanclient/secrets.py:34 self._fill_lazy_properties()
traceback python2.7/site-packages/barbicanclient/secrets.py:416 result = self._api.get(self._secret_ref)
Processed request: 403 Forbidden - GET http://192.168.200.43:9311/v1/secrets/469fe858-44cc-431d-9c7c-a6d7936ed56c/payload

==================================
I dumped the request and found that the X-Auth-Token header is actually the token for the admin tenant instead of the demo tenant.
Also, in /etc/barbican/policy.json, if I change "secret:get" to "rule:all_users", the issue goes away.

I believe some work needs to be done either in the barbican client or in policy.json.
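As a less drastic interim workaround than loosening policy.json, the secret owner can explicitly grant the configured service user read access via the Barbican ACL support in the unified client; a rough sketch (the service user ID is a placeholder, and exact flags depend on the client version):

  # run as the demo user that owns the container and secrets
  openstack acl user add --user <neutron/octavia-service-user-id> \
      http://192.168.200.43:9311/v1/containers/2817e144-9f11-4bed-a14e-0390edf89659
  openstack acl user add --user <neutron/octavia-service-user-id> \
      http://192.168.200.43:9311/v1/secrets/469fe858-44cc-431d-9c7c-a6d7936ed56c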

Revision history for this message
Jiahao liang (jiahao.liang) wrote :

q-svc.log:

2016-09-20 18:16:07.544 25054 WARNING oslo_config.cfg [req-82cbc8b5-70af-496c-8e37-e74275567452 admin -] Option "policy_dirs" from group "DEFAULT" is deprecated. Use option "policy_dirs" from group "oslo_policy".
2-b890-48c4-9926-3c53f167b5e1 admin -] Request body: {u'listener': {u'protocol': u'TERMINATED_HTTPS', u'name': u'listener1', u'default_tls_container_ref': u'http://192.168.200.43:9311/v1/containers/2817e144-9f11-4bed-a14e-0390edf89659', u'admin_state_up': True, u'protocol_port': u'443', u'loadbalancer_id': u'40e04e16-4d84-46d8-8dcd-6717a734d37e'}} prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:660
2016-09-20 18:16:08.070 25054 INFO neutron.quota [req-9ca04dda-b890-48c4-9926-3c53f167b5e1 admin -] Loaded quota_driver: <neutron.db.quota.driver.DbQuotaDriver object at 0x7ffd1042d190>.
2016-09-20 18:16:08.078 25054 DEBUG neutron.db.quota.driver [req-9ca04dda-b890-48c4-9926-3c53f167b5e1 admin -] Resources network_service_policy,external_policy,policy_target,policy_rule_set,vpnservice,servicechain_spec,port,subnet,external_segment,network,ipsec_site_connection,floatingip,security_group_rule,service_profile,policy_rule,endpoint_group,ikepolicy,ipsecpolicy,policy_classifier,l7policy,subnetpool,servicechain_node,listener,nat_pool,policy_target_group,l3_policy,policy_action,servicechain_instance,healthmonitor,security_group,router,l2_policy have unlimited quota limit. It is not required to calculated headroom make_reservation /opt/stack/neutron/neutron/db/quota/driver.py:170
2016-09-20 18:16:08.078 25054 DEBUG neutron.db.quota.driver [req-9ca04dda-b890-48c4-9926-3c53f167b5e1 admin -] Resources network_service_policy,external_policy,policy_target,policy_rule_set,vpnservice,servicechain_spec,port,subnet,external_segment,network,ipsec_site_connection,floatingip,security_group_rule,service_profile,policy_rule,endpoint_group,ikepolicy,ipsecpolicy,policy_classifier,l7policy,subnetpool,servicechain_node,listener,nat_pool,policy_target_group,l3_policy,policy_action,servicechain_instance,healthmonitor,security_group,router,l2_policy have unlimited quota limit. It is not required to calculated headroom make_reservation /opt/stack/neutron/neutron/db/quota/driver.py:170
2016-09-20 18:16:08.246 25054 DEBUG barbicanclient.client [req-9ca04dda-b890-48c4-9926-3c53f167b5e1 admin -] Creating Client object get_barbican_client /opt/stack/neutron-lbaas/neutron_lbaas/common/cert_manager/barbican_auth/barbican_acl.py:43
2016-09-20 18:16:08.247 25054 INFO neutron_lbaas.common.cert_manager.barbican_cert_manager [req-9ca04dda-b890-48c4-9926-3c53f167b5e1 admin -] Loading certificate container http://192.168.200.43:9311/v1/containers/2817e144-9f11-4bed-a14e-0390edf89659 from Barbican.
2016-09-20 18:16:08.248 25054 DEBUG barbicanclient.containers [req-9ca04dda-b890-48c4-9926-3c53f167b5e1 admin -] Creating consumer registration for container http://192.168.200.43:9311/v1/containers/2817e144-9f11-4bed-a14e-0390edf89659 as lbaas: lbaas://RegionOne/loadbalancer/40e04e16-4d84-46d8-8dcd-6717a734d37e get_cert /opt/stack/neutron-lbaas/neutron_lbaas/common/cert_manager/barbican_cert_manager.py:180
2016-09-20 18:16:08.645 25054 DEBUG barbicanclient.client...


Revision history for this message
Jiahao liang (jiahao.liang) wrote :

barbican-svc.log:

2016-09-20 18:16:08.589 4594 INFO barbican.api.controllers.consumers [req-a6d0b546-588a-4b66-b4e5-46e383e212d2 57600ccd1a31429a897d45709a3c1035 18b17f7d621b44c083a04518d256ed3d - default default] Created a consumer for project: 18b17f7d621b44c083a04518d256ed3d
2016-09-20 18:16:08.643 4594 INFO barbican.api.middleware.context [req-a6d0b546-588a-4b66-b4e5-46e383e212d2 57600ccd1a31429a897d45709a3c1035 18b17f7d621b44c083a04518d256ed3d - default default] Processed request: 200 OK - POST http://192.168.200.43:9311/v1/containers/2817e144-9f11-4bed-a14e-0390edf89659/consumers/
{address space usage: 220184576 bytes/209MB} {rss usage: 105312256 bytes/100MB} [pid: 4594|app: 0|req: 103/103] 192.168.200.43 () {34 vars in 609 bytes} [Tue Sep 20 18:16:08 2016] POST /v1/containers/2817e144-9f11-4bed-a14e-0390edf89659/consumers/ => generated 647 bytes in 211 msecs (HTTP/1.1 200) 5 headers in 270 bytes (1 switches on core 0)
2016-09-20 18:16:08.682 4594 INFO barbican.api.controllers.secrets [req-e89461f1-04be-4321-a379-251bc9b872db 57600ccd1a31429a897d45709a3c1035 18b17f7d621b44c083a04518d256ed3d - default default] Retrieved secret metadata for project: 18b17f7d621b44c083a04518d256ed3d
2016-09-20 18:16:08.684 4594 INFO barbican.api.middleware.context [req-e89461f1-04be-4321-a379-251bc9b872db 57600ccd1a31429a897d45709a3c1035 18b17f7d621b44c083a04518d256ed3d - default default] Processed request: 200 OK - GET http://192.168.200.43:9311/v1/secrets/469fe858-44cc-431d-9c7c-a6d7936ed56c
{address space usage: 220184576 bytes/209MB} {rss usage: 105312256 bytes/100MB} [pid: 4594|app: 0|req: 104/104] 192.168.200.43 () {30 vars in 541 bytes} [Tue Sep 20 18:16:08 2016] GET /v1/secrets/469fe858-44cc-431d-9c7c-a6d7936ed56c => generated 396 bytes in 36 msecs (HTTP/1.1 200) 4 headers in 172 bytes (1 switches on core 0)
2016-09-20 18:16:08.723 4594 ERROR barbican.api.controllers [req-a004f2a0-3373-4e30-a70c-71b160200c18 57600ccd1a31429a897d45709a3c1035 18b17f7d621b44c083a04518d256ed3d - default default] Secret payload retrieval attempt not allowed - please review your user/project privileges
2016-09-20 18:16:08.724 4594 INFO barbican.api.middleware.context [req-a004f2a0-3373-4e30-a70c-71b160200c18 57600ccd1a31429a897d45709a3c1035 18b17f7d621b44c083a04518d256ed3d - default default] Processed request: 403 Forbidden - GET http://192.168.200.43:9311/v1/secrets/469fe858-44cc-431d-9c7c-a6d7936ed56c/payload
{address space usage: 220581888 bytes/210MB} {rss usage: 105684992 bytes/100MB} [pid: 4594|app: 0|req: 105/105] 192.168.200.43 () {30 vars in 551 bytes} [Tue Sep 20 18:16:08 2016] GET /v1/secrets/469fe858-44cc-431d-9c7c-a6d7936ed56c/payload => generated 143 bytes in 37 msecs (HTTP/1.1 403) 4 headers in 179 bytes (1 switches on core 0)
2016-09-20 18:16:08.794 4594 INFO barbican.api.controllers.consumers [req-c7a9c5e3-77a3-4198-bacd-9941d5547a2f 57600ccd1a31429a897d45709a3c1035 18b17f7d621b44c083a04518d256ed3d - default default] Deleted a consumer for project: 18b17f7d621b44c083a04518d256ed3d
2016-09-20 18:16:08.835 4594 INFO barbican.api.middleware.context [req-c7a9c5e3-77a3-4198-bacd-9941d5547a2f 57600ccd1a31429a897d45709a3c1035 18b17f7d621b44c083a04518d256ed3d ...


summary: - TLS container could not be found
+ LBaaS TLS is not working with non-admin tenant
tags: added: lbaas
removed: lbaasv2
Revision history for this message
Joris S'heeren (jsheeren) wrote : Re: LBaaS TLS is not working with non-admin tenant

Hi,

I'm seeing the same thing as in comment #5 (https://bugs.launchpad.net/barbican/+bug/1592612/comments/5)

Neutron fails with: ERROR neutron.api.v2.resource TLSContainerInvalid: TLS container https://openstackserver:9311/v1/containers/be8757c8-0ce4-49d8-88ed-5195048257af is invalid. Forbidden

Neutron log can be found at http://paste.openstack.org/show/583101/

The barbican log can be found at http://paste.openstack.org/show/583102/

Using barbican 3.0.0.0rc2.dev5

Kind regards,
Joris

Revision history for this message
Jiahao liang (jiahao.liang) wrote :

I believe this bug is related to two other bugs that Octavia/Neutron LBaaS reported:

https://bugs.launchpad.net/barbican/+bug/1627389
https://bugs.launchpad.net/barbican/+bug/1627391

Revision history for this message
Johannes Grassler (jgr-launchpad) wrote :

The problem happens in these three places:

https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/agent/agent_manager.py#L300
common/cert_manager/barbican_auth/barbican_acl.py
https://github.com/openstack/neutron-lbaas/blob/stable/mitaka/neutron_lbaas/common/keystone.py#L90

The first place is where the user's context is not handed down any more. The second place is where a hardwired admin context comes into play (i.e. the one returned by get_session()). The third link is the function that creates the hardwired context. As you can see from the attached stack trace (excerpted from neutron-lbaas.log), the path to that location is a bit long.

The LBaaS plugin is stuck with whatever credentials it is configured to use; unless this configuration is changed from the default, that is the admin user/admin tenant. For this to work sanely in a multi-tenant environment, get_session() would have to return a session for the user who is currently talking to the Neutron API. This would render scary hacks (having the user explicitly grant the admin user access to their secret) such as the one in the initial post on

https://bugs.launchpad.net/barbican/+bug/1627391

entirely unnecessary.

A fix along those lines is probably a bit longish, since the user context would have to be relayed through every driver in the neutron_lbaas.driver namespace (I only checked the haproxy driver that we used for testing, but I presume it's no different for the others). I tested this on stable/mitaka, but from a cursory glance the problem is the same on master: the Barbican key manager still uses get_session(), and get_session() still uses the static credentials from neutron.conf.

If the fix as outlined is acceptable, I can work on it, but I probably won't get to it until a little after the summit (of course I'd be happy to backport the fix to stable/newton).

Revision history for this message
Dave McCowan (dave-mccowan) wrote :

@Johannes, thanks for the debugging and detailed description. I like your approach, but I'm concerned about backwards compatibility. Your suggestion might be the best solution if we were starting from scratch, but since there might be existing users that count on LBaaS using the configured service or admin user, this change may break them. Maybe we can add a configuration option for Ocata? Or maybe this can be a bug fix. Hopefully we can get some discussion going on this. I will bring it up at today's Barbican IRC meeting.

Revision history for this message
Johannes Grassler (jgr-launchpad) wrote :

Sorry, I missed the comment and thus the IRC meeting. I had a look at the log [0] and I'd like to clarify some things I probably didn't get across all that well:

I'm not proposing to save the user context; I propose to use it in neutron_lbaas. If a user creates a secret container, and that same user then creates a load balancer, neutron_lbaas can use that user's authenticated context, passed from the Neutron API/middleware to neutron_lbaas (rather than the admin user's context), to retrieve the container in question.

That being said, after reading the IRC log I realized that's not really feasible: it does not allow this kind of access to the container long after the authenticated session's token has expired:

  20:18:12 <sbalukoff> So, part of the problem here lies in the fact that LBaaS / Octavia may need
  to access the secrets at a time when the user isn't actively deploying a load balancer that
  requires a secret. That is to say, in a fail-over situation, LBaaS / Octavia will need to access
  the secret.

So there is in fact a need to have some sort of long-term access to the container. That can be done without ACLs, though. The "standard" (as in: multiple projects use it) mechanism for this is Keystone trusts. Namely, Heat (see [1] for an in-depth description) and Magnum use these for deferred operations. They're not the ideal least-privilege solution (in my book, that ideal would be a token that only allows access to _one_ specific resource), but they are far more restrictive than granting the Neutron service user access to any or all of a user's secrets.

As for further discussion at the Summit: I'll be there from Tuesday morning until early Friday afternoon, and I'd be happy to meet up. In the meantime I'll give some thought to implementing this in a backwards-compatible manner.

[0] http://eavesdrop.openstack.org/meetings/barbican/2016/barbican.2016-10-10-20.00.log.txt
[1] http://hardysteven.blogspot.de/2014/04/heat-auth-model-updates-part-1-trusts.html
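To make the trusts idea concrete, a minimal sketch of the delegation (project, role, and user IDs are placeholders, and exact flags depend on the openstackclient version):

  # the tenant user delegates a limited role to the LBaaS service user
  openstack trust create --project demo --role creator \
      <demo-user-id> <lbaas-service-user-id>
  # neutron_lbaas could then authenticate with the resulting trust_id
  # instead of the static admin credentials in neutron.conf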

Changed in octavia:
importance: Undecided → High
Revision history for this message
Michael Johnson (johnsom) wrote :

On a first pass through this, it seems like the right answer is to use the token received with the "neutron lbaas-listener-create" call that hands LBaaS the container references, and use that token to authorize the configured neutron/Octavia account to access the container.

Does this sound right?

Does the barbican ACL API support this use case?

Revision history for this message
Dave McCowan (dave-mccowan) wrote :

@michael, I like the idea, but it would be a change of behavior. This could be a config option, barbican_user: (admin or passthrough). When "admin" is set, barbican_admin_user and barbican_admin_password are configured (current behavior). When "passthrough" is set, the user's token from "neutron lbaas-listener-create" is passed through. In either case, the user will need to have permissions on the secrets. This can be done via a Barbican ACL or via the user having a role and project matching the secret.
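A hypothetical sketch of what that option might look like (nothing here exists today; the section name is an assumption based on where neutron-lbaas keeps its service credentials, and the option itself is only the proposal above):

  [service_auth]
  # proposed, not an existing option:
  #   admin       = keep current behavior (use the configured admin credentials)
  #   passthrough = reuse the caller's token from lbaas-listener-create
  barbican_user = passthrough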

Revision history for this message
Johannes Grassler (jgr-launchpad) wrote :

Using the token received with "neutron lbaas-listener-create" directly only solves part of the problem: access to the secret container upon load balancer creation. If and when a failover occurs (as I understood sbalukoff's comment on IRC, neutron_lbaas will handle that without user intervention), that token is likely to have expired. In this situation, a Keystone trust created using the initial token can still be used, though.

Changed in neutron:
status: New → Confirmed
importance: Undecided → High
Revision history for this message
Michael Johnson (johnsom) wrote :

My proposal was not to store and continue to use the user's keystone token, but to use it at the time they create the LBaaS listener (when they provide us the container references) to grant the LBaaS service account ACL access to the container(s) and their contents.

Using the user token beyond this initial setup would not work due to the expiration issues discussed above.

Revision history for this message
Subrahmanyam Ongole (osms69) wrote :

Would there be any security violations if we automatically grant ACL access to the LBaaS service account on behalf of the user? If we do this, the LBaaS account has access to all user secrets.

Revision history for this message
Stephen Balukoff (sbalukoff) wrote :

That would put the control of access to the secrets in the hands of Octavia itself. Michael can speak to whether he thinks this is a good idea, though I don't see anything wrong with it. Note that in order for Octavia to ensure that secrets are not shared across projects, Octavia needs to know the secret's project_id. Presently the barbican API doesn't list the secret's project_id when the meta-data is accessed. I've opened an RFE bug which would solve this problem for us, and allow Octavia (and other 3rd party services) to ensure that secrets are not shared across projects: https://bugs.launchpad.net/barbican/+bug/1629511

Revision history for this message
Michael Johnson (johnsom) wrote :

To my knowledge we can grant ACL access to just the container the user is asking us to use for the listener creation, so we would not be granting the LBaaS service account access to all of the user's secrets, only to the ones that user is asking us to use for the listener.
Is that a misunderstanding?

Changed in octavia:
status: New → Confirmed
no longer affects: neutron
Revision history for this message
Dave McCowan (dave-mccowan) wrote :

@michael: Yes, it works this way, except you need to grant access to the container and to each secret individually (three or four ACL post commands in total). The only complication is that the user needs to know the UUID of the listener to make the post commands.
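For reference, a rough sketch of those grants against the refs from this bug, using the raw Barbican ACL API (the service user ID and the private-key secret ref are placeholders; the ACL endpoint takes a PUT with a "read" ACL body, and exact semantics may differ by release):

  SERVICE_USER_ID="<uuid of the LBaaS service user>"
  PRIVATE_KEY_REF="http://192.168.200.43:9311/v1/secrets/<private-key-secret-uuid>"
  TOKEN=$(openstack token issue -f value -c id)   # token of the secret owner
  for ref in \
      http://192.168.200.43:9311/v1/containers/2817e144-9f11-4bed-a14e-0390edf89659 \
      http://192.168.200.43:9311/v1/secrets/469fe858-44cc-431d-9c7c-a6d7936ed56c \
      "$PRIVATE_KEY_REF"
  do
      # grant the service user read access on each ref owned by the tenant
      curl -s -X PUT "$ref/acl" \
           -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
           -d '{"read": {"users": ["'"$SERVICE_USER_ID"'"], "project-access": true}}'
  done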

Changed in barbican:
assignee: nobody → Douglas Mendizábal (dougmendizabal)
Revision history for this message
Dave McCowan (dave-mccowan) wrote :

Cascading ACL permissions would make this easier to use.

Changed in barbican:
status: New → Triaged
importance: Undecided → Wishlist
summary: - LBaaS TLS is not working with non-admin tenant
+ LBaaS TLS with non-admin tenant is user unfriendly
Revision history for this message
Ahmed Ezzat Douban (doubando) wrote :

@Dave, I have the same issue. How do I get the UUID of the listener? Is it the ID of the project user that was used to create the listener?

Thanks in advance.

Revision history for this message
Gregory Thiemonge (gthiemonge) wrote : auto-abandon-script

Abandoned after re-enabling the Octavia launchpad.

Changed in octavia:
status: Confirmed → Invalid
tags: added: auto-abandon
Revision history for this message
Grzegorz Grasza (xek) wrote :

Closing out bugs created before the migration to StoryBoard. Please re-open if you think it is still current.

Changed in barbican:
status: Triaged → Won't Fix