Comment 24 for bug 1503312

Anithra P Janakiraman (janithra) wrote :

We still see a problem with the memcache backend when using more than one keystone node/memcache server. There is no such issue with the db backend.
Configuration:
[cache]
backend = keystone.cache.memcache_pool
enabled = true
memcache_servers = 10.x.y.1:11211, 10.x.y.2:11211, 10.x.y.3:11211
expiration_time = 1200
When we enable caching, revoking a token on one node causes the next check_token request on that same node to rebuild the revocation tree there, but the other keystone nodes allow some tokens to be checked before their trees are rebuilt. This looks like a synchronization issue. Below is some output from the logs on two nodes (these are custom log statements we added to record when an entry is revoked and when the tree is rebuilt). On Node 2, one token is checked without the tree being rebuilt, but a subsequent request causes the tree to be rebuilt. Both nodes' clocks are in sync.
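The behaviour we observe is consistent with each node answering check_token from its own cached copy of the revocation events, and only rebuilding its tree once that copy is refreshed from the shared backend. A minimal sketch of that failure mode (all names hypothetical; this is a toy model, not keystone code):

```python
# Toy model of the suspected race: each "node" keeps a local snapshot
# ("tree") of a shared revocation store and only re-fetches it after a
# refresh interval. The revoking node sees its own write immediately;
# other nodes accept already-revoked tokens until they refresh.

class Node:
    def __init__(self, shared_store, refresh_after):
        self.store = shared_store            # stands in for the shared memcache backend
        self.refresh_after = refresh_after   # seconds a node trusts its local snapshot
        self.cache = dict(shared_store)      # per-node snapshot (the "tree")
        self.fetched_at = 0.0
        self.rebuilds = 0

    def revoke(self, token_id, now):
        """Record a revocation; the revoking node sees its own write at once."""
        self.store[token_id] = now
        self.cache = dict(self.store)
        self.fetched_at = now
        self.rebuilds += 1

    def check_token(self, token_id, now):
        """Return True if the token is accepted (i.e. not known to be revoked)."""
        if now - self.fetched_at >= self.refresh_after:
            self.cache = dict(self.store)    # "rebuilding tree"
            self.fetched_at = now
            self.rebuilds += 1
        return token_id not in self.cache


shared = {}
node1 = Node(shared, refresh_after=0.5)
node2 = Node(shared, refresh_after=0.5)

node1.revoke("tok-A", now=0.0)
print(node2.check_token("tok-A", now=0.1))  # True: stale snapshot accepts a revoked token
print(node2.check_token("tok-A", now=0.7))  # False: snapshot refreshed, tree rebuilt
```

In this toy model, the window between the revocation and the other node's next refresh is exactly the window in which we see tokens pass check_token without a tree rebuild in the logs below.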

Node 1:

2016-03-21 10:38:27.193 28085 INFO keystone.common.wsgi [req-2891e182-c25f-4c4a-8bc1-0d3fc3532c2f - - - - -] DELETE https://keystone.service.os:35357/v2.0/users/10f0105c591f4d92816165146d3e9664
2016-03-21 10:38:27.205 28086 WARNING keystone.contrib.revoke.core [req-9c15cfbf-831a-4be2-a7e9-cedc69e04ceb - - - - -] CUSTOM: in revoke
2016-03-21 10:38:27.208 28086 WARNING keystone.contrib.revoke.core [req-39b18eb1-291f-46a0-a807-8fdb3b7ba86d - - - - -] CUSTOM: in revoke
2016-03-21 10:38:27.223 28086 INFO keystone.common.kvs.core [req-9c15cfbf-831a-4be2-a7e9-cedc69e04ceb - - - - -] Using default dogpile sha1_mangle_key as KVS region token-driver key_mangler

Node 2:

2016-03-21 10:38:27.206 6478 WARNING keystone.contrib.revoke.core [req-b215ab41-b0d8-433e-a971-a34b45a7cf72 - - - - -] CUSTOM : in check token
2016-03-21 10:38:27.206 6478 WARNING keystone.contrib.revoke.core [req-b215ab41-b0d8-433e-a971-a34b45a7cf72 - - - - -] {'access_token_id': None, 'project_id': u'bdf5fb89fcaa4faa898d3157b0e6785b', 'user_id': u'de3cebf70675407c894e4574b963a1ac', 'roles': [u'9fe2ff9ee4384b1894a90878d3e92bab', u'1d57a2ba441c4bccb25a85ab0a11ce95'], 'audit_id': 'ovDmHMhgRtm70cmuS6ASIw', 'trustee_id': None, 'trustor_id': None, 'expires_at': datetime.datetime(2016, 3, 21, 11, 38, 26), 'consumer_id': None, 'assignment_domain_id': u'default', 'issued_at': datetime.datetime(2016, 3, 21, 10, 38, 27), 'identity_domain_id': u'default', 'audit_chain_id': 'ovDmHMhgRtm70cmuS6ASIw', 'trust_id': None}
2016-03-21 10:38:27.210 6478 INFO keystone.common.wsgi [req-b215ab41-b0d8-433e-a971-a34b45a7cf72 - - - - -] DELETE https://keystone.service.os:35357/v2.0/users/2ee5ceda37ba42e5b540fece70203451
2016-03-21 10:38:27.237 6479 WARNING keystone.contrib.revoke.core [req-18928f40-1653-4b24-80e0-bea065ce0dfa - - - - -] CUSTOM: in check token
2016-03-21 10:38:27.237 6479 WARNING keystone.contrib.revoke.core [req-18928f40-1653-4b24-80e0-bea065ce0dfa - - - - -] {'access_token_id': None, 'project_id': u'bdf5fb89fcaa4faa898d3157b0e6785b', 'user_id': u'de3cebf70675407c894e4574b963a1ac', 'roles': [u'9fe2ff9ee4384b1894a90878d3e92bab', u'1d57a2ba441c4bccb25a85ab0a11ce95'], 'audit_id': 'aCJFgymrQa6yjKv4CuIQgw', 'trustee_id': None, 'trustor_id': None, 'expires_at': datetime.datetime(2016, 3, 21, 11, 38, 26), 'consumer_id': None, 'assignment_domain_id': u'default', 'issued_at': datetime.datetime(2016, 3, 21, 10, 38, 27), 'identity_domain_id': u'default', 'audit_chain_id': 'aCJFgymrQa6yjKv4CuIQgw', 'trust_id': None}
2016-03-21 10:38:27.239 6479 WARNING keystone.contrib.revoke.backends.sql [req-18928f40-1653-4b24-80e0-bea065ce0dfa - - - - -] CUSTOM: Rebuilding tree , in list_events
2016-03-21 10:38:27.243 6479 WARNING keystone.contrib.revoke.core [req-18928f40-1653-4b24-80e0-bea065ce0dfa - - - - -] CUSTOM: rebuilding tree

This looks more like a bug in memcache than in keystone, so we will change the status to Incomplete.