We still see a problem with memcache when using more than one keystone node/memcache server. There is no issue with the db.
Configuration:
[cache]
backend = keystone.cache.memcache_pool
enabled = true
memcache_servers = 10.x.y.1:11211, 10.x.y.2:11211, 10.x.y.3:11211
expiration_time=1200
When we enable caching, we see that when a token is revoked on one node, a subsequent check_token request on that node results in the tree being rebuilt on that node... but the other keystone nodes allow some tokens to be checked before a rebuild of the tree. This looks like a sync issue. Pasting some output from the logs on two nodes (these are custom logs added to check when an entry is revoked / the tree is rebuilt). On Node 2, one token is checked without the tree being rebuilt, but a subsequent request causes the tree to be rebuilt. Both nodes have the same time.
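The suspected behavior can be modeled outside keystone. This is a minimal sketch (not keystone code; the class and field names are hypothetical) of how a TTL-cached revocation tree lets a second node accept an already-revoked token until its cached copy expires:

```python
# Minimal model of the suspected race: each node keeps a cached copy of the
# revocation tree with a TTL. The node that processes the revocation drops
# its copy immediately; another node keeps serving its stale copy until the
# TTL elapses. All names here are illustrative, not keystone internals.

class NodeCache:
    """Per-node view of the cached revocation tree (hypothetical model)."""

    def __init__(self, shared, ttl):
        self.shared = shared      # shared memcache-like store of revoke events
        self.ttl = ttl            # seconds a fetched tree is considered fresh
        self.local_copy = None    # tree previously fetched by this node
        self.fetched_at = None

    def revoke(self, token_id):
        # Revoking records the event and drops this node's cached tree,
        # forcing a rebuild on the next check on *this* node only.
        self.shared['events'].add(token_id)
        self.local_copy = None

    def check_token(self, token_id, now):
        # A node only rebuilds once its cached copy is missing or expired.
        if self.local_copy is None or now - self.fetched_at >= self.ttl:
            self.local_copy = set(self.shared['events'])  # "rebuild tree"
            self.fetched_at = now
        return token_id not in self.local_copy            # True => accepted
```

With ttl=5, if node 1 revokes a token at t=0, node 2 (which fetched its tree just before) still accepts that token at t=1 and only rejects it after its copy expires, which matches the window seen in the logs below.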
Node 1:
2016-03-21 10:38:27.193 28085 INFO keystone.common.wsgi [req-2891e182-c25f-4c4a-8bc1-0d3fc3532c2f - - - - -] DELETE https://keystone.service.os:35357/v2.0/users/10f0105c591f4d92816165146d3e9664
2016-03-21 10:38:27.205 28086 WARNING keystone.contrib.revoke.core [req-9c15cfbf-831a-4be2-a7e9-cedc69e04ceb - - - - -] CUSTOM: in revoke
2016-03-21 10:38:27.208 28086 WARNING keystone.contrib.revoke.core [req-39b18eb1-291f-46a0-a807-8fdb3b7ba86d - - - - -] CUSTOM: in revoke
2016-03-21 10:38:27.223 28086 INFO keystone.common.kvs.core [req-9c15cfbf-831a-4be2-a7e9-cedc69e04ceb - - - - -] Using default dogpile sha1_mangle_key as KVS region token-driver key_mangler
Node 2:
2016-03-21 10:38:27.206 6478 WARNING keystone.contrib.revoke.core [req-b215ab41-b0d8-433e-a971-a34b45a7cf72 - - - - -] CUSTOM: in check token
2016-03-21 10:38:27.206 6478 WARNING keystone.contrib.revoke.core [req-b215ab41-b0d8-433e-a971-a34b45a7cf72 - - - - -] {'access_token_id': None, 'project_id': u'bdf5fb89fcaa4faa898d3157b0e6785b', 'user_id': u'de3cebf70675407c894e4574b963a1ac', 'roles': [u'9fe2ff9ee4384b1894a90878d3e92bab', u'1d57a2ba441c4bccb25a85ab0a11ce95'], 'audit_id': 'ovDmHMhgRtm70cmuS6ASIw', 'trustee_id': None, 'trustor_id': None, 'expires_at': datetime.datetime(2016, 3, 21, 11, 38, 26), 'consumer_id': None, 'assignment_domain_id': u'default', 'issued_at': datetime.datetime(2016, 3, 21, 10, 38, 27), 'identity_domain_id': u'default', 'audit_chain_id': 'ovDmHMhgRtm70cmuS6ASIw', 'trust_id': None}
2016-03-21 10:38:27.210 6478 INFO keystone.common.wsgi [req-b215ab41-b0d8-433e-a971-a34b45a7cf72 - - - - -] DELETE https://keystone.service.os:35357/v2.0/users/2ee5ceda37ba42e5b540fece70203451
2016-03-21 10:38:27.237 6479 WARNING keystone.contrib.revoke.core [req-18928f40-1653-4b24-80e0-bea065ce0dfa - - - - -] CUSTOM: in check token
2016-03-21 10:38:27.237 6479 WARNING keystone.contrib.revoke.core [req-18928f40-1653-4b24-80e0-bea065ce0dfa - - - - -] {'access_token_id': None, 'project_id': u'bdf5fb89fcaa4faa898d3157b0e6785b', 'user_id': u'de3cebf70675407c894e4574b963a1ac', 'roles': [u'9fe2ff9ee4384b1894a90878d3e92bab', u'1d57a2ba441c4bccb25a85ab0a11ce95'], 'audit_id': 'aCJFgymrQa6yjKv4CuIQgw', 'trustee_id': None, 'trustor_id': None, 'expires_at': datetime.datetime(2016, 3, 21, 11, 38, 26), 'consumer_id': None, 'assignment_domain_id': u'default', 'issued_at': datetime.datetime(2016, 3, 21, 10, 38, 27), 'identity_domain_id': u'default', 'audit_chain_id': 'aCJFgymrQa6yjKv4CuIQgw', 'trust_id': None}
2016-03-21 10:38:27.239 6479 WARNING keystone.contrib.revoke.backends.sql [req-18928f40-1653-4b24-80e0-bea065ce0dfa - - - - -] CUSTOM: Rebuilding tree, in list_events
2016-03-21 10:38:27.243 6479 WARNING keystone.contrib.revoke.core [req-18928f40-1653-4b24-80e0-bea065ce0dfa - - - - -] CUSTOM: rebuilding tree
I guess this is more a bug in memcache than in keystone, so we will be changing the status to Incomplete.
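In the meantime, a possible workaround (untested here, and worth verifying against the config reference for your keystone release) is to stop caching revocation events so every check hits the database directly, at the cost of extra DB load:

```ini
[revoke]
# Workaround sketch: disable the cache for revocation events only, so
# check_token always reads revoke events from the database. The [cache]
# section can stay enabled for other subsystems.
caching = false
```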