Comment 1 for bug 1696834

Chris Dent (cdent) wrote:

I spent a bit of time looking into this, and though I couldn't figure out why it was happening, I got closer. Elsewhere in the logs <http://logs.openstack.org/87/472287/1/gate/gate-nova-tox-functional-py35-ubuntu-xenial/79cb96d/console.html#_2017-06-08_17_42_12_498469> we get things like this:

2017-06-08 17:42:12.498469 | b'2017-06-08 17:38:27,167 ERROR [nova.scheduler.client.report] Failed to retrieve filtered list of resource providers from placement API for filters {\'resources\': \'DISK_GB:20,MEMORY_MB:2048,VCPU:1\'}. Got 300: {"choices": [{"id": "v2.0", "status": "SUPPORTED", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.compute+json;version=2"}], "links": [{"rel": "self", "href": "http://127.0.0.1:41392/v2/resource_providers"}]}, {"id": "v2.1", "status": "CURRENT", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1"}], "links": [{"rel": "self", "href": "http://127.0.0.1:41392/v2.1/resource_providers"}]}]}.'

That's the output of the Versions.multi method in nova/api/openstack/compute/versions.py, which a request to the placement service should never see. So somehow things have been confused in the fixtures such that requests that are supposed to be destined for the placement fixture are ending up on the compute fixture. You can see from elsewhere in the logs <http://logs.openstack.org/87/472287/1/gate/gate-nova-tox-functional-py35-ubuntu-xenial/79cb96d/console.html#_2017-06-08_17_42_12_488625> that a request gets a 200 and logs as nova.placement.wsgi.server, but is then immediately followed by a 300 that logs as nova.osapi_compute.wsgi.server.
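To make the symptom concrete, here's a minimal sketch (not nova code, and not the actual fixtures) of what misrouting between two WSGI services on loopback ports looks like: one app stands in for the placement fixture and answers /resource_providers with a 200, the other stands in for the compute API's unversioned root, which answers anything with "300 Multiple Choices" and a version document. A client holding the wrong port gets exactly the 300 seen in the log above.

```python
# Illustrative sketch only: two stand-in WSGI apps on loopback ports,
# showing how a request sent to the wrong fixture's port would see the
# compute version document (300) instead of the placement response.
import json
import threading
import urllib.error
import urllib.request
from wsgiref.simple_server import WSGIRequestHandler, make_server


class QuietHandler(WSGIRequestHandler):
    def log_message(self, *args):
        # Silence per-request logging so the sketch's output stays readable.
        pass


def placement_app(environ, start_response):
    # Stand-in for the placement fixture: /resource_providers returns 200.
    body = json.dumps({"resource_providers": []}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]


def compute_app(environ, start_response):
    # Stand-in for the compute API's version handler: any path it doesn't
    # recognize gets a "300 Multiple Choices" version document, mirroring
    # the choices payload in the log excerpt.
    body = json.dumps({"choices": [{"id": "v2.1", "status": "CURRENT"}]}).encode()
    start_response("300 Multiple Choices", [("Content-Type", "application/json")])
    return [body]


def serve(app):
    # Bind to an ephemeral port, run the server in a daemon thread, and
    # return the port so the "client" below can pick one.
    server = make_server("127.0.0.1", 0, app, handler_class=QuietHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_port


placement_port = serve(placement_app)
compute_port = serve(compute_app)


def get_status(port):
    # urllib raises HTTPError for any non-2xx status it can't redirect,
    # including 300, so catch it and report the code either way.
    url = "http://127.0.0.1:%d/resource_providers" % port
    try:
        return urllib.request.urlopen(url).status
    except urllib.error.HTTPError as exc:
        return exc.code


print(get_status(placement_port))  # right port: 200
print(get_status(compute_port))    # wrong port: 300, the symptom in the log
```

The point of the sketch is just that nothing in the request itself distinguishes the two services: whichever app answers that port wins, so a stale or swapped port mapping in the fixtures would produce exactly this 200-then-300 pattern.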

wat?

So I'm recording this for posterity in case anyone has some ideas. The WSGI service fixtures aren't something I'm too familiar with. If I had to guess, eventlet is doing something odd.