Comment 4 for bug 1790204

Matt Riedemann (mriedem) wrote :

Looking back over the history on this, it looks like it's my fault:

https://review.openstack.org/#/c/490085/

It also looks like we had some conversation about doing a max of the allocations rather than a sum:

https://review.openstack.org/#/c/490085/1/nova/tests/unit/scheduler/client/test_report.py@533

And decided "do the sum since it's simpler". Well, that's great. :)

There was also a note added to the code that dealt with this in pike:

https://review.openstack.org/#/c/490085/7/nova/scheduler/client/report.py@224

"""
# Note that we sum the allocations rather than take the max per
# resource class between the current and new allocations because
# the compute node/resource tracker is going to adjust for
# decrementing any old allocations as necessary, the scheduler
# shouldn't make assumptions about that.
"""

I believe that's referring to the fact that in Pike, if you still had at least one Ocata compute in the deployment, the resource tracker in the nova-compute service would delete the old allocations, or at least overwrite the allocations using the new flavor, so it sort of healed itself. However, once all of your computes are upgraded that is no longer true (the compute doesn't PUT allocations anymore since that's the job of the scheduler). Additionally, we have since dropped that old compute compatibility code.