Comment 3 for bug 1781710

Matt Riedemann (mriedem) wrote:

> Yes, it seems we need to refresh host_state.instances [1] and spec_obj.instance_group.members [2]
> in the claim for the next server.
> [1] https://review.openstack.org/#/c/571166/27/nova/scheduler/filters/affinity_filter.py@101
> [2] https://review.openstack.org/#/c/571166/27/nova/scheduler/filters/affinity_filter.py@103

HostState.instances should be updated when the FilterScheduler calls HostState.consume_from_request:

https://github.com/openstack/nova/blob/21a368e1a6f22aa576719ec463d13280b9178f10/nova/scheduler/filter_scheduler.py#L324

However, looking at consume_from_request, it doesn't appear to do that:

https://github.com/openstack/nova/blob/21a368e1a6f22aa576719ec463d13280b9178f10/nova/scheduler/host_manager.py#L276
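
To make the gap concrete, here is a minimal, self-contained sketch (toy classes, not Nova's real ones; the attribute names follow the review comments above) of the bookkeeping being discussed: consume_from_request() recording the newly placed instance so that an anti-affinity filter run for the next server in the same multi-create request sees it in host_state.instances.

    class HostState(object):
        def __init__(self, host):
            self.host = host
            self.instances = {}  # instance uuid -> instance record

        def consume_from_request(self, spec_obj):
            # Real Nova also deducts RAM/disk/vCPU here; the step the
            # comments above say is missing is tracking the instance
            # itself on the host.
            self.instances[spec_obj['instance_uuid']] = spec_obj

    class ServerGroupAntiAffinityFilter(object):
        def host_passes(self, host_state, spec_obj):
            # Fail the host if any member of the server group is
            # already on it, including instances consumed earlier in
            # this same scheduling pass.
            members = set(spec_obj.get('group_members', ()))
            return not (members & set(host_state.instances))

    # Usage: schedule two anti-affinity servers back to back.
    host = HostState('compute1')
    first = {'instance_uuid': 'uuid-1',
             'group_members': ['uuid-1', 'uuid-2']}
    second = {'instance_uuid': 'uuid-2',
              'group_members': ['uuid-1', 'uuid-2']}
    assert ServerGroupAntiAffinityFilter().host_passes(host, first)
    host.consume_from_request(first)
    # Without the tracking in consume_from_request, this second check
    # would wrongly pass and both servers could land on compute1.
    assert not ServerGroupAntiAffinityFilter().host_passes(host, second)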

And InstanceGroup.members should already contain both instances by the time we get to the scheduler, because instances are added to the group in the API:

https://github.com/openstack/nova/blob/21a368e1a6f22aa576719ec463d13280b9178f10/nova/compute/api.py#L948
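
For context, the API-side step is roughly this shape; a hedged sketch, assuming the InstanceGroup.add_members() classmethod in nova/objects/instance_group.py (from memory of the tree around that commit; the exact signature may differ):

    from nova import objects

    # Hedged sketch: when a server is created in a group, the API
    # appends its uuid to the group's member list before the request
    # ever reaches the scheduler, so InstanceGroup.members is already
    # complete for a multi-create request.
    def _add_server_to_group(context, group_uuid, instance_uuid):
        objects.InstanceGroup.add_members(
            context, group_uuid, [instance_uuid])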

> But my confusion is: why didn't compute treat the 2nd server as an overhead server in
> _validate_instance_group_policy of the compute manager [3]?
> [3] https://review.openstack.org/#/c/571465/26/nova/compute/manager.py@1293

That's because it requires an up-call to the API DB to get the group information, and devstack's default configuration disables that check via this config option:

https://docs.openstack.org/nova/latest/configuration/config.html#workarounds.disable_group_policy_check_upcall
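
For reference, the option lives in the [workarounds] section of nova.conf; a sketch of the devstack-style setting (the option itself is real and documented at the link above; it defaults to False in Nova, and setting it to True disables the compute-side check):

    [workarounds]
    # Skip the late server-group policy check in the compute manager,
    # which would otherwise require an up-call to the API database.
    disable_group_policy_check_upcall = True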