> Yes, it seems we need refresh host_state.instances[1] and spec_obj.instance_group.members[2] in claim for next server.
> [1] https://review.openstack.org/#/c/571166/27/nova/scheduler/filters/affinity_filter.py@101
> [2] https://review.openstack.org/#/c/571166/27/nova/scheduler/filters/affinity_filter.py@103
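To make the stale-state problem concrete, the affinity-style check boils down to intersecting the group's member UUIDs with the instances the scheduler believes are on the candidate host. A minimal sketch (the function name and arguments are illustrative, not Nova's actual filter code):

```python
def members_on_host(host_instance_uuids, group_member_uuids):
    """Count server-group members the scheduler believes are on a host.

    host_instance_uuids: UUIDs from HostState.instances for the
    candidate host; group_member_uuids: UUIDs from
    spec_obj.instance_group.members. (Illustrative, not Nova's code.)
    """
    return len(set(host_instance_uuids) & set(group_member_uuids))


# If HostState.instances is stale, a member just claimed on this host
# is missing from host_instance_uuids and the overlap is undercounted.
print(members_on_host({'uuid-1'}, {'uuid-1', 'uuid-2'}))  # prints 1
```

This is why both pieces of state matter: the members list can be complete while the host-side instance list lags behind within a single multi-create request.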
HostState.instances should be updated when the FilterScheduler calls HostState.consume_from_request:
https://github.com/openstack/nova/blob/21a368e1a6f22aa576719ec463d13280b9178f10/nova/scheduler/filter_scheduler.py#L324
However, looking at this it doesn't look like it does:
https://github.com/openstack/nova/blob/21a368e1a6f22aa576719ec463d13280b9178f10/nova/scheduler/host_manager.py#L276
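To illustrate the gap: consume_from_request adjusts the host's resource usage for the claim, but nothing adds the claimed instance to HostState.instances. A stripped-down stand-in (assumed shape, not Nova's actual class) showing what such an update would look like:

```python
class HostState:
    """Stripped-down stand-in for nova.scheduler.host_manager.HostState."""

    def __init__(self):
        self.instances = {}  # instance uuid -> minimal instance record

    def consume_from_request(self, spec_obj):
        # Nova's real method decrements free RAM/CPU/disk for the claim.
        # The point under discussion is that it does NOT also record the
        # claimed instance, i.e. nothing equivalent to:
        self.instances[spec_obj.instance_uuid] = {'uuid': spec_obj.instance_uuid}


class FakeSpec:  # hypothetical minimal RequestSpec stand-in
    instance_uuid = 'uuid-1'


hs = HostState()
hs.consume_from_request(FakeSpec())
print('uuid-1' in hs.instances)  # prints True
```

Without something like that last assignment, the affinity filters re-run for the next server in the same request against an instance list that predates the claim.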
And the InstanceGroup.members should already have both instances in the members list by the time we get to the scheduler because they are added to the group in the API:
https://github.com/openstack/nova/blob/21a368e1a6f22aa576719ec463d13280b9178f10/nova/compute/api.py#L948
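In other words, by the time the scheduler sees the second server, the API side has already done the equivalent of the following (hypothetical helper, not the actual api.py code):

```python
def add_members_to_group(group_members, new_instance_uuids):
    """Mimic the API-side step: new instances join the group's member
    list before the boot request ever reaches the scheduler."""
    group_members.extend(new_instance_uuids)
    return group_members


members = add_members_to_group([], ['uuid-1'])        # first server boots
members = add_members_to_group(members, ['uuid-2'])   # second server boots
print(members)  # prints ['uuid-1', 'uuid-2']
```

So the members side of the comparison is already complete; it is only the host-side instance list that is stale.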
> But my confusion is: Why compute didn't check the 2nd server as a overhead server in _validate_instance_group_policy of compute manager[3].
> [3] https://review.openstack.org/#/c/571465/26/nova/compute/manager.py@1293
That's because it requires an up-call to the API DB to get the group information, and devstack disables that by default via this config option:
https://docs.openstack.org/nova/latest/configuration/config.html#workarounds.disable_group_policy_check_upcall
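For reference, this is what the devstack-generated setting looks like in nova.conf (a sketch of the config fragment; the option's default in Nova itself is False, and setting it True skips the compute manager's late group-policy re-check because that check needs the API-DB up-call):

```ini
[workarounds]
# Skip the late affinity/anti-affinity re-check in the compute manager,
# which requires an up-call to the API DB for the group information.
disable_group_policy_check_upcall = True
```

With the option set to True, the scheduler's filter pass is the only place the group policy is enforced, which is why the stale HostState.instances matters so much here.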