Looking at the actual error in the logs makes me think that we need to increase the max_over_subscription_ratio configured in the gate, create a bigger backend, or reduce the speed at which we create volumes.
That, or we stop deducting the full volume size from free_capacity_gb in the Cinder scheduler for thin-provisioned volumes.
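To make the deduction I'm talking about concrete, here is a minimal, hypothetical sketch of that bookkeeping (this is not the actual Cinder scheduler code; the class and method names are made up for illustration, and the starting numbers are just similar to the ones in the logs below). The point is that the full requested size is subtracted from the cached free_capacity_gb for every volume placed, thin provisioned or not, so a burst of requests between two backend stats updates can push the counter below zero:

class PoolState:
    """Hypothetical stand-in for the scheduler's per-pool capacity cache."""

    def __init__(self, total_capacity_gb, free_capacity_gb,
                 allocated_capacity_gb, provisioned_capacity_gb):
        self.total_capacity_gb = total_capacity_gb
        self.free_capacity_gb = free_capacity_gb
        self.allocated_capacity_gb = allocated_capacity_gb
        self.provisioned_capacity_gb = provisioned_capacity_gb

    def consume(self, size_gb):
        # The full requested size is deducted, thin provisioned or not.
        self.allocated_capacity_gb += size_gb
        self.provisioned_capacity_gb += size_gb
        self.free_capacity_gb -= size_gb


pool = PoolState(total_capacity_gb=22.8, free_capacity_gb=22.76,
                 allocated_capacity_gb=7, provisioned_capacity_gb=1.0)

# Schedule a burst of 1 GB volumes before the backend reports fresh stats.
for _ in range(23):
    pool.consume(1)

print(pool.free_capacity_gb)  # now negative (roughly -0.24)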
On a side note, there seem to be some other issues in the Cinder scheduler:
- We can end up with negative values for free_capacity_gb
- We are logging inconsistent information:
Looking at req-8cd996a1-3662-44ab-9ca2-593aa84833b3 in one of the failures [1], the information logged at Mar 16 13:26:41.201266 suggests the backend has more than enough space for the volume:
{'provisioned_capacity_gb': 1.0,
 'volume_backend_name': 'lvmdriver-1',
 'location_info': 'LVMVolumeDriver:ubuntu-xenial-rax-ord-0003018294:stack-volumes-lvmdriver-1:thin:0',
 'free_capacity_gb': 22.76,
 'allocated_capacity_gb': 7,
 'total_volumes': 2,
 'pool_name': 'lvmdriver-1',
 'reserved_percentage': 0,
 'max_over_subscription_ratio': '20.0',
 'total_capacity_gb': 22.8}
But that is not the actual data used for filtering, as can be seen in the log entry for that same request at Mar 16 13:26:41.202743:
Backend: host 'ubuntu-xenial-rax-ord-0003018294@lvmdriver-1#lvmdriver-1'
free_capacity_gb: -0.23999999999999844
total_capacity_gb: 22.8
allocated_capacity_gb: 30
max_over_subscription_ratio: 20.0
reserved_percentage: 0
provisioned_capacity_gb: 24.0
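For context on why the negative free space matters, here is a simplified, hypothetical rendering of the kind of capacity check a scheduler filter applies (not the exact Cinder CapacityFilter code; the variable names are mine). Plugging in the values from the log entry above shows there is plenty of over-subscription headroom while the raw free-space figure is already negative:

# Values taken from the log entry at Mar 16 13:26:41.202743.
total_capacity_gb = 22.8
free_capacity_gb = -0.23999999999999844
provisioned_capacity_gb = 24.0
max_over_subscription_ratio = 20.0
reserved_percentage = 0
requested_size_gb = 1

reserved_gb = total_capacity_gb * reserved_percentage / 100.0

# Thin-provisioning view: compare what has been provisioned against the
# over-subscribed total capacity.
virtual_total_gb = total_capacity_gb * max_over_subscription_ratio
thin_headroom_gb = virtual_total_gb - reserved_gb - provisioned_capacity_gb
print(thin_headroom_gb >= requested_size_gb)  # True: 432 GB of headroom

# Raw free-space view: a negative free_capacity_gb can never fit anything.
print(free_capacity_gb - reserved_gb >= requested_size_gb)  # False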
[1] http://logs.openstack.org/67/550967/4/check/tempest-full-py3/c67e31b/controller/logs/screen-c-sch.txt