Libvirt doesn't report used memory correctly

Bug #1078348 reported by Huang Zhiteng
This bug affects 1 person
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: High
Assigned to: Rafi Khardalian
Milestone: none

Bug Description

Background on KVM's memory allocation behaviour: KVM only allocates host memory when the guest actually touches that part of its memory. This is *different* from memory ballooning, where the hypervisor is aware of what is going on and has control.
'Used' memory, from the nova scheduler's point of view, is the memory allocated to guests/instances, regardless of whether the guests/instances actually use it.

Because of this behaviour, the 'used' memory that libvirt+KVM currently reports (https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2038) is usually less than the memory allocated to all guests. The scheduler therefore schedules based on false information, which becomes a problem when guests become active and use much more of their memory.
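A minimal sketch (not the nova code itself) contrasting the two accountings described above. It assumes the libvirt Python bindings and a local qemu:///system connection; the /proc/meminfo parsing mirrors the common "total - (free + buffers + cached)" approach and is illustrative only.

import libvirt


def host_used_mb():
    """Memory the host kernel reports as in use, in MiB."""
    fields = {}
    with open('/proc/meminfo') as f:
        for line in f:
            name, value = line.split(':', 1)
            fields[name] = int(value.split()[0])  # values are in KiB
    avail = fields['MemFree'] + fields['Buffers'] + fields['Cached']
    return (fields['MemTotal'] - avail) // 1024


def allocated_to_guests_mb(conn):
    """Memory promised to all defined guests, in MiB (dom.info()[1] is KiB)."""
    return sum(dom.info()[1] // 1024 for dom in conn.listAllDomains())


conn = libvirt.openReadOnly('qemu:///system')
print('host reports used:   %d MiB' % host_used_mb())
print('allocated to guests: %d MiB' % allocated_to_guests_mb(conn))
# Because KVM backs guest pages only on first touch, the first number is
# usually much lower than the second until the guests exercise their memory.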

Changed in nova:
status: New → Confirmed
importance: Undecided → High
importance: High → Critical
milestone: none → grizzly-1
importance: Critical → High
Changed in nova:
assignee: nobody → Rafi Khardalian (rkhardalian)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/16036

Changed in nova:
status: Confirmed → In Progress
Revision history for this message
Brian Elliott (belliott) wrote :

It doesn't matter what value of memory_mb_used gets reported to the scheduler. The resource tracker ignores it and calculates the usage from the total amount of memory allocated to VMs on that host.

The scheduler will not rely on the value reported by libvirt.
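A minimal sketch of the accounting Brian describes, not the actual ResourceTracker code: usage is derived from the memory_mb of each instance claimed on the host, so the hypervisor-reported memory_mb_used is ignored. The reserved_mb parameter is a hypothetical stand-in for a reserved-host-memory setting.

def memory_usage_mb(instances, reserved_mb=512):
    # Sum flavor memory for every instance claimed on this host; the value
    # libvirt reports for host memory usage never enters this calculation.
    return reserved_mb + sum(inst['memory_mb'] for inst in instances)

instances = [{'memory_mb': 2048}, {'memory_mb': 4096}]
print(memory_usage_mb(instances))  # 6656, regardless of what libvirt reports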

Changed in nova:
status: In Progress → Invalid
Thierry Carrez (ttx)
Changed in nova:
milestone: grizzly-1 → none