Comment 51 for bug 1446177

Stefan Bader (smb) wrote :

The problem now is that, because this went back and forth (and sideways), I at least am now quite confused. :(

1. numa info returned by nodeinfo is incorrect
  -> ok in 15.04; before that, incorrect for some Supermicro boards

2. numa information is not automatically used
  -> from my experiments and internet searches this never worked and does not work in the
      upcoming 15.10 either, as it would require numad to be available in the distro (at
      build time of libvirt and on the virt host). See the snippet below for what the
      automatic path would look like in the domain XML.
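To illustrate what that automatic path needs, this is roughly how a guest would have to ask for it (just a sketch based on the libvirt domain XML elements; the vcpu count is only an example):

    <vcpu placement='auto'>4</vcpu>
    <numatune>
      <memory mode='strict' placement='auto'/>
    </numatune>

With placement='auto' libvirt asks numad for an advisory nodeset at guest start, which is why this path depends on numad being present on the host.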

For OpenStack nova [1] some work was done for their Juno series. That possibly did not get into 14.04, but it also sounds independent of what standalone libvirt would or could do.

3. Manual numa tuning appears to be possible and working even in 14.04. But since my
    nodeinfo is ok, I cannot say whether incorrect nodeinfo really affects numa tuning.
    Some comments in the code sound like the info that matters is the capabilities output,
    which according to some comments here is ok even if nodeinfo is not.
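To compare the two on an affected box, something like this should be enough (just the commands I would use, nothing fancy):

    virsh nodeinfo
    virsh capabilities   # check the <topology><cells ...> section under <host>

If the cells section lists the right number of nodes with the right cpus and memory per node while nodeinfo does not, that would support the theory that only nodeinfo is off.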

So for memory: would numatune on a running but not yet tuned guest return a range covering the correct set of available nodes? That range can then be restricted (with --config) to a defined node, which only takes effect the next time the related qemu task is started (so probably needs a shutdown + start). The result can be verified with numastat.
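As a sketch of what I mean (guest name and node numbers are just placeholders):

    # current memory node binding of the running guest
    virsh numatune guest1

    # restrict guest memory to node 0 for the next start
    virsh numatune guest1 --nodeset 0 --config

    # after shutdown + start, check where the qemu process
    # actually has its memory allocated
    numastat -p $(pidof qemu-system-x86_64)

(The pidof line assumes a single x86 guest; with more guests one would have to pick the right pid.)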

For the VCPUs a cpuset can be added to the vcpu xml element. Unfortunately there does not seem to be a command that does so, which is inconvenient. One could use vcpupin, but that needs to be done for each vcpu individually. The result can be verified with vcpuinfo.
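For illustration (again guest name, cpu numbers and vcpu count are only examples):

    <!-- in the domain XML: all 4 vcpus restricted to host cpus 0-3 -->
    <vcpu placement='static' cpuset='0-3'>4</vcpu>

    # or per vcpu via virsh, persisted with --config
    virsh vcpupin guest1 0 0 --config
    virsh vcpupin guest1 1 1 --config
    # ... and so on for each vcpu

    # verify the resulting affinity
    virsh vcpuinfo guest1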

So if manual memory or cpu pinning is broken in a supported release, that would be important to fix (though it should get a different bug report to keep confusion low). For the nodeinfo part it depends on whether the wrong values have any influence on the actual tuning. If they do not, it still would be good to resolve, but it is less urgent.

I suspect memory pinning is the part that is more likely the problem, as vcpu pinning does not really care about nodes for its function. And the way memory pinning is done only looks to work when the qemu process gets started within a constrained memory cgroup. So modifying a running guest would have no effect, but that would be consistent across all releases, as changing it at runtime seems to depend on numad.
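A rough way to check that on a running guest (guest name is a placeholder and the cgroup layout may differ between releases):

    pid=$(pgrep -f 'qemu.*guest1')
    grep cpuset /proc/$pid/cgroup
    # then look at cpuset.mems in that cgroup directory under
    # /sys/fs/cgroup/cpuset/ to see which memory nodes are allowed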

[1] https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement