Load of pre-upgrade qemu modules needs to avoid noexec

Bug #1913421 reported by Christian Ehrhardt 
This bug affects 3 people
Affects        Status        Importance  Assigned to         Milestone
qemu (Ubuntu)  Fix Released  Undecided   Unassigned
Bionic         Fix Released  Undecided   Christian Ehrhardt
Focal          Fix Released  Undecided   Christian Ehrhardt
Groovy         Won't Fix     Undecided   Unassigned
Hirsute        Fix Released  Undecided   Christian Ehrhardt

Bug Description

[Impact]

 * An infrequent but annoying issue is that QEMU cannot hot-add
   capabilities if qemu has been upgraded since the instance was
   started. This is because qemu modules only work with exactly the
   same build.

 * The problem is that the path everyone (upstream+security) agreed
   to put the files in is mounted noexec by default in Ubuntu, which
   prevents loading the .so files from there.

 * In new versions this is solved via a .mount unit, which is great for
   transparency and control (e.g. opting in/out of this). But for the SRU,
   after backporting the mount unit at first, it was decided that a rather
   simple "check and tmp-mount if needed" is more resilient, less complex
   (mount unit handling by systemd/dh* is vastly different across
   releases) and carries less regression risk for scenarios
   where the admin has already made the path non-noexec (a rough sketch
   of this follows below).
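
 * For illustration only, such a check-and-mount step could look roughly like
   the following sketch (this is not the exact shipped maintainer-script code;
   the /run/qemu/exec probe file and the mount options mirror the test output
   further down in this description):

   mkdir -p /run/qemu
   printf '#!/bin/sh\nexit 0\n' > /run/qemu/exec
   chmod 0755 /run/qemu/exec
   if ! /run/qemu/exec 2>/dev/null; then
       # exec is blocked; only add our own tmpfs if /run/qemu is not
       # already a mount point (avoids stacking mounts on re-runs)
       if ! findmnt /run/qemu >/dev/null 2>&1; then
           mount -t tmpfs -o exec,nosuid,nodev,mode=755 none /run/qemu || true
       fi
   fi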

[Test Case]

 I:
 * $ apt install uvtool-libvirt
   $ uvt-simplestreams-libvirt --verbose sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=bionic
   $ uvt-kvm create --password ubuntu lateload arch=amd64 release=bionic label=daily

cat > curldisk.xml << EOF
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol="http" name="ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso">
            <host name="archive.ubuntu.com" port="80"/>
    </source>
    <target dev='vdc' bus='virtio'/>
    <readonly/>
  </disk>
EOF

# Here up- or downgrade the installed packages; even a minor
# version bump or a rebuild of the same version will do.
# Alternatively (and easier), you can run
  $ sudo apt install --reinstall qemu-block-extra

Next check that the modules appeared (action of the maintainer scripts)
in the /var/run/qemu/<version> directory, and maybe also check which mount point (and options) is backing it.
  $ find /var/run/qemu/
  $ findmnt /var/run/qemu/

# And then rm/mv the original .so files of qemu-block-extra
sudo mv /usr/lib/x86_64-linux-gnu/qemu/block-curl.so /root/block-curl.so.notherightplace

# Trying to load a .so now would fail after an upgrade, as the old qemu can't load modules with a different build id

Without the fix this will now fail in some way, e.g. on Focal with:

$ virsh attach-device lateload curldisk.xml
Reported issue happens on attach:
root@b:~# virsh attach-device lateload cdrom-curl.xml
error: Failed to attach device from curldisk.xml
error: internal error: unable to execute QEMU command 'blockdev-add': Unknown driver 'http'

That attach should work on >Focal, and one can also check the files mapped into the process; we should see the /var/run/.. path being used now.
  $ sudo cat /proc/$(pidof qemu-system-x86_64)/maps | grep curl

The original file path:
7f619941b000-7f619941c000 rw-p 00005000 fc:01 258107 /usr/lib/x86_64-linux-gnu/qemu/block-curl.so

But since we moved that file away before it was loaded, it should point to /run/qemu/... this time.

 II:
 * As this had issues in the first iteration of the fix, it is worth a try

This sub-test exists only in Focal and Hirsute; it was not present in Bionic.
This should take preference over the other dirs (the usual load path as well as the fallback path). Therefore we do NOT remove the usual paths to check if it works; we keep them and instead check which binary got loaded.
TL;DR Copy the .so to another place and load it from there:
 $ sudo cp /usr/lib/x86_64-linux-gnu/qemu/block-curl.so /tmp/
 $ QEMU_MODULE_DIR="/tmp/" qemu-system-x86_64 -nographic -cdrom https://cdimage.ubuntu.com/ubuntu-server/daily-live/current/impish-live-server-amd64.iso
 # Then in other console check if it loaded that
 $ sudo cat /proc/$(pidof qemu-system-x86_64)/maps | grep curl | grep r-xp
7f3ef7dc1000-7f3ef7e23000 r-xp 0000c000 fc:01 5481 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.6.0
7f3efa729000-7f3efa72c000 r-xp 00002000 fc:01 1086 /tmp/block-curl.so

We see the qemu block lib from the wanted path and other libs from the system as usual.

III:
Remount /run with exec; we want to see that it does NOT create a new mountpoint in this case:

$ sudo mount -o remount,exec /run
$ findmnt -T /var/run
TARGET SOURCE FSTYPE OPTIONS
/run tmpfs tmpfs rw,nosuid,nodev,relatime,size=202588k,mode=755,inode64
Then upgrade and recheck
$ find /var/run/qemu/; findmnt -T /var/run/qemu
/var/run/qemu/
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-ssh.so
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-rbd.so
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-iscsi.so
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-curl.so
/var/run/qemu/README
/var/run/qemu/exec
TARGET SOURCE FSTYPE OPTIONS
/run tmpfs tmpfs rw,nosuid,nodev,relatime,size=202588k,mode=755,inode64

The important bit is that in this case it is still /run and not /run/qemu backing the path.

[Regression Potential]

Through extensive discussion we tried to find the approach with the least regression
risk, but the most likely remaining regression would be where administrators have
already modified/prepared /run/qemu themselves, which might now collide.

[Other Info]

In Focal there were a few more (effectively no-op) mistakes which are cleaned up by this as well. It saved gui modules (not present in Bionic, not wrong in Hirsute) that cannot be late-loaded, so there is no point in saving them. Furthermore it had (due to a bad patch match) enabled the feature on the qemu-system-x86-xen builds, which have no use case for this.

---

This is a continuation of bug 1847361.

Since that is in Ubuntu and Debian:
- we correctly save the modules to those paths in /var/run/qemu
- qemu tries to load from that path as a fallback
- that works fine in containers running qemu/kvm

But there is an issue on non-container systems as /run usually is like this:

  tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3274920k,mode=755)

The important bit here is the "noexec" which is intentional (for security reasons), but prevents the loading of shared objects from that path.

The path is good for many reasons (it is auto-cleaned, upstream and the distros agreed on this one path, ...). Moving it to another place quite likely would bring its own unpredictable mount options.

In a discussion between Victor (thanks for all the pushing and input on this) and Marc (security POV) we have come to a solution that will make just the subpath that is owned by qemu not have noexec set.

This bug shall track preparing this fix for Debian / Ubuntu and the later SRU considerations on the same.

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in qemu (Ubuntu Bionic):
status: New → Confirmed
Changed in qemu (Ubuntu Focal):
status: New → Confirmed
Changed in qemu (Ubuntu Groovy):
status: New → Confirmed
Changed in qemu (Ubuntu):
status: New → Confirmed
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :
Changed in qemu (Ubuntu):
status: Confirmed → In Progress
tags: added: server-next
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Build in PPA complete, testing ...

$ ll /var/run/qemu/
ls: cannot access '/var/run/qemu/': No such file or directory
$ sudo apt install --reinstall qemu-block-extra
$ ls -laFd /var/run/qemu; ls -laF /var/run/qemu; ls -laF /var/run/qemu/*
drwxr-xr-x 3 root root 60 Jan 27 20:02 /var/run/qemu/
total 0
drwxr-xr-x 3 root root 60 Jan 27 20:02 ./
drwxr-xr-x 32 root root 960 Jan 27 20:02 ../
drwxr-xr-x 2 root root 120 Jan 27 20:02 Debian_1_5.2+dfsg-3ubuntu1/
total 164
drwxr-xr-x 2 root root 120 Jan 27 20:02 ./
drwxr-xr-x 3 root root 60 Jan 27 20:02 ../
-rw-r--r-- 1 root root 38632 Jan 5 11:43 block-curl.so
-rw-r--r-- 1 root root 45160 Jan 5 11:43 block-iscsi.so
-rw-r--r-- 1 root root 35912 Jan 5 11:43 block-rbd.so
-rw-r--r-- 1 root root 40136 Jan 5 11:43 block-ssh.so

But noexec:
mount | grep run
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=203112k,mode=755)

On install the unit is enabled (the postinst has the usual dh_installsystemd snippet), but sadly it stays disabled. Even after a reboot, despite the enabled config, it stayed disabled.

To be clear once started (manually for now) it works fine
$ sudo systemctl start run-qemu.mount
$ systemctl status run-qemu.mount
● run-qemu.mount - Allow noexec to for late qemu module load after upgrades
     Loaded: loaded (/lib/systemd/system/run-qemu.mount; disabled; vendor preset: enabled)
     Active: active (mounted) since Wed 2021-01-27 20:09:45 UTC; 1s ago
      Where: /run/qemu
       What: tmpfs
      Tasks: 0 (limit: 2338)
     Memory: 24.0K
     CGroup: /system.slice/run-qemu.mount

Jan 27 20:09:45 h-qemu-modules systemd[1]: Mounting Allow noexec to for late qemu module load after upgrades...
Jan 27 20:09:45 h-qemu-modules systemd[1]: Mounted Allow noexec to for late qemu module load after upgrades.

$ mount | grep run
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=203112k,mode=755)
tmpfs on /run/qemu type tmpfs (rw,nosuid,nodev,relatime,mode=755)

And a reinstall now places files in there as it did before:

$ ls -laFd /var/run/qemu; ls -laF /var/run/qemu; ls -laF /var/run/qemu/*
drwxr-xr-x 3 root root 60 Jan 27 20:11 /var/run/qemu/
total 0
drwxr-xr-x 3 root root 60 Jan 27 20:11 ./
drwxr-xr-x 30 root root 880 Jan 27 20:09 ../
drwxr-xr-x 2 root root 120 Jan 27 20:11 Debian_1_5.2+dfsg-3ubuntu2~ppa2/
total 164
drwxr-xr-x 2 root root 120 Jan 27 20:11 ./
drwxr-xr-x 3 root root 60 Jan 27 20:11 ../
-rw-r--r-- 1 root root 38632 Jan 27 12:45 block-curl.so
-rw-r--r-- 1 root root 45160 Jan 27 12:45 block-iscsi.so
-rw-r--r-- 1 root root 35912 Jan 27 12:45 block-rbd.so
-rw-r--r-- 1 root root 40136 Jan 27 12:45 block-ssh.so

Seems I'm cursed with dh_installsystemd magic recently; maybe the .mount behaves differently in dh*. In any case the postinst really has no start section for it, so it can't work yet.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

open-vm-tools-desktop
dh_installsystemd -popen-vm-tools-desktop --restart-after-upgrade --no-stop-on-upgrade run-vmblock

=> Maintscripts
# Automatically added by dh_installsystemd/13.3.1ubuntu1
if [ "$1" = "configure" ] || [ "$1" = "abort-upgrade" ] || [ "$1" = "abort-deconfigure" ] || [ "$1" = "abort-remove" ] ; then
        # This will only remove masks created by d-s-h on package removal.
        deb-systemd-helper unmask 'run-vmblock\x2dfuse.mount' >/dev/null || true

        # was-enabled defaults to true, so new installations run enable.
        if deb-systemd-helper --quiet was-enabled 'run-vmblock\x2dfuse.mount'; then
                # Enables the unit on first installation, creates new
                # symlinks on upgrades if the unit file has changed.
                deb-systemd-helper enable 'run-vmblock\x2dfuse.mount' >/dev/null || true
        else
                # Update the statefile to add new symlinks (if any), which need to be
                # cleaned up on purge. Also remove old symlinks.
                deb-systemd-helper update-state 'run-vmblock\x2dfuse.mount' >/dev/null || true
        fi
fi
# End automatically added section
# Automatically added by dh_installsystemd/13.3.1ubuntu1
if [ "$1" = "configure" ] || [ "$1" = "abort-upgrade" ] || [ "$1" = "abort-deconfigure" ] || [ "$1" = "abort-remove" ] ; then
        if [ -d /run/systemd/system ]; then
                systemctl --system daemon-reload >/dev/null || true
                if [ -n "$2" ]; then
                        _dh_action=restart
                else
                        _dh_action=start
                fi
                deb-systemd-invoke $_dh_action 'run-vmblock\x2dfuse.mount' >/dev/null || true
        fi
fi
# End automatically added section

Qemu as I tried it atm:

dh_installsystemd -a -pqemu-system-common --restart-after-upgrade --no-stop-on-upgrade --name=run-qemu.mount

Only gets rendered into
# Automatically added by dh_installsystemd/13.3.1ubuntu1
if [ "$1" = "configure" ] || [ "$1" = "abort-upgrade" ] || [ "$1" = "abort-deconfigure" ] || [ "$1" = "abort-remove" ] ; then
        if deb-systemd-helper debian-installed 'run-qemu.mount'; then
                # This will only remove masks created by d-s-h on package removal.
                deb-systemd-helper unmask 'run-qemu.mount' >/dev/null || true

                if deb-systemd-helper --quiet was-enabled 'run-qemu.mount'; then
                        # Create new symlinks, if any.
                        deb-systemd-helper enable 'run-qemu.mount' >/dev/null || true
                fi
        fi

        # Update the statefile to add new symlinks (if any), which need to be cleaned
        # up on purge. Also remove old symlinks.
        deb-systemd-helper update-state 'run-qemu.mount' >/dev/null || true
fi
# End automatically added section

The start section is missing, but why?

Both are compat level 12, the commands are the same ... hmmm

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

This is the same as in open-vm-tools, where it works :-/
--restart-after-upgrade --no-stop-on-upgrade

But no matter if I use either of the following for dh_installsystemd:
"--restart-after-upgrade --no-stop-on-upgrade"
"--no-stop-on-upgrade"
""

I always end up with:
/var/lib/dpkg/info/qemu-system-common.postinst: if deb-systemd-helper debian-installed 'run-qemu.mount'; then
/var/lib/dpkg/info/qemu-system-common.postinst: deb-systemd-helper unmask 'run-qemu.mount' >/dev/null || true
/var/lib/dpkg/info/qemu-system-common.postinst: if deb-systemd-helper --quiet was-enabled 'run-qemu.mount'; then
/var/lib/dpkg/info/qemu-system-common.postinst: deb-systemd-helper enable 'run-qemu.mount' >/dev/null || true
/var/lib/dpkg/info/qemu-system-common.postinst: deb-systemd-helper update-state 'run-qemu.mount' >/dev/null || true
/var/lib/dpkg/info/qemu-system-common.postrm: deb-systemd-helper mask 'qemu-kvm.service' 'run-qemu.mount' >/dev/null || true
/var/lib/dpkg/info/qemu-system-common.postrm: deb-systemd-helper purge 'qemu-kvm.service' 'run-qemu.mount' >/dev/null || true
/var/lib/dpkg/info/qemu-system-common.postrm: deb-systemd-helper unmask 'qemu-kvm.service' 'run-qemu.mount' >/dev/null || true

And that is neither good after install (no start action)
nor after reboot as it stays disabled:
 Loaded: loaded (/lib/systemd/system/run-qemu.mount; disabled; vendor preset: enabled)

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Here how open-vm-tools looks like:
/var/lib/dpkg/info/open-vm-tools-desktop.postinst: deb-systemd-helper unmask 'run-vmblock\x2dfuse.mount' >/dev/null || true
/var/lib/dpkg/info/open-vm-tools-desktop.postinst: if deb-systemd-helper --quiet was-enabled 'run-vmblock\x2dfuse.mount'; then
/var/lib/dpkg/info/open-vm-tools-desktop.postinst: deb-systemd-helper enable 'run-vmblock\x2dfuse.mount' >/dev/null || true
/var/lib/dpkg/info/open-vm-tools-desktop.postinst: deb-systemd-helper update-state 'run-vmblock\x2dfuse.mount' >/dev/null || true
/var/lib/dpkg/info/open-vm-tools-desktop.postinst: deb-systemd-invoke $_dh_action 'run-vmblock\x2dfuse.mount' >/dev/null || true
/var/lib/dpkg/info/open-vm-tools-desktop.postrm: deb-systemd-helper mask 'run-vmblock\x2dfuse.mount' >/dev/null || true
/var/lib/dpkg/info/open-vm-tools-desktop.postrm: deb-systemd-helper purge 'run-vmblock\x2dfuse.mount' >/dev/null || true
/var/lib/dpkg/info/open-vm-tools-desktop.postrm: deb-systemd-helper unmask 'run-vmblock\x2dfuse.mount' >/dev/null || true
/var/lib/dpkg/info/open-vm-tools-desktop.prerm: deb-systemd-invoke stop run-vmblock\\x2dfuse.mount >/dev/null || true
/var/lib/dpkg/info/open-vm-tools-desktop.prerm: deb-systemd-invoke stop 'run-vmblock\x2dfuse.mount' >/dev/null || true

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

     Loaded: loaded (/lib/systemd/system/run-vmblock\x2dfuse.mount; enabled; vendor preset: enabled)

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

LOL - I had two major breakthroughs on this:
1. I found the right way in d/rules to get the .mount unit started
2. I had a great discussion about "the other POV" on this [1] and I must say that I agree.
   As much as this can be a comfort function it can also be
   a) less reasons to finally restart into upgraded code
   b) leave security vulnerable code around

For that I think we really want to make this available, but also NOT enabled by default.
As an opt-in that makes sense.

Current plan - I'll prep changes along this line that do the following (a rough sketch of such a .mount unit follows below):
- install the .mount but NOT start/enable it (the admin has to opt in);
  the admin can also pick any other way he prefers to make /run/qemu not have noexec
- define a /etc/.. place to enable this feature, and otherwise have the postrm not even
  copy the old bits.
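
For reference, a minimal sketch of what such a run-qemu.mount unit could look like (What/Where follow the systemctl status output shown earlier in this bug; the description, options and [Install] target of the real unit may well differ):

  [Unit]
  Description=Mount /run/qemu without noexec for late qemu module loading

  [Mount]
  What=tmpfs
  Where=/run/qemu
  Type=tmpfs
  Options=nosuid,nodev,mode=755

  [Install]
  WantedBy=multi-user.target

With the opt-in plan above, an admin would then enable it explicitly, e.g. with "systemctl enable --now run-qemu.mount".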

[1]:

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

[1]: is the debian-devel discussion from this morning, which seems to have no public logs :-/

Not sure if I'm supposed to post them then

Revision history for this message
Dan Streetman (ddstreet) wrote :

This sounds very complicated. Are you *sure* using a simple tmpfiles.d approach wouldn't be better?

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi Dan,
the opt-in is something we want anyway - nothing of that is tied to the question of .mount vs tmpfiles

We had identified some drawbacks in tmpfiles that made us choose .mount, but maybe that changed given the recent discussions/insights. I'll re-read our old discussion on why we disqualified tmpfiles, and if it is - nowadays - the better option I'll reconsider this.

P.S. part of the complexity that is flowing by here is also due to the lack of existing examples for a .mount unit, so a bit of experimentation was needed. The final result isn't as complex.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

@Dan - from the discussion we had, the outcome was that tmpfiles can only create directories and set ownership. At the same time the path is fixed (per upstream agreement across distros) and, due to apparmor confinement, no symlink magic will help either. But the issue we have here is that we need /run/qemu to NOT be noexec, which /run in many cases is by default.
I haven't seen any comeback of a tmpfiles solution as those limitations were not overcome.
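
For context, a tmpfiles.d entry of the kind that was considered can only express something like the following, i.e. creating the directory with a mode and owner - it has no way to influence the mount options of /run itself (the file name is hypothetical):

  # /usr/lib/tmpfiles.d/qemu-module-upgrades.conf (hypothetical name)
  # Type  Path       Mode  UID   GID   Age  Argument
  d       /run/qemu  0755  root  root  -    -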

If you strip out all the trial and error I had on this bug then it is just:
1. Victor told me we need "exec", he is right
2. Discussion with more developers showed that this feature, although nice - should
   really not be default enabled (but we are fine to make it a comfortable opt-in).
3. I'm prepping a change that fulfills
   #1 with a .mount unit
   #2 with a config file and the .mount being default disabled

The suggested config file would be:
/etc/default/qemu-block-extra-upgrade-backup

The file names there usually match the package name, but this is a very special case, so just naming it qemu-block-extra seems wrong. Starting with the package name but adding a suffix is what I'd go for until review happens.

Revision history for this message
Dan Streetman (ddstreet) wrote :

> the opt-in we want anyway

major nak from me. this can't be opt-in. Can you explain the concern?

> from the discussion we had the outcome was that tmpfiles can only create directories
> and set ownership

that's not true

> apparmor confinement no symlink magic will help

I'm not thinking of symlinks, and any apparmor changes needed would be the same for /run storage or tmpfiles approach

Revision history for this message
Dan Streetman (ddstreet) wrote :

Another completely different alternative approach might be for us to see if upstream qemu is willing to simply open all the module files when qemu starts, and leave the fd open until exit.

That way even if the module files are removed, any still-running qemu process(es) would still have an open fd to them and (at least on UNIX systems) should be able to load them, since the kernel won't actually fully remove them until all open descriptors are closed.

I haven't tested that and I'm not sure if there are possible issues with mmaping removed files, but in theory it should work.
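
(As a shell illustration of that UNIX property only - this is not something qemu does today, and the copy is just to avoid touching the real module:)

  cp /usr/lib/x86_64-linux-gnu/qemu/block-curl.so /tmp/demo.so
  exec 3</tmp/demo.so   # keep a file descriptor to the module copy open
  rm /tmp/demo.so       # "remove" it, as a package upgrade would
  ls -l /proc/$$/fd/3   # still resolves to the (now deleted) file content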

Revision history for this message
Christian Ehrhardt  (paelzer) wrote : Re: [Bug 1913421] Re: Load of pre-upgrade qemu modules needs to avoid noexec

On Thu, Feb 11, 2021 at 2:45 PM Dan Streetman
<email address hidden> wrote:
>
> Another completely different alternative approach might be for us to see
> if upstream qemu is willing to simply open all the module files when
> qemu starts, and leave the fd open until exit.

We've had that discussion the first time it came up, but that wasn't
an approach anyone likes.
It has too many bad attributes:
- keeping files open that are removed is considered not-good
- bloating the active binary is considered very bad and by mapping all
that would happen
- There are more awkward cases, like starting guests, then installing
qemu-block-extra later and then hot-plugging
   Valid but not working with this approach.

> That way even if the module files are removed, any still-running qemu
> process(es) would still have an open fd to them and (at least on UNIX
> systems) should be able to load them, since the kernel won't actually
> fully remove them until all open descriptors are closed.
>
> I haven't tested that and I'm not sure if there are possible issues with
> mmaping removed files, but in theory it should work.

--
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd

Revision history for this message
Dan Streetman (ddstreet) wrote :

@paelzer when you sru for this bug, can you include the patches from bug 1887535 and bug 1887823 also? I have MR for them linked from the bugs

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Yes Dan, I'd include those as they'd ride a change that makes the upload qualify for an SRU.
But first we need to agree/complete it and also another set of CVE uploads needs to pass.

Revision history for this message
Markus Schade (lp-markusschade) wrote :

Are there any plans to go forward with any of the above mentioned solutions?
Currently I either have to migrate off all guests before a qemu upgrade or set up a tmpfs mount with exec on /run/qemu.

Happy to help testing.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi Markus,
yeah, the intention is to pick one once we agree with Debian on which one, to not go back & forth too often. That discussion was stuck for a while but recently got unblocked.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package qemu - 1:6.0+dfsg-1~ubuntu3

---------------
qemu (1:6.0+dfsg-1~ubuntu3) impish; urgency=medium

  * d/p/u/lp-1935617-target-ppc-Fix-load-endianness-for-lxvwsx-lxvdsx.patch:
    fix TCG emulation for ppc64 (LP: #1935617)

qemu (1:6.0+dfsg-1~ubuntu2) impish; urgency=medium

  * d/control: remove fuse2 trial-build (LP 1934510)

qemu (1:6.0+dfsg-1~ubuntu1) impish; urgency=medium

  * Merge with Debian experimental, Among many other things this fixes LP Bugs:
    (LP: #1907952) broken arrow keys in -display gtk on aarch64
    - qemu-kvm to systemd unit
      - d/qemu-kvm-init: script for QEMU KVM preparation modules, ksm,
        hugepages and architecture specifics
      - d/qemu-system-common.qemu-kvm.service: systemd unit to call
        qemu-kvm-init
      - d/qemu-system-common.install: install helper script
      - d/qemu-system-common.qemu-kvm.default: defaults for
        /etc/default/qemu-kvm
      - d/rules: call dh_installinit and dh_installsystemd for qemu-kvm
    - Distribution specific machine type
      (LP: 1304107 1621042 1776189 1761372 1761372 1776189)
      - d/p/ubuntu/define-ubuntu-machine-types.patch: define distro machine
        types containing release versioned machine attributes
      - d/qemu-system-x86.NEWS Info on fixed machine type defintions
        for host-phys-bits=true
      - Add an info about -hpb machine type in debian/qemu-system-x86.NEWS
      - ubuntu-q35 alias added to auto-select the most recent q35 ubuntu type
    - Enable nesting by default
      - d/p/ubuntu/enable-svm-by-default.patch: Enable nested svm by default
        in qemu64 on amd
        [ No more strictly needed, but required for backward compatibility ]
    - improved dependencies
      - Make qemu-system-common depend on qemu-block-extra
      - Make qemu-utils depend on qemu-block-extra
      - Let qemu-utils recommend sharutils
    - tolerate ipxe size change on migrations to >=18.04 (LP: 1713490)
      - d/p/ubuntu/pre-bionic-256k-ipxe-efi-roms.patch: old machine types
        reference 256k path
      - d/control-in: depend on ipxe-qemu-256k-compat-efi-roms to be able to
        handle incoming migrations from former releases.
    - d/control-in: Disable capstone disassembler library support (universe)
    - d/qemu-system-x86.README.Debian: add info about updated nesting changes
    - d/control*, d/rules: disable xen by default, but provide universe
      package qemu-system-x86-xen as alternative
      [includes compat links changes of 5.0-5ubuntu4]
    - Fix upgrade module handling (LP 1905377)
      --enable-module-upgrades for qemu-xen which doesn't exist in Debian
  * Dropped Changes [in 6.0]:
    - d/p/ubuntu/lp-1907789-build-no-pie-is-no-functional-liker-flag.patch: fix
      ld usage of -no-pie (LP 1907789)
    - d/p/u/lp-1916230-hw-s390x-fix-build-for-virtio-9p-ccw.patch: fix
      virtio-9p-ccw being missing (LP 1916230)
    - d/p/u/lp-1916705-disas-Fix-build-with-glib2.0-2.67.3.patch: Fix FTFBS due
      to glib2.0 >=2.67.3 (LP 1916705)
    - d/p/u/lp-1921754*: add EPYC-Rome-v2 as v1 missed IBRS and thereby fails
      on some HW/Guest combinations e.g. Windows 10 on Threadripper chips
  ...


Changed in qemu (Ubuntu):
status: In Progress → Fix Released
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Finally - now that we have settled on a way to resolve the noexec issue and have it in 21.10, we can consider how to SRU this: either as an opt-in or a default-on feature.
Both approaches have arguments that speak for them, but first we need to prep the changes, which can have various slight behavior differences due to the compat level being different.

Next steps:
1. backports
2. test behavior per release
3. open discussion with the SRU team what the default should be
  3b. If the discussion ends with default off, do we even need to SRU it then?
4. Testing by more than just me based on the PPA
5. Trigger the real SRU

Warning: Sprint, +1 Maintenance and some PTO ahead - this might take a bit still.

Changed in qemu (Ubuntu Hirsute):
status: New → Triaged
Changed in qemu (Ubuntu Groovy):
status: Confirmed → Triaged
Changed in qemu (Ubuntu Focal):
status: Confirmed → Triaged
Changed in qemu (Ubuntu Bionic):
status: Confirmed → Triaged
Revision history for this message
Dan Streetman (ddstreet) wrote :

> Either as an opt-in or default-on feature.
> Both approaches have arguments that speak for them

sorry, could you actually list those arguments? Just for clarification

> 3. open discussion with the SRU team what the default should be
> 3b. If the discussion ends with default off, do we even need to SRU it then?

yes, without question.

bootstack as well as virtually all people using SRU releases to manage a cloud will need to enable this, if you choose to make it default off. Not having the choice at all doesn't seem like it actually fixes the problem for anyone on SRU releases. Of course as you know my opinion is cloud admins shouldn't need to know about needing to 'enable' this fix.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

>> Either as an opt-in or default-on feature.
>> Both approaches have arguments that speak for them

> sorry, could you actually list those arguments? Just for clarification

To be clear I'm on the "let us enable it" side of things.
But I see the two SRU questions:
a) but that changes the behavior of the system, with another mount point being around by default
b) but what if people have come up with their own way of handling the noexec themselves - won't it collide?

> 3. open discussion with the SRU team what the default should be
> 3b. If the discussion ends with default off, do we even need to SRU it then?

> yes, without question.

> bootstack as well as virtually all people using SRU releases to manage a cloud will need to
> enable this, if you choose to make it default off. Not having the choice at all doesn't seem
> like it actually fixes the problem for anyone on SRU releases. Of course as you know my opinion
> is cloud admins shouldn't need to know about needing to 'enable' this fix.

Thanks, so you think that even if we can't convince the SRU team of default-on, having the mount unit would still be better than having nothing and letting everyone deal with it on their own - right?

Revision history for this message
Brian Murray (brian-murray) wrote :

The Groovy Gorilla has reached end of life, so this bug will not be fixed for that release

Changed in qemu (Ubuntu Groovy):
status: Triaged → Won't Fix
Revision history for this message
Dan Streetman (ddstreet) wrote :

> Thanks, so you think even if we can't convince the SRU team of default-on then having the mount unit would be better than having nothing and letting everyone deal with it on their own then - right?

i think we really need to figure out some way to actually ensure this is fixed for older releases without manual intervention, even if that's different from how it's fixed in debian and/or the devel release. This is a real problem for real users, and we shouldn't expect them to hit the problem and then do their own research to figure out how to work around it on their own.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

With this having gone into Impish, and that now being in feature freeze without any panic bugs on this for a while, I've started to backport the mount unit and established a test plan, which is roughly the following.

(most reasonable in monospace)

                          Bionic Focal Hirsute Impish
installs fine
status after install
removes fine
status after remove
reinstalls fine
status after reinstall
upgrades fine
status after upgrade
purges fine
status after purge
---
Load old modules

Let me know if there is more to pre-check in your opinion.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Oh and yes, thinking about it some more while backporting, I think having the mount unit enabled by default (and therefore the post-upgrade module loads working by default) is the right way to go. So ddstreet and I agree on this.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Iteration I:
- Bionic's and Focal's systemd do not know "ReadWriteOnly".
  That is not super-important to have, dropped from those backports
  (done)
- default mount options slightly differ >=Hirsute vs <=Focal, but nothing
  that affects the use-case
  (no action needed)
- Removing qemu-block-extra >=Hirsute has dependency issues due to -gui
  I verified that is not due to anything we change hereby
  (no action needed)
- F/B stopped the mount unit (but we want to keep it for qemu's that are around)
  (needs debugging)
- F/B had no modules saved on --reinstall
  (needs debugging)

Added test step after each action:
- also check the set of saved modules present after every action

TODO before iteration II:
 - F/B stopped the mount unit
 - F/B had no modules saved on --reinstall

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Updates already added for a new iteration:
- Fixed the version comparisons in the postinst (worked, but was wrong and could cause issues in later cross upgrades)
- Fixed the missing negation in the check if the target is no-exec (prevented saving of modules)

Info:
- the F/B postrm does purge service on purge (ok)
- the F/B postrm does mask the service on remove (ok - no conflict as it will only save the state)

Current remaning issue is the stopping on remove.
That is due to the prerm - generated in Bionic by the dh_*_start/_enable handling of dh_systemd_start/11.1.6ubuntu2 and in Focal by dh_installsystemd/12.10ubuntu1.
They stop the mount unit in prerm like:

# Automatically added by dh_systemd_start/11.1.6ubuntu2
if [ -d /run/systemd/system ] && [ "$1" = remove ]; then
        deb-systemd-invoke stop 'run-qemu.mount' >/dev/null || true
fi
# End automatically added section

The old dh tools neither detected a mount unit as such (and handled it differently from other services), nor did they provide a means to not stop on remove. I'm afraid I need to convert the generated snippets into explicit sections in the postinst files.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

The stop issue should be Fixed as well.
New iteration building in PPA.

BTW the PPA is at:
  https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4653

Iteration #2 findings that need to be addressed:
- Bionic doesn't enable/start the mount unit (I'll do that manually for further tests)
- Bionic has a leading and trailing _ in the generated names "/run/qemu/_Debian_1_2.11+dfsg-1ubuntu7.38~bionicppa2_"
- Focal modules do not end up in /run/qemu
  Debugging shows that mkdir/cp are executed, but have no effect?!

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Fixes for iteration #3:
- the leading and trailing _ are Bionic-specific, as 2.11 stored the version differently (added in v.25); since we will test whether it actually loads old modules, we will have an extra check that it works that way once all other things are resolved.
- Fixed Bionic start (missed postinst; Focal finds and acts on the .mount file in debian/* so it doesn't need the same)
- Focal still has the "deb-systemd-invoke stop" section even without us calling dh_*systemd
  in d/rules. It automatically picks up the unit and decides that it is good to act on it :-/
  Now trying --no-start
- Focal did actually copy the content as planned, but afterwards prerm stopped the mount
  unit and then a new start had new empty content. Hopefully the --no-enable --no-start will
  make dh leave it alone for our handling as intended

description: updated
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

I copied over the test case from the last SRU, but for Bionic that actually needs to be modified as it fails even with the original module being around.

description: updated
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

From debugging it seems Bionic didn't have CONFIG_MODULE_UPGRADES set correctly.
(gdb) p dirs
$1 = {0x0, 0x0, 0x0}
That is the latter case of:
  168 #ifdef CONFIG_MODULE_UPGRADES
  169 char *version_dir;
  170 char *dirs[4];
  171 #else
  172 char *dirs[3];
  173 #endif

It turned out that on rebase the change "d/rules: --enable-module-upgrades not needed for qemu-system-x86-xen" silently auto-applied to disable the option on the main qemu build (the -xen pkg didn't exist back then).
=> Fixed

Building for the next test iteration now ...

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Bionic is now properly enabled - loading works

With original file available:
7fd0bd2d1000-7fd0bd2d5000 r-xp 00000000 fc:01 258155 /usr/lib/x86_64-linux-gnu/qemu/block-curl.so

Without it available:
7f5ba74d1000-7f5ba74d5000 r-xp 00000000 00:32 8 /run/qemu/_Debian_1_2.11+dfsg-1ubuntu7.38~bionicppa3_/block-curl.so

I think Bionic is ready to enter the full test set again.

Focal still isn't good - the mount unit still is stopped/restarted and I need to find where/why.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Focal's dh* tools seemed to be rather unimpressed by whatever we do for -name=run-qemu.mount :-/
- not calling dh* for it
- calling dh* with --no-start --no-enable
- calling dh* with --no-restart-on-upgrade
All achieve the same which is NOT what we want.

They all do:
- stop on purge (ok)
- mask on remove (ok)
- start on postinst (ok)
- some sections are doubled as the manual and generated ones match (ok but can be dropped)
- stop the service in prerm (makes this fail overall)

Gladly qemu-block-extra has no other maintscript actions in Focal, so I can make maintainer scripts without #DEBHELPER# to prevent the bad sections from being added.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

The remaining Focal issue of the unit being stopped in prerm should be resolved now.
The new version is building for the next round of tests.

description: updated
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

All three planned SRUs (and Impish to be sure) passed all my intended tests in regard to the maintscript behavior that I outlined above.
Then I ran the actual intended use case that is outlined as the test in the SRU template - works as well.

I'm opening merge proposals for those SRUs and would ask one of the team (as usual) and ddstreet (since he's involved) for an in-depth review, as I want to avoid having to re-fix this again and again.

MPs:
B => https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/407765
F => https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/407764
H => https://code.launchpad.net/~paelzer/ubuntu/+source/qemu/+git/qemu/+merge/407766

Changed in qemu (Ubuntu Bionic):
status: Triaged → In Progress
Changed in qemu (Ubuntu Focal):
status: Triaged → In Progress
Changed in qemu (Ubuntu Hirsute):
status: Triaged → In Progress
Revision history for this message
Dan Streetman (ddstreet) wrote :

Ok, so I know I should have complained (more) in the Debian bug, earlier. It's been merged already. This is just about backporting.

However, reading the merge requests just makes me...itchy.

First, let me recap what the point of all this is:
When qemu is upgraded from version X to version Y, the loadable block modules for version X need to be saved somewhere, in case an instance already running with version X later needs to load a block module (so it can attach a volume). So we need to save the version X block modules somewhere that existing instances can load from, which means it needs to be some fs that isn't noexec. Also, since we're guaranteed that all instances running version X will be gone after a reboot, we want the place we save version X block modules to be transient - it should disappear automatically after a reboot.

That's it. We don't need anything else.

One proposed method was, at time of uninstalling version X, to mkdir a new directory in the normal filesystem to store the version X block modules, and to add a generic tmpfiles.d config that would clean out this temporary-storage location on each boot. That was dismissed in the Debian bug due to the Debian qemu maintainer using a completely read-only root filesystem, meaning the version X modules wouldn't be cleaned up on boot.

The previous approach was to mkdir a new subdirectory under /run and put the version X modules there. The problem with that is /run is typically mounted with noexec, so qemu isn't able to load executable modules located under /run.

This approach now adds an entirely new systemd mount unit to mount /run/qemu without noexec, and then store version X modules there. This new mount unit must be managed in the maintainer scripts, including installing it, reloading systemd, enabling it, starting it, and leaving it mounted all the time, for each boot.

Personally, I find all this more than overcomplicated. Let's remember what we need to do - save a few files onto a filesystem that isn't mounted noexec, and that will be gone after a reboot. And that needs to happen *only* when the qemu package is removed or upgraded, not all the time. Even adding the additional directory to load modules from and the tmpfiles.d conf felt like it was overcomplicating things to me; having to add an entirely new mount unit and make sure it's properly running after install and also after boot...

I really don't want to *further* overcomplicate all this, but at the same time I really don't feel like i can give my +1 to the merge requests. Personally, I wouldn't have done it that way in Debian. If we don't want to 'mkdir -p /usr/lib/@ARCH@/qemu/VERSION_X/ ; cp /usr/lib/@ARCH@/qemu/block*.so /usr/lib/@ARCH@/qemu/VERSION_X/' in the postinst script (along with the tmpfiles.d conf to remove them at boot), because some users might leave their root filesystems read-only except during package upgrades, then fine, let's just 'mkdir -p /run/qemu/VERSION_X/ ; cp /usr/lib/@ARCH@/qemu/block*.so /run/qemu/VERSION_X/' instead (which avoids the need for any tmpfiles.d conf) and throw in a check for 'noexec' in the postinst and actually do a quick manual tmpfs mount without noexec at /run/qemu (or...


Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi Dan,

> Ok, so I know I should have complained (more) in the Debian bug, earlier.
> It's been merged already. This is just about backporting.
...
> Maybe I'm just making the backporting more difficult by complaining about it
> being more complex than should be needed for the relatively simple operation
> we need to do.

Please do not be concerned, we do this because we want to have a discussion,
right? I appreciate any feedback, even if I might be of a different opinion.
If I didn't, I could have uploaded things right away :-)

> First, let me recap what the point of all this is
> ...

I agree to the general problem statement, but many of the derived assumptions
are unfortunately not that easy :-/. Let me explain.

The solution we have right now in /run/qemu is not out of thin air:
- that path came up as best place in a security review
- and it was upstream discussed and agreed
- and after that agreement the apparmor isolation was adapted
  to allow that as the paths to do so are rather restrictive
- the mount unit now solves the issue of noexec and at
  the same time provides an opt-out mechanism

To help everyone getting more context of the past discussions, let me add a
few fragments of those:
 1. maintainer scripts should not add files in /usr/lib that are not listed
    as owned by the PKG (conffiles are fine and files a dpkg -L will list
    are fine, but others are discouraged)
 2. If we change the path to something else now we lose the worth of the
    upstream review; maybe another path has other drawbacks we just do not
    yet know
 3. any path other than the original one in /run/qemu/$ver would also need
    changes in libvirt and essentially introduce a versioned dependency. That
    is messy and to be avoided as well.

> ...
> quick manual tmpfs mount without noexec at /run/qemu
> ...

While I admit that a mount unit is complexity, the suggestion of a manual tmpfs
mount is OTOH rather opaque to the admin, and doing so is discouraged.

Then also SRU != new:

Finally, we have shipped Variant #1 (minus the now added extension to handle
the noexec) already. Therefore, in the SRU spirit, we must retain that behavior,
as an admin could e.g. have ensured himself that /run/qemu isn't noexec and now
rely on it. Not saving the modules there anymore would break such setups.
And if /run/qemu has to stay, we can as well just fix it via the mount unit.

I hope I was able to outline why the mount unit approach - despite its initial
appearance - is actually the less complex one for the current situation.
I know and can understand that you like the tmpfiles.d approach more, which is
why I haven't killed it in discussions, as it is a "fair contender".
The .mount approach does:
 - retain the module save path (for admins that use it already)
 - does not imply another SRU to libvirt for apparmor
 - sticks to the upstream agreed and security reviewed paths
 - does not violate putting files in /usr/lib not owned by .deb's

Revision history for this message
Dan Streetman (ddstreet) wrote :

> I knew and can understand that you like the tmpfiles.d approach more

to clarify, that isn't the approach i suggested in comment 41

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

>> I knew and can understand that you like the tmpfiles.d approach more

> to clarify, that isn't the approach i suggested in comment 41

Indeed, but I thought my full answer also covered why:
 "... throw in a check for 'noexec' in the postinst and actually do a quick manual tmpfs
 mount without noexec at /run/qemu (or some subdir) if needed ..."
also isn't an approach that seem applicable.

As I explained in the discussions I had it came up that it lacks the transparency a user usually expects.
- Why is this mount point there, but I can't find it in systemd where I find everything else?
- What about error propagation? The mount unit is an entity everyone knows how to handle,
  but in the prerm any errors will just be washed away on updates (we can't make them fatal
  as breaking updates isn't nice either)

I mean I admire the simplicity (especially since - as my backports show - mount unit handling in dh* tools differs from release to release), and if a 3rd or 4th party review turns out to tell me I'm the only one thinking "create tmpfs in prerm is bad/unwanted" then I'm not even against it and would be happy to rewrite the MPs.

I need to re-ping the SRU Team (for an SRU opinion pre-review before we hit -unapproved).

Revision history for this message
Markus Schade (lp-markusschade) wrote :

As this has bitten us more than once in the past, we had to find a solution that would work for us while this issue was discussed. Adding a mount unit without noexec for /run/qemu was the obvious and most straightforward solution. We have on average at least one qemu update between reboots, so having a reliable and trackable mount unit would be the preferred solution.
But I do acknowledge that the DH acrobatics are a bit ugly. :)

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Thank you for chiming in Markus - in cases like this it mostly is better to have more POVs. I'm waiting for a pre-review by the SRU team now as they will have the final decision in this anyway.

Revision history for this message
Robie Basak (racb) wrote :

I discussed this in realtime with Christian earlier, from an SRU review perspective.

My opinion on what we should do follows. I'd like to be clear though that this isn't necessarily a final decision. If there's a flaw in this plan, please point it out so that we can reconsider. And if there are improvements to be made or aspects we haven't considered, then I'd appreciate feedback explaining that too. But if nobody else objects, then if you implement my description of Option B below then I will accept it on behalf of the SRU team without asking you to change the approach taken again.

Common ground I think we're all generally happy with: 1. The general mechanism to use /run/qemu after a package upgrade to temporarily store executable stuff until the next reboot that qemu will look at to load when required. 2. [Maybe not Dan's preference though, but out of scope for this SRU review] The current state of Debian unstable and Impish: using /run/qemu with a shipped systemd mount unit to mount it without noexec.

What I think is in question is what we do for SRUs in Ubuntu. Option A: we add the mount units to the stable releases. An alternative is Option B: Dan's suggestion to just mount /run/qemu manually in the postinst on upgrade.

To expand on Option B, I suggest that it be done with "mkdir -p /run/qemu" and then a test to see if /run/qemu/ is exec (eg. write "#!/bin/sh\nexit 0\n" inside it and see if it will run and exit 0). If the test succeeds, then do nothing. Otherwise, mount /run/qemu -o exec -t tmpfs || true.

Option A has the disadvantage that it might collide with what users might have already done to work around the issue, such as adding their own mount unit, and this may require manual intervention from those users. However, if done this way the result will be clean, discoverable, debuggable using the usual systemd means that administrators are used to, and the same as what is in Debian and in the development release already.

Option B has the advantage that it should automatically fit in with any workarounds users might have implemented already with no intervention required. It's also really minimal, so carries very little regression risk - for example there's no concern about whether debhelper/maintscripts are doing the right thing wrt. upgrade failures, existing mount units, etc. With Option A I'm concerned that there's some corner case in maintainer script handling that might regress existing users. It's harder to ensure that this isn't the case compared to Option B.

Additionally, Option B could write to /run/qemu/README with a quick explanation and link to further details to mitigate the concern that users will not understand what is going on. I suggest keeping this text minimal since it's hard to change in an SRU, and put the details in the link destination (wiki or bug reference or whatever) so that it can be updated as needed.

For an SRU, my current opinion is to favour Option B, because that is most easily verifiable against edge cases. I favour testing for exec by actually trying to exec, because that should also reduce the possibility that the test is wrong somehow.

Option B does mean that the fix in the stable releases will no...


Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Thanks for summarizing the outcome of our discussion and "alternative B detail brainstorming". And also thanks for being our tie-breaker on this - I appreciate that and will go through revamping the MPs and retests.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Oh and as Robie expressed in the discussion, he likes the "test the fact" better than the "look for the config" approach for the SRUs.
So do not be puzzled by the former snippets of "findmnt --noheadings --target /run/qemu/ | grep -vq noexec" being replaced by the new "run from path" approach.

description: updated
Revision history for this message
Robie Basak (racb) wrote :

One thought. From a quick test, it looks like I can mount a directory tmpfs over and over again, building up the list of mounts. So if there's some other reason the script can't execute in /run/qemu, every time the postinst runs, we'd mount over it again. So maybe we should additionally not mount if /run/qemu is already a mount point, based on findmnt?

So I suggest:

Testing if we can execute ("test the fact"), AND checking that /run/qemu isn't already mounted, to stop us doing it multiple times, and only then manually adding the mount.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

I have implemented the SRU now as discussed.

Retest LGTM
1. install (no files, no mount)
2. upgrade/reinstall (modules saved, mount)
3. upgrade/reinstall (modules saved, mount - no double mount or errors)
4. moving files away still late loadable
5. on remove current modules stay behind, the rest stays
6. on purge all is gone

Behavior-wise this was mostly ok already; here are the items I found:

My tests identified a weakness that actually would have happened before as well but was somewhat masked. That is, if a guest is running that uses the fallback mechanism we get:
  umount: /run/qemu: target is busy.
  rm: cannot remove '/run/qemu': Device or resource busy
I think in those cases we just want it to stay silent; after a reboot it will be gone, and we do NOT want to kill the guests that are still up.

Furthermore, Bionic's umount does not yet have --quiet:
  umount: unrecognized option '--quiet'
Easy to drop.

And there is one imperfection left, which is the noexec test reporting this on upgrade:
  /var/lib/dpkg/info/qemu-block-extra:amd64.prerm: 29: /var/run/qemu/exec: Permission denied
I think that is just a matter of output redirection and I will add that.
But it postpones the refresh of the MR by another rebuild cycle.
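
For illustration, tolerant cleanup along these lines would address both points (a sketch only, not the exact maintainer-script code; stderr is redirected because Bionic's umount has no --quiet):

  if findmnt /run/qemu >/dev/null 2>&1; then
      # may report "target is busy" while guests still map old modules - ignore
      umount /run/qemu 2>/dev/null || true
  fi
  # likewise tolerate "Device or resource busy" and leave cleanup to the reboot
  rm -rf /run/qemu 2>/dev/null || true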

New builds uploaded to the PPA.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Retest
1. install (no files, no mount)
2. upgrade/reinstall (modules saved, mount)
3. upgrade/reinstall (modules saved, mount - no double mount or errors)
4. moving files away still late loadable into running guest
5. on remove current modules are removed, others might stay behind
6. purge while guests run - do not fail (mount up [busy], files ld-loaded but deleted)
7. purge after guests are gone (files and mount gone)

All worked fine now, and without the mount unit some extra complexity of the test interactions is gone. Pushing to the MPs and pinging there ...

I tested cases like the one of Markus, where admins have themselves taken care of exec in the path; there we do NOT create an additional mount point and just use what is there already.

description: updated
Changed in qemu (Ubuntu Bionic):
assignee: nobody → Christian Ehrhardt  (paelzer)
Changed in qemu (Ubuntu Focal):
assignee: nobody → Christian Ehrhardt  (paelzer)
Changed in qemu (Ubuntu Hirsute):
assignee: nobody → Christian Ehrhardt  (paelzer)
description: updated
description: updated
description: updated
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

FYI: retests with all the feedback from the SRU team and my team are in the code and PPAs right now.
All looks good in the initial tests; I need to prep the environment to check that QEMU_MODULE_DIR works fine, and if it does this should be ready to go to -unapproved.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

An update on test #2 adding it to the description.

This sub-test exists only in Focal and Hirsute; it was not present in Bionic.
This should take preference over the other dirs (the usual load path as well as the fallback path). Therefore we do NOT remove the usual paths to check if it works; we keep them and instead check which binary got loaded.
TL;DR Copy the .so to another place and load it from there:
 $ sudo cp /usr/lib/x86_64-linux-gnu/qemu/block-curl.so /tmp/
 $ QEMU_MODULE_DIR="/tmp/" qemu-system-x86_64 -nographic -cdrom https://cdimage.ubuntu.com/ubuntu-server/daily-live/current/impish-live-server-amd64.iso
 # Then in other console check if it loaded that
 $ sudo cat /proc/$(pidof qemu-system-x86_64)/maps | grep curl | grep r-xp
7f3ef7dc1000-7f3ef7e23000 r-xp 0000c000 fc:01 5481 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.6.0
7f3efa729000-7f3efa72c000 r-xp 00002000 fc:01 1086 /tmp/block-curl.so

We see the qemu block lib from the wanted path and other libs from the system as usual.

description: updated
description: updated
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Ok, this finally should have been through enough verification, reviews and discussions.
Ready for consideration by the SRU Team, and thereby uploaded to Bionic-, Focal- and Hirsute-unapproved.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

We found a further small issue in review and I fixed it up.
Thanks, Robie, for spotting it!

Test:
In normal/default environments after an upgrade:

$ find /var/run/qemu/; findmnt -T /var/run/qemu
/var/run/qemu/
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-ssh.so
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-rbd.so
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-iscsi.so
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-curl.so
/var/run/qemu/README
TARGET SOURCE FSTYPE OPTIONS
/run/qemu none tmpfs rw,nosuid,nodev,relatime,mode=755,inode64

In an environment where exec on /var/run/qemu already works (here set up by remounting /run with exec):

$ sudo mount -o remount,exec /run
$ findmnt -T /var/run
TARGET SOURCE FSTYPE OPTIONS
/run tmpfs tmpfs rw,nosuid,nodev,relatime,size=202588k,mode=755,inode64

Then upgrade

$ find /var/run/qemu/; findmnt -T /var/run/qemu
/var/run/qemu/
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-ssh.so
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-rbd.so
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-iscsi.so
/var/run/qemu/Debian_1_5.2+dfsg-9ubuntu3.2~hirsuteppa7/block-curl.so
/var/run/qemu/README
/var/run/qemu/exec
TARGET SOURCE FSTYPE OPTIONS
/run tmpfs tmpfs rw,nosuid,nodev,relatime,size=202588k,mode=755,inode64

All other properties, e.g. no mount stacking, still hold.
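One simple way to double check the "no mount stacking" part (just a suggestion, not part of the formal test plan above):

  # two reinstalls in a row must not pile up tmpfs mounts on /run/qemu
  sudo apt install --reinstall qemu-block-extra
  sudo apt install --reinstall qemu-block-extra
  grep -c ' /run/qemu ' /proc/mounts    # expect 1 (default case) or 0 (exec already available)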

---

To sum this up: New builds in the PPA tested fine and I uploaded to -unapproved (again).

Revision history for this message
Robie Basak (racb) wrote : Please test proposed package

Hello Christian, or anyone else affected,

Accepted qemu into hirsute-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/qemu/1:5.2+dfsg-9ubuntu3.2 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-hirsute to verification-done-hirsute. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-hirsute. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in qemu (Ubuntu Hirsute):
status: In Progress → Fix Committed
tags: added: verification-needed verification-needed-hirsute
Changed in qemu (Ubuntu Focal):
status: In Progress → Fix Committed
tags: added: verification-needed-focal
Revision history for this message
Robie Basak (racb) wrote :

Hello Christian, or anyone else affected,

Accepted qemu into focal-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/qemu/1:4.2-3ubuntu6.18 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-focal to verification-done-focal. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-focal. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

Changed in qemu (Ubuntu Bionic):
status: In Progress → Fix Committed
tags: added: verification-needed-bionic
Revision history for this message
Robie Basak (racb) wrote :

Hello Christian, or anyone else affected,

Accepted qemu into bionic-proposed. The package will build now and be available at https://launchpad.net/ubuntu/+source/qemu/1:2.11+dfsg-1ubuntu7.38 in a few hours, and then in the -proposed repository.

Please help us by testing this new package. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Your feedback will aid us getting this update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, what testing has been performed on the package and change the tag from verification-needed-bionic to verification-done-bionic. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-failed-bionic. In either case, without details of your testing we will not be able to proceed.

Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance for helping!

N.B. The updated package will be released to -updates after the bug(s) fixed by this package have been verified and the package has been in -proposed for a minimum of 7 days.

description: updated
Revision history for this message
Ubuntu SRU Bot (ubuntu-sru-bot) wrote : Autopkgtest regression report (qemu/1:2.11+dfsg-1ubuntu7.38)

All autopkgtests for the newly accepted qemu (1:2.11+dfsg-1ubuntu7.38) for bionic have finished running.
The following regressions have been reported in tests triggered by the package:

cloud-utils/0.30-0ubuntu5 (i386)

Please visit the excuses page listed below and investigate the failures, proceeding afterwards as per the StableReleaseUpdates policy regarding autopkgtest regressions [1].

https://people.canonical.com/~ubuntu-archive/proposed-migration/bionic/update_excuses.html#qemu

[1] https://wiki.ubuntu.com/StableReleaseUpdates#Autopkgtest_Regressions

Thank you!

Revision history for this message
Ubuntu SRU Bot (ubuntu-sru-bot) wrote : Autopkgtest regression report (qemu/1:4.2-3ubuntu6.18)

All autopkgtests for the newly accepted qemu (1:4.2-3ubuntu6.18) for focal have finished running.
The following regressions have been reported in tests triggered by the package:

edk2/0~20191122.bd85bf54-2ubuntu3.3 (amd64)

Please visit the excuses page listed below and investigate the failures, proceeding afterwards as per the StableReleaseUpdates policy regarding autopkgtest regressions [1].

https://people.canonical.com/~ubuntu-archive/proposed-migration/focal/update_excuses.html#qemu

[1] https://wiki.ubuntu.com/StableReleaseUpdates#Autopkgtest_Regressions

Thank you!

Revision history for this message
Ubuntu SRU Bot (ubuntu-sru-bot) wrote : Autopkgtest regression report (qemu/1:5.2+dfsg-9ubuntu3.2)

All autopkgtests for the newly accepted qemu (1:5.2+dfsg-9ubuntu3.2) for hirsute have finished running.
The following regressions have been reported in tests triggered by the package:

ubuntu-image/1.11+21.04ubuntu2 (amd64)
casper/1.461 (amd64)

Please visit the excuses page listed below and investigate the failures, proceeding afterwards as per the StableReleaseUpdates policy regarding autopkgtest regressions [1].

https://people.canonical.com/~ubuntu-archive/proposed-migration/hirsute/update_excuses.html#qemu

[1] https://wiki.ubuntu.com/StableReleaseUpdates#Autopkgtest_Regressions

Thank you!

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

FYI, the test issues above were just flaky tests and have been resolved by now.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Testing - Upgrade from the former version

Bionic
ubuntu@qemu-module-bionic:~$ sudo apt update; sudo apt upgrade -y
Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-proposed InRelease [242 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 Packages [104 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-proposed/main Translation-en [26.6 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-proposed/restricted amd64 Packages [57.0 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic-proposed/restricted Translation-en [9756 B]
Get:10 http://archive.ubuntu.com/ubuntu bionic-proposed/universe amd64 Packages [13.2 kB]
Get:11 http://archive.ubuntu.com/ubuntu bionic-proposed/universe Translation-en [8404 B]
Fetched 625 kB in 1s (531 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
25 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  linux-headers-4.15.0-160 linux-headers-4.15.0-160-generic linux-image-4.15.0-160-generic linux-modules-4.15.0-160-generic
The following packages will be upgraded:
  libnetplan0 libparted2 libx11-6 libx11-data linux-base linux-headers-generic linux-headers-virtual linux-image-virtual linux-virtual login netplan.io nplan parted passwd
  python3-software-properties python3-update-manager qemu-block-extra qemu-kvm qemu-system-common qemu-system-x86 qemu-utils secureboot-db software-properties-common uidmap
  update-manager-core
25 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 42.8 MB of archives.
After this operation, 168 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 login amd64 1:4.5-1ubuntu2.1 [307 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 qemu-utils amd64 1:2.11+dfsg-1ubuntu7.38 [869 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 qemu-system-common amd64 1:2.11+dfsg-1ubuntu7.38 [672 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 qemu-block-extra amd64 1:2.11+dfsg-1ubuntu7.38 [41.8 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 passwd amd64 1:4.5-1ubuntu2.1 [819 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 linux-base all 4.5ubuntu1.7 [17.9 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 libnetplan0 amd64 0.99-0ubuntu3~18.04.5 [22.6 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 netplan.io amd64 0.99-0ubuntu3~18.04.5 [71.1 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 nplan all 0.99-0ubuntu3~18.04.5 [1800 B]
Get:10 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 parted amd64 3.2-20ubuntu0.3 [4...

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Test I

Bionic
ubuntu@qemu-module-bionic:~$ virsh start lateload
Domain lateload started

ubuntu@qemu-module-bionic:~$ sudo mv /usr/lib/x86_64-linux-gnu/qemu/block-curl.so /root/block-curl.so.notherightplace
ubuntu@qemu-module-bionic:~$ virsh attach-device lateload curldisk.xml
Device attached successfully

ubuntu@qemu-module-bionic:~$ sudo cat /proc/$(pidof qemu-system-x86_64)/maps | grep curl
7f79ee774000-7f79ee7f0000 r-xp 00000000 fc:01 6445 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0
7f79ee7f0000-7f79ee9f0000 ---p 0007c000 fc:01 6445 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0
7f79ee9f0000-7f79ee9f3000 r--p 0007c000 fc:01 6445 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0
7f79ee9f3000-7f79ee9f4000 rw-p 0007f000 fc:01 6445 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0
7f79ee9f4000-7f79ee9f8000 r-xp 00000000 00:33 14 /run/qemu/_Debian_1_2.11+dfsg-1ubuntu7.38_/block-curl.so
7f79ee9f8000-7f79eebf8000 ---p 00004000 00:33 14 /run/qemu/_Debian_1_2.11+dfsg-1ubuntu7.38_/block-curl.so
7f79eebf8000-7f79eebf9000 r--p 00004000 00:33 14 /run/qemu/_Debian_1_2.11+dfsg-1ubuntu7.38_/block-curl.so
7f79eebf9000-7f79eebfa000 rw-p 00005000 00:33 14 /run/qemu/_Debian_1_2.11+dfsg-1ubuntu7.38_/block-curl.so

Focal
ubuntu@qemu-module-focal:~$ virsh start lateload
Domain lateload started

ubuntu@qemu-module-focal:~$ sudo mv /usr/lib/x86_64-linux-gnu/qemu/block-curl.so /root/block-curl.so.notherightplace
ubuntu@qemu-module-focal:~$ virsh attach-device lateload curldisk.xml
Device attached successfully

ubuntu@qemu-module-focal:~$ sudo cat /proc/$(pidof qemu-system-x86_64)/maps | grep curl
7f7e00243000-7f7e0024f000 r--p 00000000 fc:01 5481 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.6.0
7f7e0024f000-7f7e002b1000 r-xp 0000c000 fc:01 5481 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.6.0
7f7e002b1000-7f7e002cc000 r--p 0006e000 fc:01 5481 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.6.0
7f7e002cc000-7f7e002cd000 ---p 00089000 fc:01 5481 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.6.0
7f7e002cd000-7f7e002d1000 r--p 00089000 fc:01 5481 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.6.0
7f7e002d1000-7f7e002d2000 rw-p 0008d000 fc:01 5481 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.6.0
7f7e016a4000-7f7e016a6000 r--p 00000000 00:34 18 /run/qemu/Debian_1_4.2-3ubuntu6.18/block-curl.so
7f7e016a6000-7f7e016a9000 r-xp 00002000 00:34 18 /run/qemu/Debian_1_4.2-3ubuntu6.18/block-curl.so
7f7e016a9000-7f7e016aa000 r--p 00005000 00:34 18 /run/qemu/Debian_1_4.2-3ubuntu6.18/block-curl.so
7f7e016aa000-7f7e016ab000 ---p 00006000 00:34 18 /run/qemu/Debian_1_4.2-3ubuntu6.18/block-curl.so
7f7e016ab000-7f7e016ac000 r--p 00006000 00:34 18 /run/qemu/Debian_1_4.2-3ubuntu6.18/block-curl.so
7f7e016ac000-7f7e016ad000 rw-p 00007000 00:34 18 ...


Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Test II

Bionic
- does not apply

Focal
ubuntu@qemu-module-focal:~$ QEMU_MODULE_DIR="/tmp/" qemu-system-x86_64 -nographic -cdrom https://cdimage.ubuntu.com/ubuntu-server/daily-live/current/impish-live-server-amd64.iso &
[1] 41811
ubuntu@qemu-module-focal:~$ sudo cat /proc/$(pidof qemu-system-x86_64)/maps | grep curl | grep r-xp
7ff36a966000-7ff36a9c8000 r-xp 0000c000 fc:01 5481 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.6.0
7ff36d2ce000-7ff36d2d1000 r-xp 00002000 fc:01 1086 /tmp/block-curl.so

Hirsute
ubuntu@qemu-module-bionic:~$ sudo cat /proc/$(pidof qemu-system-x86_64)/maps | grep curl | grep r-xp
7fe8e2fae000-7fe8e302a000 r-xp 00000000 fc:01 6445 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0
7fe8e322e000-7fe8e3232000 r-xp 00000000 fc:01 284910 /usr/lib/x86_64-linux-gnu/qemu/block-curl.so

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Test III:
no extra MP and no MP stacking if already executable

Bionic
ubuntu@qemu-module-bionic:~$ sudo umount /var/run/qemu/; sudo rm -rf /var/run/qemu; sudo mount -o remount,exec /run
ubuntu@qemu-module-bionic:~$ find /var/run/qemu/; findmnt -T /var/run/qemu
find: ‘/var/run/qemu/’: No such file or directory
ubuntu@qemu-module-bionic:~$ sudo apt install --reinstall qemu-block-extra
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
Need to get 41.8 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-proposed/main amd64 qemu-block-extra amd64 1:2.11+dfsg-1ubuntu7.38 [41.8 kB]
Fetched 41.8 kB in 0s (138 kB/s)
(Reading database ... 125824 files and directories currently installed.)
Preparing to unpack .../qemu-block-extra_1%3a2.11+dfsg-1ubuntu7.38_amd64.deb ...
Unpacking qemu-block-extra:amd64 (1:2.11+dfsg-1ubuntu7.38) over (1:2.11+dfsg-1ubuntu7.38) ...
Setting up qemu-block-extra:amd64 (1:2.11+dfsg-1ubuntu7.38) ...
ubuntu@qemu-module-bionic:~$ find /var/run/qemu/; findmnt -T /var/run/qemu
/var/run/qemu/
/var/run/qemu/_Debian_1_2.11+dfsg-1ubuntu7.38_
/var/run/qemu/_Debian_1_2.11+dfsg-1ubuntu7.38_/block-rbd.so
/var/run/qemu/_Debian_1_2.11+dfsg-1ubuntu7.38_/block-iscsi.so
/var/run/qemu/README
/var/run/qemu/exec
TARGET SOURCE FSTYPE OPTIONS
/run tmpfs tmpfs rw,nosuid,relatime,size=204068k,mode=755

Focal
ubuntu@qemu-module-focal:~$ sudo umount /var/run/qemu/; sudo rm -rf /var/run/qemu; sudo mount -o remount,exec /run
ubuntu@qemu-module-focal:~$ find /var/run/qemu/; findmnt -T /var/run/qemu
find: ‘/var/run/qemu/’: No such file or directory
ubuntu@qemu-module-focal:~$ sudo apt install --reinstall qemu-block-extra
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
Need to get 54.8 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu focal-proposed/main amd64 qemu-block-extra amd64 1:4.2-3ubuntu6.18 [54.8 kB]
Fetched 54.8 kB in 0s (173 kB/s)
(Reading database ... 142014 files and directories currently installed.)
Preparing to unpack .../qemu-block-extra_1%3a4.2-3ubuntu6.18_amd64.deb ...
Unpacking qemu-block-extra:amd64 (1:4.2-3ubuntu6.18) over (1:4.2-3ubuntu6.18) ...
Setting up qemu-block-extra:amd64 (1:4.2-3ubuntu6.18) ...
ubuntu@qemu-module-focal:~$ find /var/run/qemu/; findmnt -T /var/run/qemu
/var/run/qemu/
/var/run/qemu/Debian_1_4.2-3ubuntu6.18
/var/run/qemu/Debian_1_4.2-3ubuntu6.18/block-ssh.so
/var/run/qemu/Debian_1_4.2-3ubuntu6.18/block-rbd.so
/var/run/qemu/Debian_1_4.2-3ubuntu6.18/block-iscsi.so
/var/run/qemu/README
/var/run/qemu/exec
TARGET SOURCE FSTYPE OPTIONS
/run tmpfs tmpfs rw,nosuid,nodev,relatime,size=203516k,mode=755

Hirsute
ubuntu@qemu-module-hirsute:~$ sudo umount /var/run/qemu/; sudo rm -rf /var/run/qemu; sudo mount -o remount,exec /run
ubuntu@qemu-module-hirsute:~$ find /var/run/qemu/; findmnt -T /var/run/qemu
find: ...


Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

All explicit tests for this completed successfully.

The same builds (just from the PPA) showed no issues in the general regression tests run earlier, so I have not re-run those.

Overall I consider this verified and update the tags.

tags: added: verification-done verification-done-bionic verification-done-focal verification-done-hirsute
removed: verification-needed verification-needed-bionic verification-needed-focal verification-needed-hirsute
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package qemu - 1:4.2-3ubuntu6.18

---------------
qemu (1:4.2-3ubuntu6.18) focal; urgency=medium

  * enhance loading of old modules post upgrade (LP: #1913421)
    - d/rules: d/qemu-system-gui.{prerm,postrm}.in: do not save gui modules
      (can't be loaded late)
    - d/qemu-block-extra.postrm.in: clear all (current and former) modules
      on purge
    - d/qemu-block-extra.prerm.in: test for exec and prepare /var/run/qemu
      if needed

 -- Christian Ehrhardt <email address hidden> Thu, 19 Aug 2021 14:10:54 +0200

Changed in qemu (Ubuntu Focal):
status: Fix Committed → Fix Released
Revision history for this message
Brian Murray (brian-murray) wrote : Update Released

The verification of the Stable Release Update for qemu has completed successfully and the package is now being released to -updates. Subsequently, the Ubuntu Stable Release Updates Team is being unsubscribed and will not receive messages about this bug report. In the event that you encounter a regression using the package from -updates please report a new bug using ubuntu-bug and tag the bug report regression-update so we can easily find any regressions.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package qemu - 1:5.2+dfsg-9ubuntu3.2

---------------
qemu (1:5.2+dfsg-9ubuntu3.2) hirsute; urgency=medium

  * d/rules fix microvm default machine type for a new build system
    (LP: #1936894) - Thanks to Michael Tokarev for the fix.
  * enhance loading of old modules post upgrade (LP: #1913421)
    - d/rules: clear all (current and former) modules on purge
    - d/rules: test for exec and prepare /var/run/qemu if needed

 -- Christian Ehrhardt <email address hidden> Thu, 19 Aug 2021 11:25:17 +0200

Changed in qemu (Ubuntu Hirsute):
status: Fix Committed → Fix Released
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package qemu - 1:2.11+dfsg-1ubuntu7.38

---------------
qemu (1:2.11+dfsg-1ubuntu7.38) bionic; urgency=medium

  * enhance loading of old modules post upgrade (LP: #1913421)
    - d/qemu-block-extra.prerm.in: clear all (current and former) modules
      on purge
    - d/qemu-block-extra.prerm.in: test for exec and prepare /var/run/qemu
      if needed

 -- Christian Ehrhardt <email address hidden> Thu, 19 Aug 2021 14:30:25 +0200

Changed in qemu (Ubuntu Bionic):
status: Fix Committed → Fix Released