LVM filesystems not mounted at boot

Bug #147216 reported by Rudd-O
This bug affects 19 people
Affects: lvm2 (Ubuntu)
Status: Confirmed
Importance: Undecided
Assigned to: Unassigned

Bug Description

My system has a root filesystem in an LVM logical volume, backed by two MD RAID-1 arrays. It fails to boot properly after upgrading to the Kubuntu Gutsy Gibbon prerelease (latest packages as of today). In order to boot it, I have to specify the break=mount boot option, then subsequently run

lvm vgscan
lvm vgchange -a y

to manually enable the LVM volumes. I should note that the MD arrays are started successfully.
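
For reference, the full manual recovery looks roughly like this from the initramfs shell (the VG name and exact prompts will of course vary from system to system; exiting the shell lets the boot continue once the volumes are active):

# append break=mount to the kernel command line, then at the (initramfs) prompt:
(initramfs) lvm vgscan          # rescan block devices for volume groups
(initramfs) lvm vgchange -a y   # activate every logical volume that was found
(initramfs) exit                # leave the shell; the boot carries on normally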

Revision history for this message
Rudd-O (rudd-o) wrote :

Basically I suspect that the udev events that are supposed to be triggered during the boot process in the initramfs are not getting triggered, hence a vgscan is never done. My guess is based on the contents of:

rudd-o@karen:/etc/udev/rules.d$ cat 85-lvm2.rules
# This file causes block devices with LVM signatures to be automatically
# added to their volume group.
# See udev(8) for syntax

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="lvm*|LVM*", \
        RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"
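
One quick way to see whether that rule would actually fire for the MD devices is to replay the udev processing for them; a rough check, assuming the PV sits on /dev/md0 as below and a udev recent enough to ship udevadm (older releases used udevtest/udevtrigger instead):

# dry-run the rules for /dev/md0 and check whether the lvm RUN entry gets queued
udevadm test /sys/block/md0 2>&1 | grep -i lvm

# or re-fire the change events for the md devices and watch whether the VG appears
udevadm trigger --action=change --subsystem-match=block --sysname-match='md*'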

Here's the output of pvdisplay:

root@karen:/etc/udev/rules.d# pvdisplay
  --- Physical volume ---
  PV Name /dev/md0
  VG Name vg0
  PV Size 362,49 GB / not usable 1,25 MB
  Allocatable yes (but full)
  PE Size (KByte) 4096
  Total PE 92797
  Free PE 0
  Allocated PE 92797
  PV UUID OoPCI9-1Lj3-0eU7-3dbE-dLIz-0AKz-sjcJMz

  --- Physical volume ---
  PV Name /dev/md1
  VG Name vg0
  PV Size 9,54 GB / not usable 1,81 MB
  Allocatable yes (but full)
  PE Size (KByte) 4096
  Total PE 2443
  Free PE 0
  Allocated PE 2443
  PV UUID 5SndoF-iOPs-kGpy-WQzl-S0BJ-wydK-PSaOC6

Revision history for this message
Martin Maney (maney) wrote :

This seems to be the bug that's most similar to my own experience. In my case, a Feisty box was upgraded to Gutsy (network upgrade about a week ago), using, because I am old-fashioned, apt-get dist-upgrade. Unlike most (any?) of the other reports, I do NOT use LVM for the root partition (which includes /boot; /usr, /var, /tmp, /home and swap are in LVs). After some poking I discovered that at the time checkfs.sh was being called, the volumes, both physical and logical, were known to the LVM system (e.g., pvs and lvs showed everything as it should be), but they had no entries in /dev/mapper. Typing just "vgchange -a y" at the shell prompt, then Ctrl-D to resume, completed the boot normally (but without ever running fsck, of course).

I'm currently using a modified checkfs.sh, with the vgchange command added to the start) case before do_start, and that seems to work around the issue for me.
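
For anyone wanting the same stopgap, the change amounts to something like the following in the case statement of /etc/init.d/checkfs.sh (a sketch only; the exact layout of the script differs a little between releases):

case "$1" in
  start|"")
        # workaround: make sure the LVs exist in /dev/mapper before fsck runs
        /sbin/vgchange -a y >/dev/null 2>&1 || true
        do_start
        ;;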

Oh, using the Feisty kernel and initramfs never did work for me. It's possible that initramfs had been rebuilt, either during the upgrade or during my early thrashing about, and that this somehow confused it. I was able to get the Gutsy image working largely because I had a completely separate image to boot into.

BTW, I have another machine with a similar root=/dev/sda#, rest-in-LVM setup that got a fresh Gutsy install rather than an upgrade (it was running Dapper), and it has never had the least trouble with LVM. I've spent some time trying to find the key difference between them, but so far no luck.

Revision history for this message
Heliologue (heliologue) wrote :

I'll confirm this bug. I'm running an LVM2 volume of 2 drives, just for media. They work fine when started manually, but the /dev/mapper entries aren't there when fsck checks during boot. In other words, similar to Martin, except I didn't have the nifty workaround.

Revision history for this message
Iain Lane (laney) wrote :

I can also confirm this.

My scenario is this: I had all of my filesystems on one partition initially. I then decided to switch to using LVM across my other drives. I made the move in a Gutsy live CD, which went very smoothly - all data was migrated successfully and mount points set up fine. When I booted back into my Gutsy install however, the logical volumes weren't being activated. Martin's workaround seems to work fine.

Revision history for this message
Paul Holcomb (noptys) wrote :

Confirmed here under most recent Hardy as well.

Oddly, it worked fine under Gutsy (I just upgraded).

If I wait for the boot to time out and then run vgchange -ay, everything works fine.

I've cloned this machine and can re-create the problem in a virtual machine, so I can test any proposed fixes.

In my case I wonder whether it's a udev timing issue.

Revision history for this message
Seb (sroesner-ubuntu) wrote :

AFAICS it's not a timing issue, but a result of 'ENV{ID_FS_TYPE}=="lvm*|LVM*"'. /dev/md* does not have FS_TYPE "lvm*". So just remove that part from the 85-lvm2.rules file and everything should work fine.
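
Concretely, the rule would end up looking something like this (an untested sketch; if the rule is also copied into the initramfs, a subsequent update-initramfs -u is probably needed for the early-boot copy to pick up the change):

SUBSYSTEM=="block", ACTION=="add|change", \
        RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"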

Revision history for this message
Martin Maney (maney) wrote :

Just thought I'd mention that, much to my surprise, this same bug manifested in a fresh Hardy install (well, it was reusing existing partitions, including some non-root LVM ones). Is this thing ever going to be fixed? How about just getting the workaround installed so people aren't faced with a needlessly unbootable system?

Revision history for this message
Garth Snyder (garth-grsweb) wrote :

No, evidently this will never be fixed. :-) Confirmed still a problem in Intrepid.

Seb's suggested change fixes the problem for me, although I'm not sure what that clause is trying to protect against.

Details: nonroot RAID5 array -> /dev/md_d1 -> fastfive volume group -> /dev/fastfive/lvol0 -> /mnt/fastfive

The mount is present in /etc/fstab. The RAID array is started correctly at boot, but the logical volume is not activated. After vgchange -a y, mount -a puts things right. With 'ENV{ID_FS_TYPE}=="lvm*|LVM*"' removed from the 85-lvm2.rules file, the system boots smoothly with the logical volume correctly mounted.

Revision history for this message
Carson Brown (carsonb) wrote :

I'm currently experiencing this problem in maverick. I've added the /etc/udev/rules.d/85-lvm2.rules file with what's been suggested above (i.e., without 'ENV{ID_FS_TYPE}=="lvm*|LVM*"'), and updated my initramfs, but it isn't activating the LVM.

Any suggestions?

Revision history for this message
Kenrick Bingham (loxo) wrote :

Thank you, Seb, #6 fixed this for me on Hardy.

Should /dev/md* be changed to have the FS_TYPE "lvm*"?

Revision history for this message
Mark Tomich (mstomich) wrote :

FTR, I found it better to use the filter 'ENV{ID_TYPE}=="disk"' because it avoids the (needless) recursive scanning of all the logical volumes. Thus, I now have the following in /etc/udev/rules.d/85-lvm2.rules:

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_TYPE}=="disk", \
 RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"

Revision history for this message
Guy Van Sanden (gvs) wrote :

Just been having this exact problem on a clean Precise install (daily from 20120228).
Root, Swap and Var are mounted; the others aren't, and I need to activate them manually to make them work...

Revision history for this message
Guy Van Sanden (gvs) wrote :

# lvs
  LV         VG        Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  RootVol    VgTrinity -wi-ao  9.31g
  StorageVol VgTrinity -wi--- 50.00g
  SwapVol    VgTrinity -wi-ao  3.72g
  VarVol     VgTrinity -wi-ao  4.66g
  test       VgTrinity -wi---  8.00g

That says it all: StorageVol and test, the two volumes without the 'a' (active) flag in their Attr field, are exactly the ones that don't come up.

Revision history for this message
Vadim Gusev (zarbis) wrote :

I'm using Ubuntu 12.04 and am currently experiencing the same problem.

Steps I've taken to fix it (sketched as commands below):

1) copy /lib/udev/rules.d/85-lvm2.rules to /etc/udev/rules.d/
2) change ENV{ID_FS_TYPE}=="lvm*|LVM*" to ENV{ID_TYPE}=="disk"

I have one 3 TB PV with a GPT partition table, not LVM. Maybe that's causing problems, and in particular it may be the reason why step 2 is necessary.
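
As shell commands, those two steps come down to roughly this (an approximation; adjust paths and the match to taste):

# 1) copy the packaged rule so the local change takes precedence and survives upgrades
sudo cp /lib/udev/rules.d/85-lvm2.rules /etc/udev/rules.d/

# 2) swap the ID_FS_TYPE match for an ID_TYPE=="disk" match
sudo sed -i 's/ENV{ID_FS_TYPE}=="lvm\*|LVM\*"/ENV{ID_TYPE}=="disk"/' \
        /etc/udev/rules.d/85-lvm2.rules

# reload the rules, and rebuild the initramfs in case the early-boot copy matters too
sudo udevadm control --reload-rules
sudo update-initramfs -u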

Revision history for this message
Jayen (jayen) wrote :

I chose to change /lib/udev/rules.d/85-lvm2.rules to:
SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_USAGE}="raid", \
 RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"

Revision history for this message
Phillip Susi (psusi) wrote :

Can you check with blkid that the lvm pv is being detected with the correct ID_FS_USAGE?
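
For example, assuming the PV is /dev/md1 (the -p flag makes blkid probe the device directly instead of answering from its cache):

# low-level probe, printed as udev-style key=value pairs
sudo blkid -p -o udev /dev/md1

# or ask udev what it has already recorded for the device
udevadm info --query=property --name=/dev/md1 | grep '^ID_FS_'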

Changed in lvm2 (Ubuntu):
status: Confirmed → Incomplete
Revision history for this message
wizwiz50 (wizwiz50) wrote :

I'm using Ubuntu 12.04.2 and I'm facing the same problem.

I did a fresh install from alternate cd (amd64). Raid 1 and lvm volumes were created before installation.

With kernel 3.5.0-23-generic everything is working, but it isn't with 3.5.0-31-generic.
I'm getting these lines:

md1: unknown partition table
device-mapper: table: 252:0: linear : Device lookup failed

1) I've noticed that I could activate the volume by running vgchange -ay at the busybox prompt.

2) Adding this to /etc/udev/rules.d/85-lvm2.rules solved the problem.

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_USAGE}="raid", \
 RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"

How can I check the ID_FS_USAGE value?

Revision history for this message
Phillip Susi (psusi) wrote :

Sorry, you can check that with udevadm info. Since that snippet fixed it, that would indicate that it is being detected as a raid component instead of an lvm pv. I wonder if you have a leftover md signature. Which md device is supposed to be the lvm pv? Can you run mdadm -E on that md device and see if it also contains a raid superblock?

Revision history for this message
wizwiz50 (wizwiz50) wrote :

#pvdisplay
    Logging initialised at Wed May 29 23:58:28 2013
    Set umask to 0077
    Scanning for physical volume names
  --- Physical volume ---
  PV Name /dev/md1
  VG Name nas_data
  PV Size 3,64 TiB / not usable 3,68 MiB
  Allocatable yes
  PE Size 4,00 MiB
  Total PE 953605
  Free PE 23045
  Allocated PE 930560
  PV UUID ....

===============================
#mdadm --detail /dev/md1
/dev/md1:
        Version : 1.0
  Creation Time : Sat May 25 14:59:43 2013
     Raid Level : raid1
     Array Size : 3905969852 (3725.02 GiB 3999.71 GB)
  Used Dev Size : 3905969852 (3725.02 GiB 3999.71 GB)
   Raid Devices : 1
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Thu May 30 00:00:31 2013
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : chewbacca:1
           UUID : ...
         Events : 54

    Number Major Minor RaidDevice State
       0 8 52 0 active sync /dev/sdd4

===========
# mdadm -E /dev/sdd4
/dev/sdd4:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : ...
           Name : chewbacca:1
  Creation Time : Sat May 25 14:59:43 2013
     Raid Level : raid1
   Raid Devices : 1

 Avail Dev Size : 7811939704 (3725.02 GiB 3999.71 GB)
     Array Size : 3905969852 (3725.02 GiB 3999.71 GB)
   Super Offset : 7811939960 sectors
          State : clean
    Device UUID : ...

    Update Time : Thu May 30 00:03:10 2013
       Checksum : 5b64f0e1 - correct
         Events : 54

   Device Role : Active device 0
   Array State : A ('A' == active, '.' == missing)
=======================================
#udevadm info --query all -n /dev/md1
P: /devices/virtual/block/md1
N: md1
L: 100
S: disk/by-id/md-name-chewbacca:1
S: disk/by-id/md-uuid-...
S: md/1
E: DEVLINKS=/dev/disk/by-id/md-name-chewbacca:1 /dev/disk/by-id/md-uuid-... /dev/md/1
E: DEVNAME=/dev/md1
E: DEVPATH=/devices/virtual/block/md1
E: DEVTYPE=disk
E: ID_FS_TYPE=LVM2_member
E: ID_FS_USAGE=raid
E: ID_FS_UUID=Io764F-klxS-hThB-lYfi-UdSH-8gsq-jYeeKD
E: ID_FS_UUID_ENC=Io764F-klxS-hThB-lYfi-UdSH-8gsq-jYeeKD
E: ID_FS_VERSION=LVM2\x20001
E: MAJOR=9
E: MD_DEVICES=1
E: MD_DEVNAME=1
E: MD_LEVEL=raid1
E: MD_METADATA=1.0
E: MD_NAME=chewbacca:1
E: MD_UUID=c9fdeb4d:c60b4edb:550af426:5c627d34
E: MINOR=1
E: SUBSYSTEM=block
E: UDEV_LOG=3
E: USEC_INITIALIZED=2178849
===================

Anything else?

Revision history for this message
Phillip Susi (psusi) wrote :

In the initramfs busybox shell, you do have /lib/udev/rules.d/85-lvm2.rules right? That should be matching based on the ID_FS_TYPE.
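
A quick way to confirm what actually ended up inside the initramfs, without dropping to busybox (the initrd path below is the usual default and may differ on your setup):

# list the initramfs contents and look for the lvm pieces
lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'lvm|85-lvm2'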

Revision history for this message
wizwiz50 (wizwiz50) wrote :

I removed the custom udev rules to get back into busybox. I'm still getting the same error. There's indeed a rule in /lib/udev/rules.d/85-lvm2.rules while I'm in busybox...

As everything is OK with 3.5.0-23-generic (even if I rebuild the initramfs), could it be something like a race condition?

I've checked in busybox, ID_FS_TYPE is still LVM2_member.

Revision history for this message
SC46 (sc46) wrote :

I believe I'm seeing this as well, with a new install of Ubuntu 13.04. I installed to a new LVM root on a machine, and on startup it can't find the root partition to mount.

In the recovery shell, I could type "lvm vgchange -a y" to activate it.

The udev rules aren't helping me, so I changed /usr/share/initramfs-tools/scripts/init-premount/lvm2 to do a vgchange in addition to a vgscan.

--- lvm2.orig 2013-06-04 08:12:12.826097349 -0500
+++ lvm2 2013-06-04 08:11:50.429987991 -0500
@@ -10,6 +10,8 @@

 mountroot_fail()
 {
+ /sbin/lvm vgscan >/dev/null 2>&1
+ /sbin/lvm vgchange -a y
  if ! /sbin/lvm vgscan >/dev/null 2>&1 ; then
   cat <<EOF
 There appears to be one or more degraded LVM volumes, and your root device may

Phillip Susi (psusi)
Changed in lvm2 (Ubuntu):
status: Incomplete → Confirmed
Revision history for this message
Anthony Kamau (ak-launchpad) wrote :

Just wanted to add that after doing a vgrename without first doing a vgchange -an, I ended up with a system that could not activate the LVs on boot. I had to drop to a root shell (via 'M'), then type "vgchange -ay" followed by Ctrl+D to get a system (re)boot to complete successfully. I'll add that I'm not sure whether the vgrename without first deactivating the VG caused the issue!

Anyhow, after much googling, I came across this bug and after following the suggestion by Mark Tomich, namely:

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_TYPE}=="disk", \
 RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'"

I can now (re)boot without it stopping to wait for manual intervention.

My system:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.4 LTS
Release: 12.04
Codename: precise

$ uname -a
Linux akk-m6700 3.11.0-19-generic #33~precise1-Ubuntu SMP Wed Mar 12 21:16:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

I'm running the kernel from - linux-image-generic-lts-saucy.

Cheers,
ak.

Revision history for this message
Anthony Kamau (ak-launchpad) wrote :

Forgot to add that the LVs are for media files and VirtualBox VMs, so the system could otherwise boot (if I chose to skip mounting the LVs via 'S'). I'd then still have to run 'vgchange -ay' and manually mount the partitions after the system was up.

Cheers,
ak.

Revision history for this message
Anthony Kamau (ak-launchpad) wrote :

Given that I'm still having this issue on Ubuntu 15.10, I can only conclude that this appears to be a design philosophy at Canonical rather than a bug: logical volumes that are not necessary for system boot and/or are not on the primary disk where the OS is installed are not made available / mounted on (re)boot!

It is much better that the system comes up rather than waiting for user input at (re)boot.

Case closed, at least for me!

ak.

Revision history for this message
Phillip Susi (psusi) wrote :

This is weird... I see no reason why the rule shouldn't work as-is, and it works fine for me without any modifications...
