[->UUIDudev] /dev/md_* devices are not created

Bug #486361 reported by Rich Wales
This bug affects 2 people

Affects: mdadm (Ubuntu)
Status: Confirmed
Importance: Medium
Assigned to: Unassigned

Bug Description

Binary package hint: mountall

I upgraded a server from Jaunty (9.04) to Karmic (9.10) last night. After the upgrade, the server will no longer finish booting on its own. It appears to hang forever in mountall (version 1.0), saying "One or more of the mounts listed in /etc/fstab cannot yet be mounted" and listing all my filesystems.

I hit ESC for a recovery shell, fsck'ed every filesystem, and rebooted, but the same thing kept happening.

I was finally able to bring up the system by going into a recovery shell, remounting the (initially read-only) root via a "mount" command with the "-o remount,rw" option, mounting the other filesystems manually as well, and exiting the recovery shell via control-D. But when I restarted the server, the same thing happened again. So, I can reliably bring the system up by hand, but it won't come up by itself.
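
For reference, the manual recovery amounted to something like this (a rough sketch only, assuming the filesystems are listed in /etc/fstab; exact device names and mount points will differ per system):

  # from the mountall recovery shell: make the read-only root writable again
  mount -o remount,rw /
  # mount the remaining filesystems from /etc/fstab by hand
  mount -a
  # leave the recovery shell (equivalent to Ctrl-D) so booting continues
  exit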

I'm using the standard Karmic server kernel (2.6.31-14-generic-pae). However, the problem persisted when I tried booting my previous kernel (2.6.28-16-server) after doing the Karmic upgrade, so it doesn't appear to be a kernel bug.

The only unusual thing I'm aware of about my server configuration is that all of its filesystems (including the root) are RAID1 mirrors handled by mdadm. The server did, however, boot up just fine (all the way, by itself) when it was running Jaunty, before the upgrade to Karmic. And when I go into a recovery shell, it's evident that the RAID1 mirrors are there -- "cat /proc/mdstat" shows everything as expected, and commands like "mount /dev/md_d2 /var" work fine. It's not clear to me that there should be any reason why mountall can't handle RAID1 mirror devices, and maybe the problem really lies elsewhere, but . . . .

Please note that I am *not* using UUIDs to identify my filesystems in /etc/fstab. I mention this because I've seen a few people mention similar-sounding mountall problems that were resolved by editing /etc/fstab to use the actual device names rather than UUIDs; I'm already doing that, and I'm still having trouble.
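
To illustrate the distinction, an fstab entry in the device-name form versus the UUID form looks roughly like this (the filesystem type, options, and UUID value here are placeholders, not my actual configuration):

  # device-name form (what I am using):
  /dev/md_d2  /var  ext3  defaults  0  2
  # UUID form (what I am *not* using):
  # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var  ext3  defaults  0  2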

I also ran "aptitude update" and "aptitude dist-upgrade" again on the system, in case some part of the upgrade to Karmic had not finished properly -- another thing some people have suggested in connection with similar-sounding problems -- but this didn't fix the problem.

Rich Wales (richw) wrote :

I think I may have found the solution (or, at least, a workaround). I replaced all instances of device names of the form /dev/md_dX with the corresponding names of the form /dev/mdX (e.g., /dev/md_d1 -> /dev/md1). This involved editing /boot/grub/menu.lst, /etc/mdadm/mdadm.conf, and /etc/fstab, and then running "update-initramfs -k all -u" to rebuild the initramfs images. When I rebooted, the system came up on its own.
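
In outline, the change looks something like the following (a sketch only -- I edited the files by hand; the sed pattern assumes no other strings in those files contain "/dev/md_d"):

  # replace /dev/md_dX names with /dev/mdX in the boot and RAID configuration
  sudo sed -i 's|/dev/md_d|/dev/md|g' /etc/fstab /etc/mdadm/mdadm.conf /boot/grub/menu.lst
  # rebuild the initramfs for all installed kernels so the change takes effect at boot
  sudo update-initramfs -k all -u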

There may still be a bug / misfeature in mountall, however: Why did mountall hang when the root was on /dev/md_d1, but it worked fine when the root was on the (supposedly equivalent) /dev/md1 (and similarly for other filesystems on other RAID1 mirrors)?

Scott James Remnant (Canonical) (canonical-scott) wrote :

Please attach your /etc/fstab and the output of "sudo mountall --debug".

Where did you read documentation saying that /dev/md_dX devices were supported? I've never even heard of those devices, and they don't exist on my software RAID system.

Changed in mountall (Ubuntu):
status: New → Incomplete
importance: Undecided → Medium
Rich Wales (richw) wrote :

The /dev/md_dNN device names are mentioned near the end of the mdadm(8) man page, in the "DEVICE NAMES" section.

As it turns out, these devices aren't there now -- but they did exist on my server before I upgraded it from Jaunty to Karmic. (You'll just have to take my word for this -- I can't revert my server to Jaunty in order to create and produce the evidence.) The /dev/md_dNN devices, as best I can recall, were identical to the /dev/mdNN devices except that they were mode 600 instead of 660.
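
For anyone who wants to check their own system, the md device nodes that currently exist (and their modes) can be listed with:

  ls -l /dev/md*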

The fact that these devices aren't on my server now suggests to me that the problem I encountered here may not, in fact, be a "mountall" problem at all, but a bug / feature of whatever part of Karmic is responsible for setting up the RAID device names. See bug #430542.

stop (whoopwhoop) wrote :

Lucid Lynx 64bit (RAID 1 on home partition)

I get this message as well; the strange thing is that I don't get it with Karmic (or older releases), but I do get it with Lucid. Every cold boot results in:

One or more of the mounts listed in /etc/fstab cannot yet be mounted:
/home: waiting for /dev/md0
Press ESC to enter recovery shell

Rebooting almost never gives me this message.

Davias (davias) wrote :

Exactly the same situation: I had my workstation working perfectly with 9.04 AMD64, with all the latest updates, and did a dist-upgrade from Upgrade Manager. The upgrade did not complete for reasons logged under /var/.... I could not do anything but restart the system.
At boot, the system could not mount any of the md devices I have (md0, md1, md2, md3 -- respectively /, swap, /home, and /stripe).
I could do nothing since the busybox recovery terminal was read-only. I could see that the md arrays were OK using "cat /proc/mdstat", but I could not assemble the arrays to boot the computer. So after reading the above I did:

1) fsck (maybe not needed -- but it found many errors)
2) manually remounted the devices read-write with:
sudo mount /dev/md0 -o remount,rw
sudo mount /dev/md2 -o remount,rw
sudo mount /dev/md3 -o remount,rw

Note: /dev/md1 is swap, already mounted rw

3) aptitude update -> it would not run, so I ran "sudo dpkg --configure -a" and the apt database was rebuilt
4) aptitude update
5) aptitude upgrade, then restarted when it finished - OK!

But to do all this I had to have:
1) another computer available
2) pretty good system knowledge
3) cold blood

I can imagine the average user clicking "upgrade" and then having a non-bootable system!

Ubuntu + RAID = not for the faint of heart.

Rich, thank you so much for posting this info! You're the man...

Changed in mountall (Ubuntu):
status: Incomplete → New
Scott James Remnant (Canonical) (canonical-scott) wrote : Re: /dev/md_* devices are not created in /dev

Folks, this bug (as reported by the original reporter) is that RAID devices named /dev/md_d* are not being created in Karmic. I've reassigned it over to the software RAID tools (which I notice are due for a fairly large update, so this might be an out-of-date-tools issue).

If you are not the original reporter and you are having problems with software RAID devices of different names, then you do not share the same bug and should report a new one for the problem you are experiencing.

summary: - raid1 filesystems "cannot yet be mounted" after karmic upgrade
+ /dev/md_* devices are not created in /dev
affects: mountall (Ubuntu) → mdadm (Ubuntu)
ceg (ceg) wrote :

The thread http://ubuntuforums.org/showthread.php?p=8407182, mentioned on https://wiki.ubuntu.com/ReliableRaid, talks about /dev/md_d*-style auto-created devices as a symptom of arrays that are not defined/whitelisted in Ubuntu's hotplug setup.

ceg (ceg)
summary: - /dev/md_* devices are not created in /dev
+ /dev/md_* devices are not created
summary: - /dev/md_* devices are not created
+ [->UUIDudev] /dev/md_* devices are not created
ceg (ceg) wrote :

Because md* and md_d* device names aren't unique and can collide, the devices are not created if the superblocks and the ARRAY lines in mdadm.conf don't match.

This should be solved with purely UUID-based RAID assembly. See bug #158918.
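
For context, an ARRAY line keyed on the superblock UUID (rather than on a device name) can be generated from the running arrays; the UUID below is a placeholder, and the exact output format varies by mdadm version:

  # print ARRAY lines for the currently assembled arrays, suitable for mdadm.conf
  sudo mdadm --detail --scan
  # example of the resulting mdadm.conf entry:
  # ARRAY /dev/md0 metadata=0.90 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx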

Changed in mdadm (Ubuntu):
status: New → Confirmed