linux software RAID not working after herd 3 installation..

Bug #83231 reported by Quikee
Affects                    Status        Importance  Assigned to  Milestone
initramfs-tools (Ubuntu)   Fix Released  Undecided   Unassigned
    Nominated for Feisty by Michael Olson
lvm2 (Ubuntu)              Fix Released  Undecided   Unassigned
    Nominated for Feisty by Michael Olson
udev (Ubuntu)              Fix Released  Undecided   Unassigned
    Nominated for Feisty by Michael Olson

Bug Description

Binary package hint: initramfs-tools

I decided to use RAID for the 2 drives in my system, and after some research I figured the easiest way to do this was to reinstall Ubuntu with the latest Herd 3 alternate CD.

I decided to try with the following configuration on my 2 drives:

/boot - RAID 1
/ - RAID 0
/backup - RAID 1

Everything went smoothly (except for the long hardware detection, which is already a known bug) and the installation finished successfully. But after rebooting, GRUB wasn't updated correctly. I fixed this with the Ubuntu live CD, but it is annoying (it might be worth a bug report of its own).

The next, much bigger annoyance, which took me hours to figure out, is that mdadm tried to assemble the arrays BEFORE the drives were detected by the kernel. When booting, mdadm just reported "no devices found..." (or something like that). To fix this issue I had to put "sleep 10" in the file /usr/share/initramfs-tools/init before the line: log_begin_msg "Mounting root file system..."
After that I updated the initramfs.
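
For reference, the workaround boils down to roughly this edit (a sketch; the exact placement of the line may differ between initramfs-tools versions), followed by running "update-initramfs -u" so the change ends up in the initrd:

  # excerpt from /usr/share/initramfs-tools/init, hand-edited
  sleep 10    # crude workaround: give the kernel time to detect the drives
  log_begin_msg "Mounting root file system..."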

This fixed the issue, but it is not the right solution.

Thanks and have a nice day.

P.S. It would be nice if "mdadm" were already included on the Ubuntu live/installation CD. As it is, I had to enable the universe repositories, update, and download "mdadm" with apt/synaptic every time I booted the live CD to try to fix my issue (if the live CD hadn't correctly detected my network, it would have taken me even longer to figure out and fix the problem).

Tags: dmraid mdadm
Revision history for this message
Codink (herco) wrote :

I can confirm this in Herd 4.

The exact error message:
mdadm: No devices listed in conf file were found.

Revision history for this message
nyinge (wow-naxx) wrote :

I can confirm this as well; however, in my case it is with dmraid, with a similar nature. The problem still exists as of today (the Feisty Herd 5 release date) with packages fully upgraded. Quikee's quick fix solves the problem, but of course a thorough patch would be super.

Revision history for this message
risidoro (risidoro-gmail) wrote :

I confirm Nyinge's comment! I had a similar problem with Feisty on a fakeraid partition (dmraid) and solved it with Quikee's 'hack'.

Revision history for this message
Velociraptor (warren-haslam) wrote :

I too confirm this bug for dmraid on Feisty Herd 5 (amd64). When booting from the HDD the only message I receive to the console is "no block devices found". (I believe this message is issued by dmraid.) The workaround suggested by Quikee solves the problem.

I used the update-manager to upgrade from Edgy. Note: Edgy didn't exhibit this problem.

Revision history for this message
risidoro (risidoro-gmail) wrote :

Quikee's method works even with only "sleep 2" instead of "sleep 10".

Revision history for this message
nyinge (wow-naxx) wrote :

Velociraptor said, "Edgy didn't exhibit this problem."

Yes, I didn't see this problem in Edgy either.

Revision history for this message
Alvin (alvind) wrote :

I can confirm this in Feisty Herd 5 (i386).
mdadm: No devices listed in conf file were found.

Revision history for this message
Óscar Rodríguez Ríos (ingorr01) wrote :

Confirmed on Feisty Herd 5 (AMD64), updated 12/Mar/2007. Doesn't boot.

mdadm: no devices found for /dev/md1

Hardware:
Dual Core AMD Opteron 175
RAID 5
Kernel 2.6.20-9-server

Very annoying bug.
The "trick" posted by Quikee works.

Best regards,
neuromancer

Revision history for this message
Jeremy Vies (jeremy.vies) wrote :

I'm having the same problem on a dmraid root partition.

I've had a look at bug #85640 about a problem with an encrypted root partition. It seems their problem is due to the fact that /dev is not completely populated when it tries to decrypt. I wonder if we have the same kind of problem here...

They solved it by using udevsettle. In my opinion, that would be a nicer solution than a sleep, so I'll give it a try this evening.

Revision history for this message
Jeremy Vies (jeremy.vies) wrote :

The "udevsettle" works. I'll try (tomorrow) to add the udevsettle at the end of the udev populating script in /usr/share/initramfs-tools/scripts/init-premount/udev.

Revision history for this message
Jeremy Vies (jeremy.vies) wrote :

the "udevsettle --timeout 10" at the end of /usr/share/initramfs-tools/scripts/init-premount/udev works too.

We now have several solutions to propose to the udev and dmraid packagers.

Revision history for this message
Quikee (quikee) wrote :

I tried "udevsettle" and it is a much better solution. It should be "udevsettle --timeout=10".

Revision history for this message
nyinge (wow-naxx) wrote :

udevsettle --timeout=10

Works for me too.
@ Quikee: Why is it a better solution? I'm just curious. Speed-wise, I think they're about the same.

Revision history for this message
Jeremy Vies (jeremy.vies) wrote :

It's better because it is cleaner. To me, the problem comes from udev, so the correction should be in udev.

udevsettle --timeout=10 waits for udev to finish its work, for at most 10 seconds. So it should be quicker than the sleep 10 solution.

Changed in initramfs-tools:
status: Unconfirmed → Confirmed
Revision history for this message
James (boddingt) wrote :

Just installed ubuntu-7.04-beta-server-i386 with / on software RAID. I am having the same problem: mdadm is being run before the drives are detected, leaving me with a new install that won't boot.

Revision history for this message
James (boddingt) wrote :

To the people who discussed the workaround, thank you. I used the udevsettle workaround and it worked. That ended a rather frustrating afternoon.

Revision history for this message
James (boddingt) wrote :

This is disturbing.

I was going to ask what happens if initramfs-tools is updated and the problem is not fixed.

I just did an update on a working machine with the above workaround, and initramfs-tools was one of the packages updated. This triggered an update of the initrd. That computer no longer boots; instead it gives the "mdadm: No devices listed in conf file were found." message again.

Revision history for this message
Jeremy Vies (jeremy.vies) wrote : Re: [Bug 83231] Re: linux software RAID not working after herd 3 installation..

If you've put udevsettle in the udev script of initramfs-tools, you need to replace the line at the end of the file each time the udev package is updated.


Revision history for this message
Quikee (quikee) wrote :

I created a file "udevSettle" (attached) in the same folder as the udev script (/usr/share/initramfs-tools/scripts/init-premount/), which is executed after udev and should be unaffected by package updates. I don't know whether it works correctly in every case, but it works for me.
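
Since the attachment itself is not reproduced here, the script presumably looks something like this hypothetical reconstruction of a standard init-premount hook (it must be made executable, and the initramfs regenerated with "update-initramfs -u"):

  #!/bin/sh
  # /usr/share/initramfs-tools/scripts/init-premount/udevSettle
  # hypothetical reconstruction: runs after the udev script and waits for
  # device nodes to show up before the RAID arrays are assembled
  PREREQ="udev"
  prereqs()
  {
          echo "$PREREQ"
  }
  case "$1" in
      prereqs)
          prereqs
          exit 0
          ;;
  esac
  udevsettle --timeout=10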

Revision history for this message
Kralin (andrea-pierleoni) wrote :

I've got the same problem with the desktop beta (amd64).
I hope this will be fixed before the final release...

I'll try your methods to correct it, thank you!!!

Revision history for this message
Ryan Ackley (raackley) wrote :

Confirmed this with the i386 3-29 daily build.

Revision history for this message
Manoj Kasichainula (manoj+launchpad-net) wrote :

Got kicked out of #75681; re-reporting here:

The race condition fixes in #75681 did not fix my boot problems, which are like those above (RAID not detected). Adding a udevsettle script as described above fixed my problem. I dist-upgraded and tested about 12 hours ago. My package list now matches the one with the bugs in #75681 fixed (except that I am not using LVM).

> dpkg -l dmsetup libdevmapper1.02 lvm-common lvm2 mdadm udev volumeid libvolume-id0
No packages found matching lvm-common.
No packages found matching lvm2.
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed
|/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
||/ Name Version Description
+++-=======================-=======================-==============================================================
ii dmsetup 1.02.08-1ubuntu6 The Linux Kernel Device Mapper userspace library
ii libdevmapper1.02 1.02.08-1ubuntu6 The Linux Kernel Device Mapper userspace library
ii libvolume-id0 108-0ubuntu1 volume identification library
ii mdadm 2.5.6-7ubuntu5 tool to administer Linux MD arrays (software RAID)
ii udev 108-0ubuntu1 rule-based device node and kernel event manager
ii volumeid 108-0ubuntu1 volume identification tool

Revision history for this message
Wilb (ubuntu-wilb) wrote :

Same thing here. I just did an install of the Feisty server beta and ran apt-get update && apt-get dist-upgrade, and came up with the same problems as before: I have to manually mount the arrays in the busybox shell or use the udevsettle temporary workaround. It's just a basic install with a small RAID 1 array for /boot and a 7 GB one for /, on a Silicon Image SATA controller on an Asus A7N8X (although it is not configured for any on-board RAID, purely mdadm software RAID).
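
For anyone else stranded at that prompt, the manual assembly usually amounts to something like this (a sketch; it relies on the mdadm.conf already present inside the initramfs):

  # at the (initramfs) busybox prompt
  mdadm --assemble --scan    # assemble every array listed in mdadm.conf
  exit                       # exiting the shell typically lets the boot continue once the root device exists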

Revision history for this message
Jeffrey Knockel (jeff250) wrote :

Thanks, Quikee. I tried your latest, most elegant workaround, and it worked great here.
I don't think this is an LVM problem, especially since I and many of the others here aren't using LVM.
I am going to add udev to the affected packages, since upgrading to udev 105 from Debian unstable also fixed this bug. (However, this had other understandable side effects.)

Revision history for this message
Scott James Remnant (Canonical) (canonical-scott) wrote :

GUYS, PLEASE STOP HIJACKING EXISTING BUG REPORTS!

IF YOU ARE HAVING PROBLEMS BOOTING WITH MDADM, LVM2, EVMS OR DEVMAPPER PLEASE

 ! FILE ! A ! NEW ! BUG !

If somebody else has already filed a new bug, DO NOT hijack it, please FILE ANOTHER NEW ONE.

We have had several separate, different, problems and many of those are fixed -- you may be hijacking a bug where the original submitter actually has a different underlying problem than you.

Attempts to comment on fixed bugs relating to this, or reopen those, will be ignored.

This is not a major thing to ask; we want to get all the problems fixed, and the best way you can help us do that is to file your own, unique, bug so we are aware of ALL of the problems and can examine each of them independently without the confusion of someone else with a different problem being louder in the report.

Changed in initramfs-tools:
status: Confirmed → Fix Released
Changed in lvm2:
status: Unconfirmed → Fix Released
Changed in udev:
status: Unconfirmed → Fix Released
Revision history for this message
GSMD (gsmdib) wrote :

My box did boot fine, showing that mdadm message though. The trouble began when I installed the server kernel and the initrd images got regenerated. The box became unbootable. The proposed fix worked out, thanks for that, but I'm still getting an error message at system shutdown (something like "Stopping MD0 [fail]") and have no clue what the reason for it is. That's unlikely to be related to this issue, but don't you get the same?

Revision history for this message
GSMD (gsmdib) wrote :

Even more on this. I've got Feisty on two of my servers (IDE and SCSI drives in RAID 1). These are servers, so they aren't rebooted frequently. What I found was that the machine with SCSI drives would eventually fall to busybox upon reboot and I'd have to reboot it manually to get it working. Today I rebooted the machine with IDE drives and it seems the same thing happened (I haven't managed to physically get to the server yet).
To sum this up, the bug is nastier than it seems and "udevsettle --timeout 10" doesn't help sort it out.

Any ideas?
