SRU: Backport of Boot Degraded RAID functionality from Intrepid to Hardy

Bug #290885 reported by Dustin Kirkland 
This bug affects 2 people
Affects                    Status        Importance  Assigned to
grub (Ubuntu)              Fix Released  Wishlist    Unassigned
  Hardy                    Fix Released  Undecided   Dustin Kirkland
grub-installer (Ubuntu)    Fix Released  Wishlist    Unassigned
  Hardy                    Fix Released  Undecided   Dustin Kirkland
initramfs-tools (Ubuntu)   Fix Released  Wishlist    Unassigned
  Hardy                    Fix Released  Wishlist    Dustin Kirkland
mdadm (Ubuntu)             Fix Released  Wishlist    Unassigned
  Hardy                    Fix Released  Undecided   Dustin Kirkland
ubuntu-docs (Ubuntu)       Won't Fix     Undecided   Unassigned
  Hardy                    Won't Fix     Undecided   Unassigned

Bug Description

Binary package hint: mdadm

We have significantly improved booting on degraded software RAID in Ubuntu Intrepid Ibex. Numerous Hardy users have requested a backport of this functionality to Ubuntu 8.04 LTS.

This will involve updating:
 * mdadm, initramfs-tools, grub

grub-installer will also need to be updated.

The grub-installer and mdadm-udeb changes would ideally be included in an 8.04.2 update.

:-Dustin

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

I have some very preliminary, working packages in my PPA, available for testing. See:
    * https://launchpad.net/~kirkland/+archive
          o grub
          o initramfs-tools
          o mdadm

If you have some spare hardware or virtualization at your disposal, and you're willing and able to test these packages on a non-critical development/test system, I would appreciate hearing your experience.

The test case is essentially as follows:

   1. Install Ubuntu 8.04.1 LTS (Hardy Heron) onto a software RAID1 configuration
   2. Update your package list, upgrade all packages, dist-upgrade, and reboot
   3. Add my PPA to your /etc/apt/sources.list, update your package list, and pull my updated Hardy packages
   4. Re-install grub to your RAID device, "grub-install /dev/md0" or whatever might be appropriate
   5. Reboot with both disks (ensure this continues to work)
   6. Reboot, with only the first of the two disks (you should be prompted if you'd like to boot or not; first test answering "No"; reboot and test answering "Yes")
   7. Reboot with only the second of the two disks, and again test both the "No" and "Yes" behavior
   8. Reboot with both disks attached, and one disk should be "missing" from the array since they are now out of sync, having each been booted independently. Add the missing disk back to the array with something like "mdadm /dev/md0 --add /dev/sdb1", and let it re-sync.

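For step 8, the re-sync boils down to something like the following (assuming /dev/sdb1 is the partition that dropped out of the array; adjust the device names for your setup):

   $ cat /proc/mdstat                      # one member should show as missing, e.g. [U_]
   $ sudo mdadm /dev/md0 --add /dev/sdb1   # re-add the out-of-sync partition
   $ watch -n1 cat /proc/mdstat            # wait for the rebuild to reach [UU]
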
Please post any positive or negative testing results against my PPA packages as a comment to this bug.

:-Dustin

Changed in grub:
assignee: nobody → kirkland
importance: Undecided → Wishlist
milestone: none → ubuntu-8.04.2
status: New → In Progress
Changed in grub-installer:
assignee: nobody → kirkland
importance: Undecided → Wishlist
milestone: none → ubuntu-8.04.2
status: New → In Progress
Changed in initramfs-tools:
assignee: nobody → kirkland
importance: Undecided → Wishlist
milestone: none → ubuntu-8.04.2
status: New → In Progress
Changed in mdadm:
assignee: nobody → kirkland
importance: Undecided → Wishlist
milestone: none → ubuntu-8.04.2
status: New → In Progress
Revision history for this message
Bill Smith (bsmith1051) wrote :

Dustin,
Thanks for working on back-porting this. (P.S. Congratulations on your interview in the Ubuntu newsletter!)

I'm trying to test it on my Ubuntu 8.04.1 setup but I'm not sure what's the appropriate method for updating GRUB. You mention running 'grub-install' against the RAID device (e.g. 'md0') but I had previously run it against the individual drives (e.g. 'sda' and 'sdb'). Does it matter?

Also, does it matter if we've previously tried to patch the 'initramfs' script as previously suggested? In my case, it looks like I have not (on my test system, at least). My main system *does* have the modified script, etc, as outlined on my forum posting circa Ubuntu 7.10,
http://ubuntuforums.org/showthread.php?t=716398

Revision history for this message
Dustin Kirkland  (kirkland) wrote : Re: [Bug 290885] Re: SRU: Backport of Boot Degraded RAID functionality from Intrepid to Hardy

On Thu, Oct 30, 2008 at 3:56 PM, Bill Smith <email address hidden> wrote:
> I'm trying to test it on my Ubuntu 8.04.1 setup but I'm not sure what's
> the appropriate method for updating GRUB. You mention running 'grub-
> install' against the RAID device (e.g. 'md0') but I had previously run
> it against the individual drives (e.g. 'sda' and 'sdb'). Does it
> matter?

Any of those should work.

You can individually install each disk independently with:
 # grub-install /dev/sda
 # grub-install /dev/sdb

Or, more conveniently, you can install to the md device, and let the
new code in grub-install sort it out (recommended):
 # grub-install /dev/md0

> Also, does it matter if we've previously tried to patch the 'initramfs' script as previously suggested? In my case, it looks like I have not (on my test system, at least). My main system *does* have the modified script, etc, as outlined on my forum posting circa Ubuntu 7.10,
> http://ubuntuforums.org/showthread.php?t=716398

So let me stress again that these updates should only be applied to a
dev/test system and tested there.

And yes, it would probably not be a good idea to apply these to a
system that was manually patched. That would invalidate the testing
that I'm looking for...applying these updates to a stock, up-to-date
Hardy test system.

Thanks for volunteering, Bill!

:-Dustin

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Uploaded updated grub-installer package to my PPA, containing the backported fixes for installing grub to each disk in an array providing /.

 * https://launchpad.net/~kirkland/+archive

:-Dustin

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Stable Release Update requested for:
 * grub-installer
 * grub
 * mdadm
 * initramfs-tools

Per:
 * https://wiki.ubuntu.com/StableReleaseUpdates

 1) This set of bugs affects any Ubuntu 8.04 LTS user with / or /boot on a RAID1 device. RAID is intended to provide both redundancy of the data on those filesystems, as well as failover reliability. 8.04 does not currently handle the failover case very well, as a system with a degraded RAID will not boot, leaving the system in the initramfs. This is a serious issue, and has yielded a very noisy contingent of Ubuntu users asking for this fix on their LTS servers. The remedy to this problem is spread across some 4 separate packages and involves modified code in both the installer and runtime OS. The installer code (new grub-installer udeb) would need to be included in the 8.04.2 install media.

 2) The code changes were surgically backported from Intrepid, where they have been tested extensively over the last 5 months. A full design specification is available at:
 * https://wiki.ubuntu.com/BootDegradedRaid
Specifically, the changes involve:
 * grub-installer - in the installer, iterate over each disk in an md device providing /boot, and write grub to each
 * grub - in a running system, enhance grub-install to operate properly on a /dev/md device, or each disk independently
 * mdadm - add failure hooks to the initramfs to handle a missing disk, prompting the user if they want to boot the degraded RAID, or obeying configured options; configure those options via debconf; add such debconf handling to the installer
 * initramfs-tools - add framework bits for handling mountroot failures and attempt recovery steps

 3) I have attached a gzipped tarball of all 4 patches.

 4) TEST CASE: Testing this is a rather long, arduous process. I have documented those processes in detail at https://wiki.ubuntu.com/BootDegradedRaid.

 5) In terms of analyzing regression potential, I would probably need to enlist the assistance of someone on the platform/foundations team. I think the most dangerous changes are the ones in initramfs-tools, in terms of affecting others. I looked for callers that might reasonably be affected, and didn't immediately find any.

These packages are currently available for testing in my PPA. I have functionally verified that they do the right thing on my Hardy vm's. I believe that they're ready for review by Colin, Evan, Luke, and/or Kees, and then upload to hardy-proposed.

:-Dustin

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Per comments from Kees, I will upload each of the 4 patches independently.

Also, I will need to update the version of each package to use a "dot". More patches coming.

:-Dustin

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Attaching the grub patch.

Requesting sponsorship to hardy-proposed.

:-Dustin

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Attaching the grub-installer patch.

Requesting sponsorship to hardy-proposed.

:-Dustin

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Attaching the initramfs-tools patch.

Requesting sponsorship to hardy-proposed.

:-Dustin

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Attaching the mdadm patch.

Requesting sponsorship to hardy-proposed.

:-Dustin

Revision history for this message
Bill Smith (bsmith1051) wrote :

OK, so the test seemed to go ok (aside from my BIOS trying to boot from my 'data' drives).

OBSERVATIONS
- As you suggested I ran the update command on just the boot array,
 > sudo grub-install /dev/md0
  It correctly identified the two physical drives and updated them both
- The initial boot (still both drives) was fine. But then it hung on shutdown?
  Tried another boot-and-shutdown, and this time it shut down OK.
- 1st test boot (with drive #2 removed). There was the expected 2-min delay
  before prompting to boot degraded. The prompt timed-out before I could finish
  reading the screen so I unintentionally did the "Answer no" test?
  Rebooted, waited, then entered Yes and it booted normally.
- 2nd test boot (with drive #1 removed). Same behavior as previous test.
- 3rd test boot (both drives reconnected). Booted on drive #1 and was able to use
  'mdadm --add' command(s) to restore both arrays successfully.

COMMENTS
- is there a way to simplify the message screen? Maybe add section headers so that
  you can immediately see what each section is about.
- does the prompt need to have a timer?
- is there a single command that could be entered at the Busybox prompt to
  manually initiate the proper boot-as-degraded script? If so, can the system display it after
  you select 'No' (or time-out) ?
- why doesn't Partition Editor (Gparted) recognize 'md' devices? Probably unrelated to this backport
  but you seem like the person to ask! On this 8.04.1+ system my Gparted v0.3.5 says,
  "kernel is unable to re-read the partitiontables on /dev/md0"
  If you're not supposed to use Gparted to edit raid devices, it would be nice if it directly told you so
  and maybe offered to let you view them in read-only mode.
- There's a typo in one of the modules. When I shutdown I saw a command-line message,
  "Network Manager: caught terminiation"

Finally, after I ran these tests (yesterday and today), I was prompted by Update Manager that there was another update for 'initramfs-tools' and 'mdadm' -- was that from you? I didn't want to install them until I knew they weren't a wrong version. They didn't have any Description or Version info in Update Manager, but Synaptic identified them:
- initramfs-tools 0.85eubuntu39.3~ppa4
- mdadm 2.6.3+200709292116+4450e59-3ubuntu4~ppa4

Are they new updates that you want me to re-run the test with, after downloading them?

Revision history for this message
Martin Pitt (pitti) wrote :

grub: I fixed the changelog to say "backported from intrepid" (not hardy), and removed the spurious manpage header diffs. The patch itself looks good and reasonably isolated to me.

It only affects grub-install, so regression potential is low enough to not suddenly break existing installations. It should properly be tested on RAID and non-RAID machines, though.

Uploaded to the queue.

Changed in grub:
milestone: none → ubuntu-8.04.2
status: New → In Progress
Revision history for this message
Martin Pitt (pitti) wrote :

grub fix backported from intrepid, closing jaunty task.

Changed in grub:
milestone: ubuntu-8.04.2 → none
status: In Progress → Fix Released
Revision history for this message
Martin Pitt (pitti) wrote :

Reuploaded grub with bug number in changelog, rejected previous upload.

Revision history for this message
Martin Pitt (pitti) wrote :

grub-installer looks fine, too, uploaded to queue.

Changed in grub-installer:
assignee: nobody → kirkland
milestone: none → ubuntu-8.04.2
status: New → In Progress
assignee: kirkland → nobody
milestone: ubuntu-8.04.2 → none
status: In Progress → Fix Released
Changed in grub:
assignee: nobody → kirkland
assignee: kirkland → nobody
Revision history for this message
Martin Pitt (pitti) wrote :

initramfs-tools:
 - patch changes previous changelog, and misses bug number
 - add_mountroot_fail_hook(): Changes behaviour of function. Please keep original function and add a add_mountroot_fail_hook_d() or so.
 - panic(): Why the chvt 1? Boot messages are usually on VT8, and this changes behaviour of an existing function.
 - panic(): Removes calling failure hooks without adding a call to the new try_failure_hooks(). Looks fishy and changes existing behaviour.
 - try_failure_hooks(): Unlike the code removed from panic(), this doesn't check if /tmp/mountroot-fail-hooks.d/ actually exists. Thus the script isn't "set -e" safe any more. Are any scripts in initramfs-tools relying on that and using set -e?
 - try_failure_hooks(): Why stop usplash? Shouldn't code just use usplash_write to output text? (This is just a nitpick, and I'm okay with doing it that way)

Changed in initramfs-tools:
assignee: kirkland → nobody
milestone: ubuntu-8.04.2 → none
status: In Progress → Fix Released
assignee: nobody → kirkland
importance: Undecided → Wishlist
milestone: none → ubuntu-8.04.2
status: New → Incomplete
Revision history for this message
Martin Pitt (pitti) wrote :

mdadm:
 - changelog missing bug number
 - debian/initramfs/init-premount: needs new name for add_mountroot_fail_hook_d() (see above)
 - OK otherwise, although large patch which needs thorough testing

Verification should include dpkg-reconfigure, booting in both modes with degraded and non-degraded array.

Changed in mdadm:
assignee: kirkland → nobody
milestone: ubuntu-8.04.2 → none
status: In Progress → Fix Released
assignee: nobody → kirkland
milestone: none → ubuntu-8.04.2
status: New → Incomplete
Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Updated patch attached for initramfs-tools.

> initramfs-tools:
> - patch changes previous changelog, and misses bug number

Fixed.

> - add_mountroot_fail_hook(): Changes behaviour of function. Please keep original function and add
> add_mountroot_fail_hook_d() or so.

Done, fixed.

> - panic(): Why the chvt 1? Boot messages are usually on VT8, and this changes behaviour of an
> existing function.

Removed.

> - panic(): Removes calling failure hooks without adding a call to the new try_failure_hooks(). Looks
> fishy and changes existing behaviour.

panic() is now unmodified. In the degraded-RAID case, the hooks inside it will simply have no effect. The degraded-RAID code in mdadm will call the new add_mountroot_fail_hook_d() and try_failure_hooks() functions, which should solve matters.

> - try_failure_hooks(): Unlike the code removed from panic(), this doesn't check if /tmp/mountroot-
> fail-hooks.d/ actually exists. Thus the script isn't "set -e" safe any more. Are any scripts in
> initramfs-tools relying on that and using set -e?

I added a -d check for existence of the directory.

> - try_failure_hooks(): Why stop usplash? Shouldn't code just use usplash_write to output text? (This
> is just a nitpick, and I'm okay with doing it that way)

There is some code that shows the state of available md devices using mdadm, in order to help the administrator make an informed decision as to whether or not they want to boot degraded. In order for this mdadm --detail output to be visible, as well as the [y/N] prompt, we kill usplash and switch to vt 1.
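Roughly, those two helpers have the following shape (this is only an illustrative sketch written from the description above -- the attached patch is authoritative, and the registration mechanism in particular is simplified):

  # register a failure hook under /tmp/mountroot-fail-hooks.d/ using the
  # name passed by the caller (registration mechanism simplified here)
  add_mountroot_fail_hook_d()
  {
      mkdir -p /tmp/mountroot-fail-hooks.d
      cp "$0" "/tmp/mountroot-fail-hooks.d/$1"
  }

  # on mountroot failure: stop usplash and switch to a text console so the
  # mdadm --detail output and the [y/N] prompt are visible, then run hooks
  try_failure_hooks()
  {
      if [ -x /sbin/usplash_write ]; then
          /sbin/usplash_write "QUIT"
      fi
      chvt 1
      if [ -d /tmp/mountroot-fail-hooks.d ]; then
          for hook in /tmp/mountroot-fail-hooks.d/*; do
              [ -x "$hook" ] && "$hook"
          done
      fi
  }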

---

Thanks for the careful review, Martin. I think the attached patch should be far cleaner.

I've given it a few functional test runs, in conjunction with the mdadm patch I'm about to attach. The boot-degraded functionality works as expected in the initramfs.

:-Dustin

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Updated patch attached for mdadm.

> mdadm:
> - changelog missing bug number

Fixed.

> - debian/initramfs/init-premount: needs new name for add_mountroot_fail_hook_d() (see above)

Adjusted accordingly.

> - OK otherwise, although large patch which needs thorough testing

Agreed.

> Verification should include dpkg-reconfigure, booting in both modes with degraded and
> non-degraded array.

Tested and verified; it works in runtime. It will need to be tested again, in the installer, once the mdadm udeb makes it into some test 8.04.2 install media.

:-Dustin

Changed in initramfs-tools:
status: Incomplete → In Progress
Changed in mdadm:
status: Incomplete → In Progress
Revision history for this message
Martin Pitt (pitti) wrote :

initramfs-tools uploaded.

For mdadm:

-add_mountroot_fail_hook
+add_mountroot_fail_hook_d "10-mdadm"

Where does 10-mdadm come from? I don't see it anywhere in mdadm, the patch, or initramfs-tools.

Revision history for this message
Martin Pitt (pitti) wrote :

Ah, nevermind. That's the target file name. Uploaded mdadm.

Changed in grub:
milestone: ubuntu-8.04.2 → none
status: In Progress → Fix Committed
Changed in grub-installer:
milestone: ubuntu-8.04.2 → none
status: In Progress → Fix Committed
Changed in initramfs-tools:
milestone: ubuntu-8.04.2 → none
status: In Progress → Fix Committed
Changed in mdadm:
milestone: ubuntu-8.04.2 → none
status: In Progress → Fix Committed
Revision history for this message
Martin Pitt (pitti) wrote :

Accepted into hardy-proposed; please test and give feedback here. Please see https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how to enable and use -proposed. Thank you in advance!

Revision history for this message
Bill Smith (bsmith1051) wrote :

Synaptic is still showing me the same 2 updates ("ppa4"). Should the version numbers have changed with these latest changes? Also, I'm still waiting for feedback on my testing and posted questions.

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

On Fri, Nov 7, 2008 at 6:21 PM, Bill Smith <email address hidden> wrote:
> Synaptic is still showing me the same 2 updates ("ppa4"). Should the
> version numbers have changed with these latest changes? Also, I'm still
> waiting for feedback on my testing and posted questions.

Hey Bill-

Sorry that I haven't gotten to your other questions/feedback, I will
do that soon.

For now, please remove my ppa from your /etc/apt/sources.list, and add
"hardy-proposed", as Martin has uploaded all 4 of these updated
packages there.

They will sit there for some time, undergoing testing before being
sync'd out to "hardy-updates" for general consumption.

Thanks,
:-Dustin

Revision history for this message
Martin Pitt (pitti) wrote : Re: [Bug 290885] Re: SRU: Backport of Boot Degraded RAID functionality from Intrepid to Hardy

I bumped the build priority for those packages; they should become
available at archive.ubuntu.com in a few hours.

Revision history for this message
Dustin Kirkland  (kirkland) wrote : Re: [Bug 290885] Re: SRU: Backport of Boot Degraded RAID functionality from Intrepid to Hardy

On Thu, Nov 6, 2008 at 1:07 AM, Bill Smith <email address hidden> wrote:
> OK, so the test seemed to go ok (aside from my BIOS trying to boot from
> my 'data' drives).

Great, your efforts here, Bill, are greatly appreciated, and were
instrumental in helping us get these patches from my PPA and into
hardy-proposed.

> OBSERVATIONS
> - As you suggested I ran the update command on just the boot array,
> > sudo grub-install /dev/md0
> It correctly identified the two physical drives and updated them both
> - The initial boot (still both drives) was fine. But then it hung on shutdown?

Hung on shutdown? Hmm, that's probably a separate bug. If you can
reproduce this regularly, please let me know and we can look at filing
another bug.

> Tried another boot-and-shutdown and this time it shutdown ok.

This time it worked? Is it reproducible then?

> - 1st test boot (with drive #2 removed). There was the expected 2-min delay
> before prompting to boot degraded. The prompt timed-out before I could finish
> reading the screen so I unintentionally did the "Answer no" test?

That timeout is set to 15 seconds ... I suppose we can lengthen that,
if it's really necessary.

> Rebooted, waited, then entered Yes and it booted normally.
> - 2nd test boot (with drive #1 removed). Same behavior as previous test.
> - 3rd test boot (both drives reconnected). Booted on drive #1 and was able to use
> 'mdadm --add' command(s) to restore both arrays successfully.

Great! Thanks.

> COMMENTS
> - is there a way to simplify the message screen? Maybe add section headers so that
> you can immediately see what each section is about.

Well, yes and no... I believe this screen is significantly improved
in Intrepid. However, we are very limited as to what we can do with a
previously existing release, such as Hardy. Specifically, we need to
fix the current bug (booting degraded raid) and only the current bug
without breaking or affecting anything else. I took this to mean
leaving the screens and messages alone. My apologies that I can't
really do more.

> - does the prompt need to have a timer?

Yes, absolutely. You can configure your machine to either boot
degraded, or not, with dpkg-reconfigure mdadm. The default behavior
is BOOT_DEGRADED=no, which matches the existing behavior of Ubuntu
Hardy and before.

This prompt allows you to select a different behavior, the first time
you boot after a raid degrade event. If you don't make a selection,
it will obey whatever you have set in your mdadm configuration.
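Concretely, that looks something like this (the conf.d path and value spelling below are the ones testers report later in this bug, so treat them as an approximation):

   $ sudo dpkg-reconfigure mdadm              # answer the boot-degraded question
   $ cat /etc/initramfs-tools/conf.d/mdadm    # static setting read by the initramfs
   BOOT_DEGRADED=true
   $ sudo update-initramfs -u                 # rebuild if you edit the file by hand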

> - is there a single command that could be entered at the Busybox prompt to
> manually initiate the proper boot-as-degraded script? If so, can the system display it after
> you select 'No' (or time-out) ?

Hmm, yes, but it's not that simple. That's what my patches do for
you -- handle that somewhat complex set of operations.

> - why doesn't Partition Editor (Gparted) recognize 'md' devices? Probably unrelated to this backport

No idea. Yes, unrelated to this patchset.

> but you seem like the person to ask! On this 8.04.1+ system my Gparted v0.3.5 says,
> "kernel is unable to re-read the partitiontables on /dev/md0"

I usually use fdisk /dev/md0....

Read more...

Revision history for this message
JHR (jhroyer) wrote :

Hi,

I use RAID1 + LVM; Hardy uses LILO to boot and not GRUB if you have this configuration.

Can you confirm that the patch you're working on will also work in this configuration?

Thanks.

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

JHR wrote:
> I use RAID1 + LVM; Hardy uses LILO to boot and not GRUB if you have this
> configuration.
>
> Can you confirm that the patch you're working on will also work in
> this configuration?

LILO was unaffected by these changes -- it already contained the
functionality I added to GRUB.

How is your system partitioned? What is your RAID setup? What is
your LVM setup? I can test in a kvm.

--
:-Dustin

Revision history for this message
JHR (jhroyer) wrote :

> How is your system partitioned? What is your RAID setup? What is
> your LVM setup? I can test in a kvm.

Thanks! Here is the info:

# fdisk -l /dev/sda

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0007f16b

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *            1       60801   488384001   fd  Linux raid autodetect

# fdisk -l /dev/sdb

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000f3f1c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *            1       60801   488384001   fd  Linux raid autodetect

# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      488383936 blocks [2/2] [UU]

unused devices: <none>

# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  maing   1   2   0 wz--n- 465.76G 80.76G

# pvs
  PV       VG    Fmt  Attr PSize   PFree
  /dev/md0 maing lvm2 a-   465.76G 80.76G

# lvs
  LV   VG    Attr   LSize   Origin Snap%  Move Log Copy%
  root maing -wi-ao 375.00G
  swap maing -wi-ao  10.00G

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

JHR:

Thanks, good stuff.

Also, did you install this from the Ubuntu Hardy alternate/server iso,
or some other mechanism?

:-Dustin

Revision history for this message
JHR (jhroyer) wrote :

>Also, did you install this from the Ubuntu Hardy alternate/server iso,
>or some other mechanism?

I used the 8.04 LTS server 32-bit ISO.

Revision history for this message
Paul Elliott (omahn) wrote :

I appear to have found a small issue. On my test machine (a physical x86 server), once I've configured mdadm to boot degraded arrays automatically, it seems impossible to change it back. These are the steps I took:

1. Installed 8.04.1 x86 to a physical box with 2 SCSI disks in a Raid-1 mirror.
2. Removed first disk.
3. Booted the server. The server pauses and displays the degraded array warning and asks if I wish to continue, as intended.
4. Continued the boot.
5. Ran 'dpkg-reconfigure mdadm' and enabled automatic booting from degraded arrays.
6. Rebooted the server, server reboots fine and doesn't pause when it detects the degraded array.
7. Ran 'dpkg-reconfigure mdadm' again and disabled automatic booting from degraded arrays.
8. Rebooted the server; the server reboots and continues to boot automatically, even though the array is still degraded.

Can anyone else replicate?

Revision history for this message
Paul Elliott (omahn) wrote :

Actually, I can replicate :-)

I've just recreated the test case noted in my comment above in a VMware virtual machine with the same results. It appears the debconf DB is correct but the change doesn't make it through to the initramfs, although the initramfs does get rebuilt:

root@pristine804:/home/pre500# debconf-show mdadm
* mdadm/boot_degraded: false
<snip>
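If it helps, one way to double-check what actually landed in the rebuilt initramfs is to unpack it and grep (the grep targets here are a guess on my part):

   $ mkdir /tmp/initrd && cd /tmp/initrd
   $ zcat /boot/initrd.img-$(uname -r) | cpio -id 2>/dev/null
   $ grep -ri BOOT_DEGRADED conf/ scripts/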

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Paul-

I think I understand the wrinkle you're seeing here...

mdadm will only fail to construct a "newly" degraded array.

So the *first* time you boot with a missing disk, mdadm expects the
RAID to be fully operational, notices it's missing a disk, and the new
code we have in the initramfs takes over, checking the configuration
value in that etc file, and interactively prompting you (Do you want
to boot degraded?).

If you do not boot, then mdadm doesn't flag this array as running
degraded, and the next time you reboot, you will see the same
question, about a degraded raid.

If you do choose to boot the raid degraded, mdadm will mark the array,
and "degraded" is now the expected mode of operation. Subsequent
boots will proceed, since you have chosen to boot degraded.

To continue testing, you can reboot your test machine with the second
disk present. It will boot into the degraded array, even with the
second disk (as mdadm doesn't know the state of this additional disk).
 And then you can add the new disk back to the array with mdadm
/dev/md0 --add /dev/sdb1 or some such. You'll want to wait until it's
fully sync'd again (watch -n1 cat /proc/mdstat). Reboot, and you
should boot with both disks in the array. Disconnect one again; this
will create a new degraded RAID event, and on reboot the initramfs
will see that it's missing a disk it expects.

We pondered different verbiage during the development cycle, like "newly
degraded RAID", but decided that was too wordy. A RAID admin should
understand (or come to understand) this workflow.

:-Dustin

Revision history for this message
Bill Smith (bsmith1051) wrote :

"A RAID admin should understand (or come to understand) this workflow."

Please don't say that. Is there some formal course of study that's required before implementing sw RAID? No, of course not. Personally, I am a network manager with decades of experience and various certs, but that doesn't mean I automatically understand what's happening here, or that I shouldn't try to use sw RAID. If there's a chance for confusion and a message that can clarify things, please make the effort to improve the wording.

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

My most sincere apologies are offered for any offense taken...

This should absolutely be handled in the official documentation, most
likely in the Ubuntu Server Guide.

:-Dustin

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Adding a task to update the Ubuntu Server Guide for Hardy/Intrepid to clearly explain the new degraded RAID behavior. Specifically, we need to clearly explain that there's a significant difference to mdadm between a "newly degraded RAID event", and subsequent boots on "a RAID that is known to be degraded".

:-Dustin

Changed in ubuntu-docs:
status: New → Triaged
Revision history for this message
Paul Elliott (omahn) wrote :

Here's my findings after testing the packages (grub, grub-installer, initramfs-tools and mdadm) from hardy-proposed.

My testing was carried out on both virtual and physical hardware. The testing on a virtual machine was under VMware using two SCSI disks connected to a LSI Logic controller. The physical test machine was an Intel SuperMicro based 1U server with two SCSI disks also connected to a LSI Logic controller.

I didn't follow the test case exactly as in your first comment; instead of allowing the disks to become out of sync, I resynchronised the disks (with mdadm --add) after steps 6 and 7, although I don't believe this invalidates my results in any way.

In addition to the test case steps from your first comment I also tested the different 'degraded boot' options (kernel/interactive/static) listed on the Wiki page found here:

https://wiki.ubuntu.com/BootDegradedRaid

I'm delighted to report that everything works as intended.

I have the following comments:

1. I didn't experience any hangs on shutdown as experienced by Bill. It looks like the issue Bill observed is unrelated to these updates.
2. The delay on boot of a newly degraded array is too long and provides no output to the console. Most sysadmins (including myself on the first boot!) would give up waiting long before the degraded array message is displayed and would simply hit the reset button. A 'spinner' or message indicating the reason for the delay would be helpful to show that the server hasn't simply hung.
3. Conversely, the timeout at the degraded array prompt is too short. After waiting the 3 minutes for the message to actually appear, it seems the delay before the prompt falls through to the default option is too short. I simply missed it on a couple of occasions as I did other stuff while waiting for the 3 minutes to expire.

Other than that, it all looks good to me. Thanks to Dustin for his work in this area, it's appreciated.

Revision history for this message
Kees Cook (kees) wrote :

The 3 minute delay is to handle older configurations (generally clusters), which we needed to support for the LTS. As a result, the 3 minute delay needs to stay. It was reduced in Intrepid to 30 seconds. For people that want to reduce it, they can add it to the grub config with e.g. "rootdelay=30".
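On Hardy that means grub legacy, so roughly something like this (the kernel stanza below is only an example, not taken from a real menu.lst):

   # in /boot/grub/menu.lst, append rootdelay=30 to the kernel line, e.g.:
   kernel /boot/vmlinuz-2.6.24-22-server root=/dev/md0 ro quiet splash rootdelay=30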

Revision history for this message
Stefano Garavaglia (alterlaunchpad) wrote :

Should this fix also work for a degraded RAID that is not on the boot partition?

I have a machine with non-raid root partition (indeed it's hw raid) and some extra software raid space which is in fstab to be mounted on pass 2. I'm doing some tests with this setup in a VM.

I tried with stock 8.04.1, and if I remove a disk it halts during the boot with "file system check failed", because
/dev/md0 is in an inactive state; so even though it's not a problem with root on RAID, the boot is blocked as well.

To let the system boot I had to manually mdadm --run /dev/md0, and later boots are ok even if degraded.

I tried with the proposed packages above, but the behavior remained the same even after a dpkg-reconfigure mdadm to
change boot_degraded to let it boot anyway. Also I didn't get any prompt regarding this question.

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Stefano-

No, this is only intended to solve the situation where you need to
boot your system from a degraded RAID.

:-Dustin

Revision history for this message
Stefano Garavaglia (alterlaunchpad) wrote :

Dustin-

I think the problem is closely related to what this is fixing. If a server refuses to boot when a single disk in a RAID1 array fails, the problem as perceived by the user is much the same, even if that array is not the one the system boots from but just another filesystem. The server is stopped in just the same way, so it would be nice to have that related problem fixed as well.

Anyway, if you think it's a different problem, can you suggest the best way to get that problem fixed as well?
Maybe filing another bug against mdadm?

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

On Thu, Nov 20, 2008 at 11:33 AM, Stefano Garavaglia
<email address hidden> wrote:
> I have a machine with non-raid root partition (indeed it's hw raid) and
> some extra software raid space which is in fstab to be mounted on pass
> 2.

See http://manpages.ubuntu.com/manpages/hardy/en/man5/fstab.html
regarding "pass 2".

This field does not define when a filesystem is mounted, but rather if
and when the fsck should be performed.

> I tried with stock 8.04.1, and if I remove a disk it halts during the boot with "file system check failed", because
> /dev/md0 is in an inactive state; so even though it's not a problem with root on RAID, the boot is blocked as well.
>
> To let the system boot I had to manually mdadm --run /dev/md0, and later
> boots are ok even if degraded.

Right; or you can set fs_passno to 0, which removes passing the
filesystem check as a boot requirement, if that is what you want.
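For reference, fs_passno is the sixth field in /etc/fstab; a line like this (the mount point and options are just an example) would skip the boot-time fsck for that filesystem:

  # <device>  <mount point>  <type>  <options>  <dump>  <pass>
  /dev/md0    /srv/data      ext3    defaults   0       0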

:-Dustin

Revision history for this message
Stefano Garavaglia (alterlaunchpad) wrote :

Dustin Kirkland wrote:

> Right, or you can set the fs_passno 0, which would remove passing the
> filesystem check as a boot requirement, if this is what you want.

No, that's not what I want to do; I just want the system to boot (or be configurable to boot) with the array in degraded mode, regardless of whether it's the boot partition or another partition.

Reading your page I see this problem is already in bug https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/259145

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Ah, okay. Yes, you're exactly right, you're speaking of Bug #259145.

The current stack of SRU's is not claiming to solve that bug, while
that bug is valid, confirmed and important. It will be addressed
separately.

:-Dustin

Revision history for this message
Steve Beattie (sbeattie) wrote :

I'm working through verification of this fix, and one thing that I've come across is that if the RAID was set up using the older version of grub/grub-installer (e.g. off of 8.04.1 media), grub-install is not re-invoked, and so the second disk still does not have a grub stage 1 in its MBR. Thus, if the first disk of a RAID1 set fails, the system will still not boot at all.

In talking with Dustin about this on IRC, he suggested that it's probably best handled as a documentation issue. I think it would be useful to at least warn via debconf/update-notifier if /boot is on an md device and grub is being updated from a version that did not support "grub-install /dev/mdX" properly.

Also, for testing the update to grub-installer, it would be useful if we could do a one-off spin of the hardy images for testing in prep for 8.04.2.

(That's the only issue I've come across so far; now that I've manually run 'grub-install /dev/md0', I'm able to boot in degraded mode off of either disk).

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Martin-

Would would you think of adding a bit to the grub postinst that
detected if you were upgrading from a certain version, and if you have
root (or /boot) on a RAID device, and if so, run grub-install on that
device?

:-Dustin

Revision history for this message
Martin Pitt (pitti) wrote : Re: [Bug 290885] Re: SRU: Backport of Boot Degraded RAID functionality from Intrepid to Hardy

Dustin Kirkland [2008-12-04 12:51 -0000]:
> What would you think of adding a bit to the grub postinst that
> detected if you were upgrading from a certain version, and if you have
> root (or /boot) on a RAID device, and if so, run grub-install on that
> device?

That doesn't seem to be a problem which got introduced with these
patches? I. e. if you didn't have grub on the "other" device, it
formerly would fail to boot, too?

I am a bit nervous about automatically changing the boot sector of
already installed systems, TBH. There might be cases where people
explicitly configured it that way. So adding a postinst snippet to
detect this situation is fine, of course.

Or is the problem that configuration files actually *say* "please
install grub on all devices", but we just didn't? In that case this
change would be okay for me.

Thanks,

Martin

--
Martin Pitt | http://www.piware.de
Ubuntu Developer (www.ubuntu.com) | Debian Developer (www.debian.org)

Revision history for this message
Steve Beattie (sbeattie) wrote :

On Thu, Dec 04, 2008 at 08:22:32PM -0000, Martin Pitt wrote:
> Dustin Kirkland [2008-12-04 12:51 -0000]:
> > What would you think of adding a bit to the grub postinst that
> > detected if you were upgrading from a certain version, and if you have
> > root (or /boot) on a RAID device, and if so, run grub-install on that
> > device?
>
> That doesn't seem to be a problem which got introduced with these
> patches? I. e. if you didn't have grub on the "other" device, it
> formerly would fail to boot, too?

No, it wasn't introduced in this fix, but it is getting fixed on new
(8.04.2) installations; it's just that existing installations will be
left with potentially unbootable systems in the event that the wrong
disk fails. It's just somewhat specious to claim that degraded raid boot
is fixed if it leaves users with a russian roulette situation of it not
working in 50% of the situations due to inadequacies in prior versions
of our installer.

> I am a bit nervous about automatically changing the boot sector of
> already installed systems, TBH. There might be cases where people
> explicitly configured it that way. So adding a postinst snippet to
> detect this situation is fine, of course.

I too am leery of changing the boot sector of already-installed systems
without admin intervention. This is why I was suggesting detecting if
/boot is on an md device and using the normal notification mechanisms at
our disposal. I'm afraid that if we don't do at least that, then admins
may not be aware of the potential situation.

While the issue is mentioned in the Software Raid FAQ at
https://help.ubuntu.com/community/Installation/SoftwareRAID, it's not
mentioned anywhere in the serverguide. I think relying on the help site
documentation is insufficient because you have to seek it out, and since
we don't issue update advisories for non-security updates, we're left
with either postinst notification methods or relying on the changelogs,
which are intended more for developer consumption than users.

Thanks.

--
Steve Beattie
<email address hidden>
http://NxNW.org/~steve/

Revision history for this message
Dustin Kirkland  (kirkland) wrote : Re: [Bug 290885] Re: SRU: Backport of Boot Degraded RAID functionality from Intrepid to Hardy

Martin-

I consciously did *not* edit the postinst to do the grub install, in
order to keep with the rule of "least surprise".

I agree with Steve's comments that it would be quite nice if this
could be done automatically, but I don't think messing with every RAID
user's MBR is something that we should/could do automatically.

In any other circumstance, upgrading the grub package installs new
binaries to the system, but it doesn't reinstall the bootloader.

One thing I've learned over the last 6 months developing, testing, and
debugging this work is that there are some varied and unique RAID
setups out there. It's impossible to catch all of them.

As I said before, I think this part of the "enable my Hardy system for
booting degraded RAID" should be handled via documentation. There is
a server-guide task attached to this bug. We need to add a bit there.
 Probably something in the Community documentation would be good. And
I can certainly blog about it.

Beyond that, code-wise, perhaps we could emit a warning in the
grub-install postinst, that detects if /boot is on a RAID, and
recommend that the user investigate the situation and perhaps run
grub-install on the device. Steve mentioned update-notifier, which
might be interesting on some desktop systems running RAID, but it's
not present on the server.
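For illustration, the sort of check I have in mind would look roughly like this (just a sketch, not a proposed patch; the wording and detection are placeholders):

  boot_dev=$(df -P /boot | awk 'NR==2 {print $1}')
  case "$boot_dev" in
      /dev/md*)
          echo "Note: /boot is on RAID device $boot_dev." >&2
          echo "If this system was installed before 8.04.2, you may want to run" >&2
          echo "'grub-install $boot_dev' so that every member disk has a boot sector." >&2
          ;;
  esac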

:-Dustin

Revision history for this message
davekempe (dave-solutionsfirst) wrote : Re: [Bug 290885] Re: SRU: Backport of Boot Degraded RAID functionality from Intrepid to Hardy

Dustin Kirkland wrote:
> Beyond that, code-wise, perhaps we could emit a warning in the
> grub-install postinst, that detects if /boot is on a RAID, and
> recommend that the user investigate the situation and perhaps run
> grub-install on the device. Steve mentioned update-notifier, which
> might be interesting on some desktop systems running RAID, but it's
> not present on the server.
>
>
Is there any way to detect if grub is actually installed on the mbr
correctly? Like without actually doing it?
I personally get bitten by the 'grub silently fails to install on all
the drives in a /boot RAID array' from time to time.
It's a pain in the arse, as it requires booting from a CD to fix it, etc.
And as for weird setups, we do have one machine with 10 drives in RAID1
for /boot.
My preference would be that even if we have to create a new package
called 'boot-my-server-at-all-costs' or something, I would install
that as well on my machines.
thanks

Dave

Revision history for this message
Steve Beattie (sbeattie) wrote :

I've walked through the steps at http://wiki.ubuntu.com/BootDegradedRaid and can confirm that the degraded raid boot options work (modulo the issue I raised around grub/MBR). The dpkg-reconfigure steps, the kernel command line options, and the /etc/initramfs/conf.d/mdadm settings all seem to work correctly. After I ran the grub-install, I was able to boot in degraded mode or not, depending on configuration, to either disk in the raid setup. I was able to re-add partitions back into the degraded arrays with mdadm, as well as fail, remove, and reconfigure devices. I didn't notice any regressions.

My testing was in a VirtualBox VM, so I was unable to hot-add an entire disk to emulate step 13 on the wiki page.

Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub - 0.97-29ubuntu21.1

---------------
grub (0.97-29ubuntu21.1) hardy-proposed; urgency=low

  * debian/patches/grub-install_better_raid.diff: backported from Intrepid;
    install grub on multiple disks in a RAID. (LP: #290885)
  * debian/patches/00list: updated accordingly

 -- Dustin Kirkland <email address hidden> Tue, 04 Nov 2008 14:25:35 -0600

Changed in grub:
status: Fix Committed → Fix Released
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package grub-installer - 1.27ubuntu8.1

---------------
grub-installer (1.27ubuntu8.1) hardy-proposed; urgency=low

  * Backport fixes for booting degraded software RAID (LP: #290885).
  * grub-installer: determine if installing to a /dev/md RAID device, and
    iteratively write grub to each disk in the array.

 -- Dustin Kirkland <email address hidden> Tue, 04 Nov 2008 14:28:31 -0600

Changed in grub-installer:
status: Fix Committed → Fix Released
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package initramfs-tools - 0.85eubuntu39.3

---------------
initramfs-tools (0.85eubuntu39.3) hardy-proposed; urgency=low

  * Functionality backported from Intrepid to Hardy to support booting
    degraded RAID, LP: #290885.
  * scripts/functions: Adjust the mountroot failure hooks framework to
    that used in Intrepid, renaming the function so as not to break other
    callers in Hardy
  * scripts/local: Add get_fstype() and root_missing() helper functions
    and fix the root_missing loop

 -- Dustin Kirkland <email address hidden> Thu, 06 Nov 2008 22:11:27 +0100

Changed in initramfs-tools:
status: Fix Committed → Fix Released
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package mdadm - 2.6.3+200709292116+4450e59-3ubuntu3.1

---------------
mdadm (2.6.3+200709292116+4450e59-3ubuntu3.1) hardy-proposed; urgency=low

  * Fixes for LP: #290885, backported from Intrepid to Hardy
  * Backport functionality to enable booting degraded RAID from Intrepid to
    Hardy
  * debian/control: these fixes require initramfs-tools >= 0.85eubuntu39.3
  * debian/initramfs/init-premount: enhance the init handling to allow for
    booting a degraded RAID, and add the appropriate fail hook
  * debian/mdadm-udeb.dirs, debian/mdadm.config, debian/mdadm.postinst,
    debian/po/*, debian/install-rc, debian/mdadm-udeb.templates:
    partman/install/debconf boot-degraded-raid configurability
  * check.d/root_on_raid, check.d/_numbers: installer script to determine if /
    or /boot is on a RAID device

 -- Dustin Kirkland <email address hidden> Thu, 06 Nov 2008 22:15:08 +0100

Changed in mdadm:
status: Fix Committed → Fix Released
Revision history for this message
Bill Smith (bsmith1051) wrote :

I've just tried this on my other Ubuntu system and it did not work.

ORIGINAL SETUP
- initially installed with Ubuntu 7.10 with manual fix to initramfs
- two SATA drives in RAID-1 mirror
- confirmed that I could boot from 1 drive
- updated to 8.04.1
- GRUB is still installed to both drives but the initramfs change has been removed
- system will no longer boot on just 1 drive

TEST PROCEDURE
- added Hardy-Proposed to my repositories using the checkbox in Synaptic
- installed all updates
- confirmed reboot ok on both drives
- tried on just 1 drive; failure
- reconnected both drives, booted-up, ran 'sudo grub-install /dev/md0'
- confirmed reboot ok on both drives
- tried on just 1 drive; failure
- checked the file-date on /usr/share/initramfs-tools/scripts/local and it's 11-6-08

QUESTIONS
1. Are the updated modules in Hardy-Proposed? For instance, my installed version of 'initramfs-tools' is now 0.85eubuntu39.3
2. Is the failure message/prompt different than your old PPA version? I haven't been able to wait around for it to fail, I've just been watching to see if the new extended-info message is on-screen after it fails.
3. Should my 'local' initramfs file have today's date? I scanned it for the term "mdadm" and came up empty, so I suspect it never got updated.
4. Unrelated to this test, but if I'm planning to reinstall this system from scratch, is there a better FS to choose for my RAID-1 than Ext3? I know that's the most stable and reliable, but it really is noticeably slow, and I haven't necessarily been that impressed with its 'maturity' either. (For instance, I mistakenly ran 'e2fsck' on one of the individual drives and it corrupted my md partition table. It's a full-blown miracle I was able to figure out how to reverse the error. Also, I keep losing the use of my swap partition after testing the degraded boot.)

Revision history for this message
Bill Smith (bsmith1051) wrote :

FOLLOW-UP NOTES
- I compared the 'local' script on my 1st (successful) machine and this 2nd machine; they're identical, same file-date, same lack of 'mdadm' reference. So that's not the problem here.
- I sat and waited to see if the new "Boot degraded?" prompt appeared; it did not.
- I tried editing /etc/initramfs-tools/conf.d/mdadm and set BOOT_DEGRADED=true
  Still no success with 1 drive.
- One thing I had not done was run 'dpkg-reconfigure mdadm'. I ran it and selected the Boot Degraded option, then rebooted; failure.

Finally, I tried doing the manual workaround just to make sure my system really could boot degraded, and it did. In other words, from the Busybox prompt I typed:
  (initramfs) mdadm --assemble /dev/md0
  (initramfs) mdadm --assemble /dev/md1
Rebooted and everything worked on the 1 drive.

So I'm not sure why it didn't Just Work automatically.

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

There is no appropriate place to solve this in the Hardy server guide.

Instead, we have provided instructions at:
 * https://help.ubuntu.com/community/DegradedRAID

I'm open to the idea of adding a reference to this in the grub postinst, if it's believed that will help.

:-Dustin

Changed in ubuntu-docs:
status: Triaged → Won't Fix
status: New → Won't Fix
Revision history for this message
Davias (davias) wrote :

Dustin, I have been following this from the beginning, but I'm still puzzled about how to upgrade my 7.10 AMD64 desktop, which is working OK with the patch on 2 SATA HDs.

I would like to:
1) disconnect 1 HD & boot degraded
2) update to 8.04 via network
3) update to 8.10
4) connect the second HD and sync the array

but... upgrading will require a reboot... will I be able to boot in 8.04 and do the next upgrade to 8.10?

Any help appreciated!

Revision history for this message
Dustin Kirkland  (kirkland) wrote : Re: [Bug 290885] Re: SRU: Backport of Boot Degraded RAID functionality from Intrepid to Hardy

On Mon, Jan 19, 2009 at 12:07 PM, Davias <email address hidden> wrote:
> Dustin, I have been following this from the beginning, but I'm still
> puzzled on the way to upgrade my 7.10 AMD64 desktop, working OK with the
> patch on 2 sata HD.
>
> I would like to:
> 1) disconnect 1 HD & boot degraded
> 2) update to 8.04 via network
> 3) update to 8.10
> 4) connect the second HD and sync the array
>
> but... upgrading will require a reboot... will I be able to boot in 8.04
> and do the next upgrade to 8.10?

(1) won't succeed until you have the fixes that have been added to
Hardy and Intrepid. Booting degraded is what's broken in <Hardy, and
fixed thereafter.

I recommend that you backup your most important data, and upgrade to
8.04, reboot, upgrade to 8.10, reboot, and then grub install to your
RAID device.

:-Dustin

Revision history for this message
Davias (davias) wrote :

Thanks for answering Dustin, very fast as usual!

Just to make sure I get it correctly:

1) back-up data (of course...)
2) boot normally my 7.10 amd64 desktop with both disks in Raid1
3) do a live distribution update from Update Manager "New dist rel "8.04 LTS" is available" button
4) reboot after 8.04 upgrade
5) do a live distribution update from Update Manager "New dist rel "8.10" is available" button
6) reboot after 8.10 upgrade
7) do a grub-install /dev/md0 to update grub on both disks instead of grub-install /dev/sda + grub-install /dev/sdb as I did on 7.10

Correct?

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

You got it ;-)

:-Dustin

Revision history for this message
Davias (davias) wrote :

Thank you very much for your help!

Revision history for this message
cyril.cyron (cyril-titus) wrote :

Hi, this is a very important fix and I really appreciate the community for this.

I have a question, though: is it possible to use 'dpkg-reconfigure mdadm' non-interactively?

I saw the man pages, but I would like to enable BOOT_DEGRADED as an option through the command line, without any user interaction, while using dpkg-reconfigure mdadm.

Is there something like a single command to install the latest mdadm with boot-degraded enabled? It must be non-interactive.

I hope I have clearly communicated the need. Any help would be greatly appreciated,

                                                                                      rgds,
                                                                                         cyril
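
(One approach that generally works for debconf-driven packages -- not confirmed against this particular mdadm version -- is to preseed the answer and then reconfigure non-interactively:

   $ echo "mdadm mdadm/boot_degraded boolean true" | sudo debconf-set-selections
   $ sudo dpkg-reconfigure -f noninteractive mdadm

The mdadm/boot_degraded key is the one shown by debconf-show earlier in this bug.)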
