md_d0 array fabricated, prevents mounting md0 partitions

Bug #615186 reported by Michael DePaulo
This bug affects 8 people
Affects: mdadm (Ubuntu)
Status: In Progress
Importance: Undecided
Assigned to: Surbhi Palande

Bug Description

Binary package hint: mdadm

I've had this problem happen a few times in the past with previous versions of Ubuntu (I'm now on Lucid); I forget how I got rid of it then. What happened now is that I just added a couple of eSATA disks, /dev/sdd and /dev/sde (they function like regular SATA disks). I had set up my /dev/md0 array from /dev/sdb and /dev/sdc months ago. Now, all of a sudden, the system fabricates a /dev/md_d0 array somehow (based on /dev/md0p1) and fails to mount /dev/md0p1. This is the output from cat /proc/mdstat:
mike@hegemon:~$ cat /proc/mdstat
Personalities : [linear] [raid1] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md_d0 : inactive md0p1[1](S)
      245111616 blocks

md0 : active raid1 sdc[1] sdb[0]
      245117312 blocks [2/2] [UU]

unused devices: <none>
My workaround for now is to manually remove /dev/md_d0 every time I reboot the system, as follows:
mike@hegemon:~$ sudo mdadm --manage --stop /dev/md_d0
mdadm: stopped /dev/md_d0
I can then run mount -a, and it successfully mounts /dev/md0p1.
I get this output after deleting the array; I believe it is the same as before I delete it:
mike@hegemon:~$ sudo ls /dev/md*
/dev/md0 /dev/md_d0 /dev/md_d0p2 /dev/md_d0p4
/dev/md0p1 /dev/md_d0p1 /dev/md_d0p3

/dev/md:
d0 d0p1 d0p2 d0p3 d0p4
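The reboot workaround above can be wrapped in a small helper. This is only a sketch of the reporter's two commands; it assumes mdadm is installed, and it touches the device only if the phantom node actually exists:

```shell
#!/bin/sh
# Sketch of the workaround: stop the fabricated array, then mount
# everything in fstab. Acts only if the stray device node is present.
stop_stray_array() {
    dev="$1"
    if [ -b "$dev" ]; then
        sudo mdadm --manage --stop "$dev" && sudo mount -a
    else
        echo "$dev not present; nothing to do"
    fi
}

stop_stray_array /dev/md_d0
```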

ProblemType: Bug
DistroRelease: Ubuntu 10.04
Package: mdadm 2.6.7.1-1ubuntu15
ProcVersionSignature: Ubuntu 2.6.32-24.39-generic 2.6.32.15+drm33.5
Uname: Linux 2.6.32-24-generic x86_64
Architecture: amd64
Date: Sun Aug 8 21:05:33 2010
InstallationMedia: Ubuntu 10.04 LTS "Lucid Lynx" - Release amd64 (20100429)
MDadmExamine.dev.sda: Error: command ['/sbin/mdadm', '-E', '/dev/sda'] failed with exit code 1: mdadm: No md superblock detected on /dev/sda.
MDadmExamine.dev.sda1: Error: command ['/sbin/mdadm', '-E', '/dev/sda1'] failed with exit code 1: mdadm: No md superblock detected on /dev/sda1.
MDadmExamine.dev.sda2: Error: command ['/sbin/mdadm', '-E', '/dev/sda2'] failed with exit code 1: mdadm: No md superblock detected on /dev/sda2.
MDadmExamine.dev.sda3: Error: command ['/sbin/mdadm', '-E', '/dev/sda3'] failed with exit code 1: mdadm: No md superblock detected on /dev/sda3.
MDadmExamine.dev.sda5: Error: command ['/sbin/mdadm', '-E', '/dev/sda5'] failed with exit code 1: mdadm: No md superblock detected on /dev/sda5.
MDadmExamine.dev.sdd: Error: command ['/sbin/mdadm', '-E', '/dev/sdd'] failed with exit code 1: mdadm: No md superblock detected on /dev/sdd.
MDadmExamine.dev.sdd1: Error: command ['/sbin/mdadm', '-E', '/dev/sdd1'] failed with exit code 1: mdadm: No md superblock detected on /dev/sdd1.
MDadmExamine.dev.sde: Error: command ['/sbin/mdadm', '-E', '/dev/sde'] failed with exit code 1: mdadm: No md superblock detected on /dev/sde.
MDadmExamine.dev.sde1: Error: command ['/sbin/mdadm', '-E', '/dev/sde1'] failed with exit code 1: mdadm: No md superblock detected on /dev/sde1.
MachineType: Gigabyte Technology Co., Ltd. 965P-S3
ProcCmdLine: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=6a13dd7d-05c7-4f03-af14-031d3821217b ro quiet splash
ProcEnviron:
 PATH=(custom, no user)
 LANG=en_US.utf8
 SHELL=/bin/bash
ProcMDstat:
 Personalities : [linear] [raid1] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
 md0 : active raid1 sdc[1] sdb[0]
       245117312 blocks [2/2] [UU]

 unused devices: <none>
SourcePackage: mdadm
dmi.bios.date: 06/25/2009
dmi.bios.vendor: Award Software International, Inc.
dmi.bios.version: F14
dmi.board.name: 965P-S3
dmi.board.vendor: Gigabyte Technology Co., Ltd.
dmi.chassis.type: 3
dmi.chassis.vendor: Gigabyte Technology Co., Ltd.
dmi.modalias: dmi:bvnAwardSoftwareInternational,Inc.:bvrF14:bd06/25/2009:svnGigabyteTechnologyCo.,Ltd.:pn965P-S3:pvr:rvnGigabyteTechnologyCo.,Ltd.:rn965P-S3:rvr:cvnGigabyteTechnologyCo.,Ltd.:ct3:cvr:
dmi.product.name: 965P-S3
dmi.sys.vendor: Gigabyte Technology Co., Ltd.
etc.blkid.tab: Error: [Errno 2] No such file or directory: '/etc/blkid.tab'

Revision history for this message
Michael DePaulo (mikedep333) wrote :
Revision history for this message
Surbhi Palande (csurbhi) wrote :

Hi Michael DePaulo,
Thanks a lot for your bug report. I have created an mdadm test package which I believe should fix your bug. Would you please try this package? Remember that this is a test package. If it fixes the bug for you (and for others too), we will fold these changes into the mdadm updates.

FYI, for existing Ubuntu releases the mdadm package shall stay at 2.7.1; however, Natty would have mdadm at 3.4.1. This procedure is intended to test the mdadm fixes for 2.7.1. Here is the rough procedure that needs to be followed:

Testing auto-assembly of your md array when your rootfs lies on it:
1) Install the mdadm package and initramfs package kept at: https://edge.launchpad.net/~csurbhi/+archive/mdadm-autoassembly
2) Run /usr/share/mdadm/mkconf and ensure that your /etc/mdadm/mdadm.conf has the array definition.
a) Save your original initramfs in /boot itself, as, say, /boot/initrd-old.img.
b) Then run update-initramfs -c -k <your-kernel-version> and store this initramfs as /boot/initrd-new.img. We shall use this initramfs as a safety net: if you cannot boot with the auto-assembly fixes, you should not be left stranded. Through grub's edit menu you can fall back to this safety net by setting initrd=initrd-new.img (or, if that does not work for some reason, fall back to your older initrd=initrd-old.img). This way you can be sure that you can still boot your precious system.
c) Now comment out or remove the ARRAY definitions from your /etc/mdadm/mdadm.conf and once again run the same "update-initramfs -c -k <your-kernel-version>" to generate a brand-new initramfs.
3) Run mdadm --detail --scan and note the UUIDs of the array. Note the hostname stored in your array: does it match your real hostname? If not, we can fix that at the initramfs prompt, which you will inevitably land at if you try auto-assembly. Also note the device components that form the root md device. Keep this paper for cross-checking when you reboot.
4) Reboot.
5) If you are at the initramfs prompt, here are the things you should first check:
a) ls /bin/hostname /etc/hostname - are these files present?
b) Run "hostname". Does this show the hostname that your system is intended to have? Is it the same as the contents of /etc/hostname?
c) ls /var/run - is this directory there?
If you answer yes to the above three questions, then so far so good. Now run the following command:
mdadm --assemble -U uuid /dev/<md-name> <dev-components-listed-here>
The mdadm --detail --scan that you ran previously should have given you the component names if you don't know them right now. Hopefully you have them listed on your paper.
E.g., in my case I ran:
mdadm --assemble -U uuid /dev/md0 /dev/sda1 /dev/sdb1
Again run:
mdadm --detail --scan <md-device> and verify that the UUIDs are indeed updated and that the hostname reflects the hostname stored in /etc/hostname. You can now press Ctrl+D, and you should come back to the root prompt. However, you still need to test auto-assembly of your root md device. To do that, simply reboot; you should not see the initramfs prompt this time. You should land gently at your root prompt as you ex...
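The command sequence in the steps above can be condensed into a dry-run script. This is a sketch only: it prints each command rather than executing it unless RUN=1 is set, and the device names (/dev/md0, /dev/sda1, /dev/sdb1) are the placeholders from the example, not your real array members:

```shell
#!/bin/sh
# Dry-run sketch of the test procedure; set RUN=1 to actually execute.
KVER="$(uname -r)"
run() { echo "+ $*"; [ "${RUN:-0}" = 1 ] && "$@"; return 0; }

run cp /boot/initrd.img-"$KVER" /boot/initrd-old.img   # step 2a: safety copy
run /usr/share/mdadm/mkconf                            # step 2: regenerate conf
run update-initramfs -c -k "$KVER"                     # steps 2b/2c: new initramfs
run mdadm --detail --scan                              # step 3: note UUIDs/hostname
# step 5, run at the initramfs prompt after rebooting:
run mdadm --assemble -U uuid /dev/md0 /dev/sda1 /dev/sdb1
```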


Changed in mdadm (Ubuntu):
status: New → Confirmed
assignee: nobody → Surbhi Palande (csurbhi)
status: Confirmed → In Progress
Revision history for this message
ugmoe2000 (ericpulvino) wrote :

Surbhi, your modified mdadm package alone fixed the bug for me. THANK YOU THANK YOU THANK YOU -- this bug has been affecting me for what seems like a few years now. My workaround prior to this was a startup script in rc.local which auto-deleted the defunct /dev/md_d0 and /dev/md_d1 devices, then scanned and assembled the available arrays. It's much nicer when it just works!
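The rc.local workaround described here was never posted, but it might have looked something like this hypothetical reconstruction (device names are the ones mentioned in the comment):

```shell
#!/bin/sh
# Hypothetical reconstruction of the rc.local workaround: stop any phantom
# md_d* devices that exist, then rescan and assemble the real arrays.
cleanup_phantom_arrays() {
    for dev in "$@"; do
        [ -b "$dev" ] && mdadm --manage --stop "$dev"
    done
    mdadm --assemble --scan   # reassemble from mdadm.conf / superblocks
    mount -a
}

# invoked from /etc/rc.local at boot, e.g.:
# cleanup_phantom_arrays /dev/md_d0 /dev/md_d1
```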

Revision history for this message
Nicolas Van Eenaeme (nicolas-netlog) wrote :

Hi all,

Surbhi's packages also fixed my problem! Great work and thanks a lot!

Revision history for this message
Surbhi Palande (csurbhi) wrote :

1) The PPA requested for testing consists of these patches. They are needed for the proper working of mdadm auto-assembly.
2) Also, for mdadm auto-assembly to work properly, the following needs sponsorship for Maverick and Lucid:
https://code.launchpad.net/~csurbhi/+junk/initramfs.mdadm.fixes.

Please do consider merging these patches for Maverick and Lucid. Thanks!

tags: added: patch
Revision history for this message
Christian Hudon (chrish) wrote :

Any reason why these patches haven't been folded into Ubuntu 10.04 LTS? I had a similar problem (I created an md0 RAID1 array with one device missing; after a reboot it came up as md_d0 instead), and these patches (installed from the two packages in the PPA) fix it. That would have saved me a couple of hours of work.
