pvcreate causing problems for dmraid

Bug #183930 reported by jhansonxi
Affects: lvm2 (Ubuntu)
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

Binary package hint: dmraid

I'm encountering a problem where LVM2 pvcreate appears to corrupt the superblock of a RAID1 pair, which results in the initramfs not being able to mount it.

Hardware:
Intel D865GVHZ
(2) WD 200GiB PATA (sda/sdb)
(2) WD 400GiB SATA (sdc/sdd)

Attempted configuration:
sdc1+sdd1>md0>ext3 (/boot)
sdc2+sdd2>md1>dm-crypt (swap_crypt)>swap
sdc3+sdd3>md2>lvm (vg0)
sda+sdb>md3>lvm (vg0)
vg0>vg0-lv0>dm-crypt(vg0-lv0_crypt)>ext3 (/)
vg0>vg0-lv1>dm-crypt(vg0-lv1_crypt)>ext3 (/home)
vg0>vg0-lv2>dm-crypt(vg0-lv2_crypt)>ext3 (/home/jhansonxi) (pam_mount)
vg0>vg0-lv3>dm-crypt(vg0-lv3_crypt)>ext3 (/var/lib/backuppc)
vg0>vg0-lv4>dm-crypt(vg0-lv4_crypt)>ext3 (/local/private/media)
vg0>vg0-lv5>ext3 (/local/public/archive)
vg0>vg0-lv6>ext3 (/local/public/linux)
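
For reference, the intended stack roughly corresponds to commands like the following (sizes are placeholders, the md1/swap branch is omitted, and the installer would normally do all of this through partman):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda /dev/sdb
pvcreate /dev/md2 /dev/md3
vgcreate vg0 /dev/md2 /dev/md3
lvcreate -n lv0 -L 20G vg0                          # size is a placeholder
cryptsetup luksFormat /dev/mapper/vg0-lv0
cryptsetup luksOpen /dev/mapper/vg0-lv0 vg0-lv0_crypt
mkfs.ext3 /dev/mapper/vg0-lv0_crypt                 # becomes /
mkfs.ext3 /dev/md0                                  # becomes /boot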

In this setup only RAID1 arrays are used, and all crypt volumes except vg0-lv0_crypt use key files for passphrases. The Gutsy (and Hardy Alpha 2) i386 Alternate installer made a complete mess of this setup (bug #180269), so a ridiculous number of manual workarounds were needed to even attempt it. One issue was that Grub ended up on sda, which caused all kinds of spurious problems and warnings about invalid superblocks, bd_claim failures, etc., so I wiped sda and sdb and recreated the array manually. I verified that md3 is accessible from the LiveCD and from the installed system, and the sda and sdb superblocks look normal from what I can tell. But once I run pvcreate on md3, fdisk reports the disk as messed up and a dd shows it as zeroed, completely different from sdc/sdd. Yet mdadm seems to think it's normal and LVM doesn't complain either.

If I then add md3 to vg0, the system will not boot: md3 apparently doesn't get activated in the initramfs, so vg0, vg0-lv0, vg0-lv0_crypt, and / are missing. I can start it manually from the initramfs. I attempted to work around the superblock issue by creating two partitions on sd[ab] and making only sd[ab]2 a RAID member, but that didn't solve the problem. I wasn't sure whether the zeroed superblock was normal, so I asked about it but didn't get any answers:
https://answers.launchpad.net/ubuntu/+question/22332
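
For anyone who wants to compare, checks along these lines should show the state described above (exact commands and byte counts are approximate):

mdadm --examine /dev/sda /dev/sdb                          # member superblocks
mdadm --detail /dev/md3                                    # array state as mdadm sees it
dd if=/dev/sda bs=512 count=8 2>/dev/null | hexdump -C     # raw data at the start of sda
dd if=/dev/sdc bs=512 count=8 2>/dev/null | hexdump -C     # same region on sdc for comparison
fdisk -l /dev/sda
pvdisplay /dev/md3                                         # LVM's view of the PV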

Revision history for this message
jhansonxi (jhansonxi) wrote :

After some more attempts, I finally figured out how to get the md3 RAID set up the same way partman sets up md[0-2] on the Alternate installer CD. There have been some constant issues that I forgot to mention that may be a factor. Whenever I attempted to repartition sda and set the partition type to FD, as soon as I exited cfdisk the dmraid system would assemble md3, forcing me to mdadm -S /dev/md3 before I could repartition sdb. In addition, /dev/sda1 wouldn't show up while /dev/sdb1 did, even though fdisk -l showed that it existed on sda. mdadm didn't see it, so I'm not sure whether fdisk was reading it out of /proc/partitions or just assuming it existed. Without it I couldn't create md3 on sd[ab]1, only on sd[ab].

Thinking that was part of the problem, I tried various ways of getting sda1 to show up, including partprobe, various dd zeroings, and repartitioning. I noticed while doing this that cfdisk doesn't set the disk identifier (serial number) when it writes the label, so I used mklabel in parted to set it on both drives first. Finally I repartitioned sdb as NTFS, zeroed the first 0.5 MiB or so of sda, removed the md3 entry from mdadm.conf, and rebooted. I then repartitioned sda as type FD, and this time sda1 showed up. I created md3 on sda1 without a second device and rebooted again to make sure it was persistent, then repartitioned sdb, added sdb1 as a hot spare, and let it resync.

After some more reboots and tests, I ran pvcreate on md3 and checked the superblocks - they remained intact this time. I rebooted and double-checked everything, then added md3 to vg0 and rebooted. It failed and I ended up in the initramfs busybox. I removed md3 from vg0 and was able to boot again. So there definitely is a problem with booting from a VG with multiple PV devices, at least if they are md arrays. My workaround was to create a new volume group, vg1, with only md3 in it (a rough command sketch is at the end of this comment). I lose some LVM flexibility this way, but at least it usually boots. By "usually" I mean that there is an additional error that now pops up every few boots:

device-mapper: table: 254:0: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table
Command failed: Not a block device
cryptsetup: cryptsetup failed, bad password or options?

I never get a LUKS passphrase prompt. I can't unlock the volume from initramfs either. All I can do is reboot and try again.
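
For completeness, a manual attempt from the (initramfs) busybox shell would look roughly like this, assuming mdadm, the lvm tool, and cryptsetup are present in the image (device names follow the layout above); on the bad boots the unlock still fails for me:

mdadm --assemble /dev/md2 /dev/sdc3 /dev/sdd3
mdadm --assemble /dev/md3 /dev/sda1 /dev/sdb1
lvm vgscan
lvm vgchange -ay
cryptsetup luksOpen /dev/mapper/vg0-lv0 vg0-lv0_crypt
exit                                            # continue booting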

I also noticed a suspicious error on the boot attempt immediately after removing md3 from vg0:
Starting kernel event manager... [ OK ]
 * Loading hardware drivers...
error receiving uevent message: No buffer space available

I found some references to an old kernel bug but this message only showed up once so it may be unrelated.
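
For anyone trying to reproduce the vg1 workaround described above, the sequence was roughly as follows, abbreviated to one drive where the steps repeat (exact options are reconstructed from memory):

dd if=/dev/zero of=/dev/sda bs=512 count=1024   # clear the first ~0.5 MiB of sda
parted /dev/sda mklabel msdos                   # fresh label, which also writes a disk identifier
fdisk /dev/sda                                  # create sda1, partition type fd
# (stale md3 entry removed from mdadm.conf, plus a reboot, before this step)
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda1 missing
# another reboot to confirm the array comes back, then:
fdisk /dev/sdb                                  # create sdb1, partition type fd
mdadm --add /dev/md3 /dev/sdb1                  # comes in as a hot spare and resyncs
pvcreate /dev/md3
vgcreate vg1 /dev/md3                           # separate VG so vg0 keeps a single PV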

Revision history for this message
jhansonxi (jhansonxi) wrote :

Note: sde/sde1 is a USB flash key I had plugged in.

Revision history for this message
jhansonxi (jhansonxi) wrote :

Forgot to include basic kernel debug info.

uname -a:
Linux jhansonxi-s1 2.6.22-14-generic #1 SMP Tue Dec 18 08:02:57 UTC 2007 i686 GNU/Linux

Revision history for this message
jhansonxi (jhansonxi) wrote :

Other app versions used:
cryptsetup 1.0.5
mdadm - v2.6.2 - 21st May 2007
cfdisk (util-linux-ng 2.13)
fdisk (util-linux-ng 2.13)
GNU Parted 1.7.1
LVM version: 2.02.26 (2007-06-15)
  Library version: 1.02.20 (2007-06-15)
  Driver version: 4.11.0

Revision history for this message
Phillip Susi (psusi) wrote :

This is completely unrelated to dmraid.

Changed in dmraid:
status: New → Invalid

Phillip Susi (psusi)
Changed in dmraid:
status: Invalid → New
Revision history for this message
Andreas Noteng (andreas-noteng) wrote :

Thank you for taking the time to report this bug and helping to make Ubuntu better. You reported this bug a while ago and there hasn't been any activity in it recently. We were wondering if this is still an issue for you. Can you try with the latest Ubuntu release? Thanks in advance.

Changed in lvm2 (Ubuntu):
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for lvm2 (Ubuntu) because there has been no activity for 60 days.]

Changed in lvm2 (Ubuntu):
status: Incomplete → Expired