Starting clustered lvm vg pool fails with status 5

Bug #1075950 reported by Tais P. Hansen
Affects           Status     Importance  Assigned to  Milestone
libvirt (Ubuntu)  Confirmed  Medium      Unassigned
lvm2 (Ubuntu)     New        High        Unassigned

Bug Description

Starting or autostarting a clustered lvm2 volume group fails with the error below. Non-clustered vgs work as expected.

# virsh pool-start vg2
error: Failed to start pool vg2
error: internal error Child process (/sbin/vgchange -ay vg2) status unexpected: exit status 5

# vgs vg2
  VG #PV #LV #SN Attr VSize VFree
  vg2 1 10 0 wz--nc 2.00t 1.76t

# /sbin/vgchange -ay vg2
  activation/monitoring=0 is incompatible with clustered Volume Group "vg2": Skipping.
# echo $?
5

A possible fix would be for libvirt to call vgchange with the "--monitor y" parameter:
# /sbin/vgchange -ay --monitor y vg2
  10 logical volume(s) in volume group "vg2" now active

Additional information:

# lsb_release -rd
Description: Ubuntu 12.04.1 LTS
Release: 12.04

# apt-cache policy libvirt-bin
libvirt-bin:
  Installed: 0.9.8-2ubuntu17.4
  Candidate: 0.9.8-2ubuntu17.4

# apt-cache policy lvm2
lvm2:
  Installed: 2.02.66-4ubuntu7.1
  Candidate: 2.02.66-4ubuntu7.1

Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Thanks for reporting this bug.

Changed in libvirt (Ubuntu):
importance: Undecided → Medium
status: New → Confirmed
Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Can you tell us exactly how you created the vg? Is it possible to work around this through lvm.conf settings? (I didn't see a way to do so offhand.) If not, we'll need to work through upstream to add support for this in src/storage/storage_backend_logical.c.

Revision history for this message
Tais P. Hansen (taisph) wrote :

The vg was created as a normal vg, i.e.:
vgcreate vg2 /dev/disk/by-id/dm-uuid-mpath-xxxx

Then I installed and configured clustering (cman, clvm, fence-agents) and, once cman was running, ran lvmconf --enable-cluster, then vgchange -cy vg2, and finally service clvm start.

At this point /etc/init.d/clvm complains with pretty much the same error as here - I modified the script and added --monitor y to the vgchange entries and clvm does what it should. Changes were suggested in bug 833368.
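The conversion steps described above, collected as a sketch (the vg name and device path are the ones from this report; the commands assume root and a running cman, so this is illustrative, not something to run outside a configured cluster):

```shell
# Create an ordinary (non-clustered) vg first.
vgcreate vg2 /dev/disk/by-id/dm-uuid-mpath-xxxx

# Switch lvm to cluster-aware locking (edits lvm.conf).
lvmconf --enable-cluster

# Flag the existing vg as clustered, then start the cluster lvm daemon.
vgchange -cy vg2
service clvm start
```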

I didn't find a way to modify libvirt with the same change.

Also, I've just noticed libvirt should probably use -aly (activate locally) for clustered volumes instead of -ay as it does now.

Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Thanks for that information.

As per discussion on irc, this is actually a bug in lvm2. You should be able to set monitoring to on in lvm.conf. I will mark this bug against lvm2.
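If the monitoring setting were honored (the following comments suggest it was not working at the time, hence filing against lvm2), the lvm.conf workaround would be a fragment along these lines (a sketch; section and option names as documented in lvm.conf(5)):

```
activation {
    # Enable dmeventd monitoring so that "vgchange -ay" on a clustered
    # vg does not fail with "activation/monitoring=0 is incompatible".
    monitoring = 1
}
```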

Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Sigh, I see, this (bug 833368) has been sitting a while. If needed, in 10 days I'll propose a patch to make 'monitoring = 1' work.

Changed in lvm2 (Ubuntu):
importance: Undecided → High
Revision history for this message
Dimitri John Ledkov (xnox) wrote :

Debian dropped clustered lvm2 support.

lvm2 (2.02.95-6) unstable; urgency=low

   * Drop cluster (clvm) support. It never properly worked and is more dead
     than alive.
 -- Bastian Blank <email address hidden> Wed, 02 Jan 2013 11:11:41 +0100

Do we still want it in Ubuntu?

Revision history for this message
Alasdair G. Kergon (agk2) wrote : Re: [Bug 1075950] Re: Starting clustered lvm vg pool fails with status 5

On Sat, Jan 12, 2013 at 12:03:16AM -0000, Dmitrijs Ledkovs wrote:
> Debian dropped clustered lvm2 support.
> * Drop cluster (clvm) support. It never properly worked and is more dead
> than alive.
> -- Bastian Blank <email address hidden> Wed, 02 Jan 2013 11:11:41 +0100

It is certainly not dead upstream and does work properly. It remains
supported in Fedora/Red Hat Enterprise Linux/CentOS and presumably in
many other distributions too.

Alasdair
