[Intrepid] dmraid 5 error: "raid4-5" not in kernel

Bug #287751 reported by Jan Friedrich
This bug affects 2 people
Affects          Status        Importance  Assigned to    Milestone
dmraid (Ubuntu)  Fix Released  High        Luke Yelavich
Intrepid         Fix Released  High        Luke Yelavich

Bug Description

Binary package hint: dmraid

After I upgraded my Hardy to Intrepid I couldn't access my fakeraid partitions (where my Windows is installed) anymore. My onboard controller is an Intel ICH9.

When typing "dmraid -r" I get the following output:
/dev/sdd: isw, "isw_djffjdjgdi", GROUP, ok, 488397165 sectors, data@ 0
/dev/sdc: isw, "isw_djffjdjgdi", GROUP, ok, 488397165 sectors, data@ 0
/dev/sdb: isw, "isw_djffjdjgdi", GROUP, ok, 488397165 sectors, data@ 0

So dmraid still seems to recognize my RAID. But with "dmraid -ay" I get:
ERROR: device-mapper target type "raid4-5" not in kernel

I found the dm-raid4-5 module in /lib/modules/2.6.27-7-generic/kernel/ubuntu and tried "modprobe dm-raid4-5", which seems to load (lsmod lists dm_raid4_5), but I still get the "raid4-5 not in kernel" message.

I'm using version 1.0.0.rc14 of dmraid.
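
Anyone hitting this can check which target names the running kernel has actually registered: `dmsetup targets` (from the dmsetup package, needs root) lists them. Below is a minimal sketch of the mismatch, run against a captured sample of that output rather than a live system; the sample lines and version strings are illustrative.

```shell
# On the affected machine you would run:
#   sudo modprobe dm-raid4-5 && sudo dmsetup targets
# Here, a captured sample of `dmsetup targets` output stands in for the
# live system (names/versions are illustrative):
sample='striped          v1.0.2
linear           v1.0.3
raid45           v0.2427'

# dmraid's table asks for a target literally named "raid4-5" -- not registered:
echo "$sample" | grep -q '^raid4-5' && echo "raid4-5: present" || echo "raid4-5: missing"
# ...whereas the kernel module registers itself as "raid45":
echo "$sample" | grep -q '^raid45' && echo "raid45: present"
```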


Revision history for this message
Phillip Susi (psusi) wrote :

Could you be more specific about which version of the dmraid package you are using? The current one in Intrepid (1.0.0.rc14-2ubuntu11) requests "raid45", which is the correct name, not "raid4-5".

Changed in dmraid:
assignee: nobody → psusi
status: New → Incomplete
Revision history for this message
Jan Friedrich (brooklyn-ffmradio) wrote :

I'm using dmraid version 1.0.0.rc14-2ubuntu11, the only version available to me in package management.
This morning it seemed to work for a moment: I found my Volume0 in /dev/mapper, but not the partitions (Volume01, 02, 05). Sadly, after a reboot I'm back to: "ERROR: device-mapper target type "raid4-5" not in kernel"

Maybe my dmesg output gives you a hint:
[ 1295.729404] device-mapper: dm-raid45: initialized v0.2427
[ 1295.729503] device-mapper: table: 254:1: raid4-5: unknown target type
[ 1295.729506] device-mapper: ioctl: error adding target to table
[ 1295.732803] attempt to access beyond end of device
[ 1295.732810] sdb: rw=0, want=932733760, limit=488397168
[ 1295.732811] __ratelimit: 3 callbacks suppressed
[ 1295.732813] Buffer I/O error on device sdb2, logical block 827877504
[ 1295.732816] attempt to access beyond end of device
[ 1295.732818] sdb: rw=0, want=932733761, limit=488397168
[ 1295.732819] Buffer I/O error on device sdb2, logical block 827877505
[ 1295.732821] attempt to access beyond end of device
[ 1295.732823] sdb: rw=0, want=932733762, limit=488397168
[...]
[ 1295.732847] sdb: rw=0, want=932733767, limit=488397168
[ 1295.732849] Buffer I/O error on device sdb2, logical block 827877511
[ 1295.732852] attempt to access beyond end of device
[ 1295.732853] sdb: rw=0, want=932733760, limit=488397168
[ 1295.732855] Buffer I/O error on device sdb2, logical block 827877504
[ 1295.732857] attempt to access beyond end of device
[ 1295.732858] sdb: rw=0, want=932733761, limit=488397168
[ 1295.732860] Buffer I/O error on device sdb2, logical block 827877505
[ 1295.732862] attempt to access beyond end of device
[ 1295.732863] sdb: rw=0, want=932733762, limit=488397168
[ 1295.732865] attempt to access beyond end of device
[ 1295.732866] sdb: rw=0, want=932733763, limit=488397168
[ 1295.732868] attempt to access beyond end of device
[ 1295.732869] sdb: rw=0, want=932733764, limit=488397168
[ 1295.732871] attempt to access beyond end of device
[... about 70 times the same line]
[ 1295.733216] sdb: rw=0, want=932733899, limit=488397168
[ 1295.733218] attempt to access beyond end of device
[ 1295.733219] sdb: rw=0, want=932733900, limit=488397168
[ 1316.477536] device-mapper: table: 254:0: raid4-5: unknown target type
[ 1316.477542] device-mapper: ioctl: error adding target to table
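
The "access beyond end of device" noise is consistent with the kernel reading partition entries that describe the assembled RAID volume against a single raw disk. A quick check with the figures from the dmesg above (treating the three-disk isw set as RAID5, which is why dmraid wants the raid4-5 target, so roughly two disks of usable space; that layout is an assumption):

```shell
disk_limit=488397168             # sectors on raw /dev/sdb (dmesg "limit=")
raid_usable=$((2 * 488397165))   # ~usable sectors of a 3-disk RAID5 set
want=932733760                   # first failing sector (dmesg "want=")

echo "past end of raw disk: $((want > disk_limit))"   # 1 = true
echo "inside RAID volume:   $((want < raid_usable))"  # 1 = true
```

So the requested sectors belong to a partition sized for the array, being read against one member disk.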

Revision history for this message
Phillip Susi (psusi) wrote :

My bad, there is a dpatch that changes it to 4-5, which is wrong and needs to be disabled. Luke, can you please disable 02_raid45_to_raid4-5.dpatch, hopefully in time for the final release?
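
For reference, dpatch applies whatever debian/patches/00list names, so disabling a patch amounts to deleting its line. A self-contained sketch on a stand-in 00list (the neighbouring patch names are made up; only 02_raid45_to_raid4-5.dpatch comes from this report):

```shell
# Work on a temporary stand-in for debian/patches/00list:
f=$(mktemp)
cat > "$f" <<'EOF'
01_some_earlier_fix.dpatch
02_raid45_to_raid4-5.dpatch
03_some_later_fix.dpatch
EOF

# Disabling the patch is just removing its entry:
sed -i '/02_raid45_to_raid4-5/d' "$f"

cat "$f"
rm -f "$f"
```

In the real package you would then rebuild with dpkg-buildpackage as usual.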

Changed in dmraid:
assignee: psusi → themuso
status: Incomplete → In Progress
Colin Watson (cjwatson)
Changed in dmraid:
importance: Undecided → High
milestone: none → ubuntu-8.10
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package dmraid - 1.0.0.rc14-2ubuntu12

---------------
dmraid (1.0.0.rc14-2ubuntu12) intrepid; urgency=low

  * debian/patches/02_raid45_to_raid4-5.dpatch: Remove, seems the kernel
    module is using the target that the dmraid code originally expected.
    (LP: #287751)

 -- Luke Yelavich <email address hidden> Fri, 24 Oct 2008 10:42:49 +1100

Changed in dmraid:
status: In Progress → Fix Released
Revision history for this message
Jan Friedrich (brooklyn-ffmradio) wrote :

Version 1.0.0.rc14-2ubuntu12 works for me - great job! Thanks! :)
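
For anyone else checking whether their installed package already includes the fix: `dpkg-query -W -f='${Version}\n' dmraid` prints the installed version. A sketch of the comparison using GNU `sort -V` so it runs without dpkg (sort -V is close enough to Debian version ordering for these strings; the installed value below is just the one from this report):

```shell
installed=1.0.0.rc14-2ubuntu11   # example: the version Jan originally had
fixed=1.0.0.rc14-2ubuntu12       # first version containing the fix

# sort -V orders version strings; if the installed one sorts first (and
# differs from the fixed one), it predates the fix:
oldest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "needs upgrade"
else
    echo "fix present"
fi
```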

Revision history for this message
Aleksander Jerič (ninjattt) wrote :

I'm experiencing the same issue with RAID0 on an nVidia 790i Ultra with FakeRAID (Intrepid, 64-bit).

Could this patch solve this on RAID0?

some output...
[ 6.482497] scsi 2:0:0:0: Attached scsi generic sg0 type 0
[ 6.485485] scsi 3:0:0:0: Attached scsi generic sg1 type 0
[ 6.497913] Driver 'sd' needs updating - please use bus_type methods
[ 6.501236] sd 2:0:0:0: [sda] 976773168 512-byte hardware sectors (500108 MB)
[ 6.504147] sd 2:0:0:0: [sda] Write Protect is off
[ 6.507001] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
[ 6.507022] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 6.510035] sd 2:0:0:0: [sda] 976773168 512-byte hardware sectors (500108 MB)
[ 6.512969] sd 2:0:0:0: [sda] Write Protect is off
[ 6.515873] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
[ 6.515893] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 6.518894] sda: sda1 sda2 < >
[ 6.544341] sda: p1 exceeds device capacity
[ 6.544363] sda: p2 exceeds device capacity
[ 6.547313] sd 2:0:0:0: [sda] Attached SCSI disk
[ 6.547384] sd 3:0:0:0: [sdb] 976773168 512-byte hardware sectors (500108 MB)
[ 6.547398] sd 3:0:0:0: [sdb] Write Protect is off
[ 6.547399] sd 3:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 6.547421] sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 6.547465] sd 3:0:0:0: [sdb] 976773168 512-byte hardware sectors (500108 MB)
[ 6.547477] sd 3:0:0:0: [sdb] Write Protect is off
[ 6.547478] sd 3:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 6.547499] sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 6.547501] sdb: unknown partition table
[ 6.570431] sd 3:0:0:0: [sdb] Attached SCSI disk
[ 7.052227] ieee1394: Host added: ID:BUS[0-00:1023] GUID[56a2273d00044b18]
[ 7.089613] attempt to access beyond end of device
[ 7.090561] attempt to access beyond end of device
[ 7.090563] sda: rw=0, want=1929791808, limit=976773168
[ 7.090565] Buffer I/O error on device sda1, logical block 1929791744
[ 7.090567] attempt to access beyond end of device
[ 7.090568] sda: rw=0, want=1929791809, limit=976773168
[ 7.090570] Buffer I/O error on device sda1, logical block 1929791745
[ 7.090571] attempt to access beyond end of device
[ 7.090572] sda: rw=0, want=1929791810, limit=976773168
[ 7.090573] Buffer I/O error on device sda1, logical block 1929791746
[ 7.090575] attempt to access beyond end of device
[ 7.090576] sda: rw=0, want=1929791811, limit=976773168

Revision history for this message
Aleksander Jerič (ninjattt) wrote :

OK.. I have the latest dmraid package (1.0.0.rc14-2ubuntu12; fresh install a few days ago)

alex@quad:~$ sudo dmraid -r
/dev/sdb: nvidia, "nvidia_gfedehcd", stripe, ok, 976773166 sectors, data@ 0
/dev/sda: nvidia, "nvidia_gfedehcd", stripe, ok, 976773166 sectors, data@ 0
alex@quad:~$ sudo dmraid -ay
RAID set "nvidia_gfedehcd" already active
RAID set "nvidia_gfedehcd1" already active
RAID set "nvidia_gfedehcd5" already active

But I still get those errors...

Revision history for this message
Giuseppe Iuculano (giuseppe-iuculano) wrote :

Those errors don't seem to be related to dmraid...

Giuseppe.

Revision history for this message
Aleksander Jerič (ninjattt) wrote :

Well, the RAID0 array worked fine with dmraid.
The combined capacity of both hard drives was 1 TB (2x 500 GB) and Ubuntu was loading fine.
It seems that the kernel or dmraid (I don't know, maybe GRUB?) reported the partition table from the first plain SATA device, which referenced the second SATA device's partition table, instead of using the "common" partition table spanning both devices as RAID0.
The errors were shown only at boot time. I just wanted to point it out.

I've deleted the RAID array now and use the hard drives separately.
I used this howto: https://help.ubuntu.com/community/FakeRaidHowto
Regards, Alex

Revision history for this message
quequotion (quequotion) wrote :

I have the same issue as Alex.

I am not sure anything is actually wrong.

I get these messages at boot, but I don't notice any problems with my system.

I suppose the ultimate test would be to fill the drive to the end once with a single large file and then again with multiple small files and see if everything gets accounted for... but my RAID is 2 TB, Ubuntu is installed on it directly, and I'm just not that enthusiastic.

Rather than a fix for the "problem", I wonder which particular process/module outputs these messages and whether there's any means to suppress them.

Any ideas?
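
One way to quiet (not fix) these lines on the console: they are ordinary kernel printk messages, and Buffer I/O errors go out at KERN_ERR (level 3), so lowering the console log level below that hides them at boot. A sketch; the running level is the first field of kernel.printk, and the sample values below are just common defaults:

```shell
# On a live system:
#   sudo dmesg -n 3               # console then shows only messages more
#                                 # severe than err (levels 0-2)
#   cat /proc/sys/kernel/printk   # current levels: console default min boot
#
# Parsing a sample of that file (values are illustrative defaults):
printk_values='4 4 1 7'
set -- $printk_values
echo "console log level: $1"   # messages at level >= $1 stay off the console
```

Note this only affects the console; the messages still land in syslog/dmesg.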

Revision history for this message
quequotion (quequotion) wrote :

Some relevant information from my syslog:

#kernel loads the 'sd' driver and detects (correctly) one of the four 500 GB drives of which my 2 TB RAID0 consists.
Oct 7 19:20:43 quequotion kernel: [ 7.109183] sd 0:0:0:0: [sda] 976773168 512-byte hardware sectors (500108 MB)
[...]
#kernel remarks that the partitions specified by the first disk are much larger than the first disk, which I figure is true, assuming it means the RAID partitions (approximately 1.9 TB root and 1 GB swap, both across 4 disks).
Oct 7 19:20:43 quequotion kernel: [ 7.132039] sda: p1 exceeds device capacity
Oct 7 19:20:43 quequotion kernel: [ 7.132059] sda: p2 exceeds device capacity
[...]
#Nonetheless boot continues (I hate to think the Linux kernel is capable of being more confused by its operation than I am, but you have to admire how it picks up and carries on even over several misgivings, objections and outright nonsense) and the rest of the four drives are loaded with this message at the end of each one (?=b, c, and d):
Oct 7 19:20:43 quequotion kernel: [ 7.158356] sd?: unknown partition table
[...]
#Some time later, amidst the loading of several other unrelated drivers-
Oct 7 04:10:28 quequotion kernel: [ 7.512474] attempt to access beyond end of device
Oct 7 04:10:28 quequotion kernel: [ 7.512478] sda: rw=0, want=3904983616, limit=976773168

Sometime I'll have to clear the syslog, reboot, and see what range it's failing to get. The limit is set to the correct physical size of the drive, but the target must be somewhere within the logical size of the RAID.
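
That hunch checks out against the numbers already in the syslog excerpt: the failing sector is past one raw 500 GB drive but within the four-drive striped volume. A minimal shell check:

```shell
disk_limit=976773168            # sectors per 500 GB drive (syslog "limit=")
raid_size=$((4 * disk_limit))   # logical sectors of the 4-disk RAID0
want=3904983616                 # failing sector (syslog "want=")

echo "past end of one drive: $((want > disk_limit))"  # 1 = true
echo "inside the RAID0 span: $((want < raid_size))"   # 1 = true
```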

Further questions:
1. Why is the kernel attempting to directly access any data on /dev/sda other than the partition table? Once it's loaded, all other data should be loaded from /dev/mapper/nvidia_somerandomalphanumerics1... I think....

2. Does this look like a problem with the 'sd' driver, the 'dmraid' driver, or the kernel itself?

Another issue:

This came up in syslog as well:
Oct 7 04:10:28 quequotion kernel: [ 7.097556] Driver 'sd' needs updating - please use bus_type methods

I'm looking into that now, but I'll consider it related until proven otherwise (there can be no justice for computers).

Revision history for this message
Danny Wood (danwood76) wrote :

These bugs are addressed in Jaunty (no more messages or raw devices), so upgrade if the messages really bother you.
I don't believe any harm is caused by the kernel trying to access the raw devices, but it is possible, I suppose.

This bug has been fixed and is closed.
