grub2 shows 'biosdisk read error', then boots

Bug #396564 reported by Michael Kofler
This bug affects 30 people
Affects            Status   Importance  Assigned to  Milestone
grub               Unknown  Unknown
grub2 (Ubuntu)     Invalid  Undecided   Unassigned

Nominated for Karmic by ChristianSmith
Nominated for Lucid by ChristianSmith

Bug Description

Binary package hint: grub-pc

I installed the Karmic alpha 64-bit daily from July 7th with a simple software RAID-1 configuration (no swap, no separate boot partition).

After the system was up and running (with GRUB Legacy 0.97), I installed grub-pc.

Everything works fine and I can boot, but GRUB shows a 'biosdisk read error' for about one second, which is kind of irritating.

Revision history for this message
Jie Zhang (jzhang918) wrote :

I saw the same on one of my Debian machines. I have md devices on that machine. Maybe it's related.

Revision history for this message
mtx (michaelthomas) wrote :

I have the same problem, too. Note that the grub menu is not offered; rather, the preselected menu entry is booted directly.

My partition tables:

/dev/sda1 1 60770 488134993+ fd Linux RAID autodetect
/dev/sda2 60771 60801 249007+ 5 Extended
/dev/sda5 60771 60801 248976 fd Linux RAID autodetect

/dev/sdb1 1 60770 488134993+ fd Linux RAID autodetect
/dev/sdb2 60771 60801 249007+ 5 Extended
/dev/sdb5 60771 60801 248976 fd Linux RAID autodetect

/dev/sd{a,b}1 holds an LVM, /dev/sd{a,b}5 is /boot.

Grub installation cmd:

$ sudo grub-install /dev/md1
Installation finished. No error reported.
This is the contents of the device map /boot/grub/device.map.
Check if this is correct or not. If any of the lines is incorrect,
fix it and re-run the script `grub-install'.

(hd0) /dev/sda
(hd1) /dev/sdb
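
For what it's worth, a minimal sketch of the alternative used later in this thread: installing GRUB to the MBR of each physical member disk rather than to the md device, so that either disk can boot (device names are the ones from this report; substitute your own):

$ sudo grub-install /dev/sda
$ sudo grub-install /dev/sdb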

Revision history for this message
Steven Harms (sharms) wrote :

Thank you for taking the time to report this bug. Can you try this out on Karmic Alpha 6 and see if it is still an issue?

Changed in grub2 (Ubuntu):
status: New → Incomplete
Revision history for this message
Jie Zhang (jzhang918) wrote :

Can I install debs from Karmic Alpha 6 on Debian unstable?

Revision history for this message
ehinmers (hinmerse) wrote :

I had this issue on the last alpha, and just burned a CD with the newest beta released today and can confirm that this issue is still present in a fresh install of the "Oct 1 beta release". I also have no grub menu as a previous poster mentioned, and also find that it automatically boots the default entry.

My partition tables:

    Device Boot Start End Blocks ID System
/dev/sda1 * 1 30394 244139773+ fd Linux raid autodetect
/dev/sdb1 * 1 30394 244139773+ fd Linux raid autodetect

/dev/md0 is set up as RAID 1 using /dev/sd{a,b}1.
LVM is set up as vol0 on /dev/md0, with /dev/vol0/root as '/' and /dev/vol0/swap as swap.

Revision history for this message
Mark Garland (p-launchpad-markgarland-co-uk) wrote :

I can confirm that this is present in the beta.
I have a RAID 0 setup and installed the AMD64 beta.
Thanks,

Revision history for this message
rduplain (ron.duplain) wrote :

I also found this issue in the Karmic beta. I have software RAID1 set up on AMD64. Fortunately GRUB boots into the default selection, so the system still works overall.

Revision history for this message
Marco Giancotti (ma.giancotti) wrote :

I can confirm this on Ubuntu 9.10 beta. I have a RAID1 array with no separate /boot partition. At boot, the message "Grub loading." comes up and stays for more than 10 seconds, then "biosdisk read error" is displayed for half a second, and finally I reach the menu, which works normally for both Ubuntu and Windows (which is on another, non-RAID disk).

Revision history for this message
Philip Armstrong (phil-ubuntu) wrote :

FWIW, I'm seeing the same error on current Debian unstable: a RAID1 mirror /dev/md0, on top of which is an LVM physical volume containing /, /home and one or two other logical volumes.

I get the grub2 "loading grub" prompt on boot, then nothing for 10 seconds or so, then "biosdisk read error" followed by another pause, then "entering rescue mode" (or something like that), followed by the ordinary grub menu.

Revision history for this message
Hugh Saunders (hughsaunders) wrote :

I'm using the Karmic beta, and I have legacy GRUB chainloading grub2.

On boot I get the legacy grub prompt, I select the chainload grub2 entry, then I get "biosdisk read error", then the system boots without showing the grub2 prompt.

I am using a separate /boot and raid1+lvm for other partitions.
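
For context, a rough sketch of the kind of GRUB Legacy menu.lst entry used for this sort of chainload (the device is a placeholder; with a separate /boot partition the path is /grub/core.img rather than /boot/grub/core.img):

title  Chainload into GRUB 2
root   (hd0,0)
kernel /boot/grub/core.img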

Revision history for this message
Shane O'Connell (shaneoc) wrote :

I installed using Karmic alpha 6 and have since upgraded to the Karmic beta; I'm seeing this as well.

I have /boot on /dev/md0 as RAID1, and the rest of the partitions are on LVM using /dev/md1, which is RAID10.

I see "Loading GRUB." for a few seconds, then I see something like "error: biosdisk read error" for another second, and then Ubuntu starts booting normally.

Revision history for this message
Maximilian Mill (maximilian-mill) wrote :

I installed Karmic alpha 4 (or 5) with /dev/md0 as RAID1 for /boot and /dev/md1 as RAID5. As of the current beta the bug is still there.

I have exactly the same error: "error: biosdisk read error". After a few seconds the boot goes on.

Revision history for this message
lokað (lokad) wrote :

Same here with Karmic beta i386
boot on raid1 (md0)
lvm on raid0 (md1) containing root, home, swap

Several seconds "Grub loading.", then "error: biosdisk read error". A menu is displayed though.

Revision history for this message
Stefan Divjak (stefan-divjak) wrote : apport-collect data

Architecture: amd64
CheckboxSubmission: 92cee80d8613499a88b64f1b95b82815
CheckboxSystem: 292fc99fa0d2d154d3240e8e0e7866bc
DistroRelease: Ubuntu 9.10
Package: grub2 1.97~beta3-1ubuntu8
PackageArchitecture: amd64
ProcEnviron:
 SHELL=/bin/bash
 LANG=de_AT.UTF-8
ProcVersionSignature: Ubuntu 2.6.31-13.43-generic
Uname: Linux 2.6.31-13-generic x86_64
UserGroups: adm admin cdrom cyberjack dialout lpadmin plugdev sambashare users vboxusers

Revision history for this message
Stefan Divjak (stefan-divjak) wrote : Dependencies.txt
Revision history for this message
Stefan Divjak (stefan-divjak) wrote : XsessionErrors.txt
Changed in grub2 (Ubuntu):
status: Incomplete → New
tags: added: apport-collected
Revision history for this message
Stefan Divjak (stefan-divjak) wrote : Re: karmic alpha: grub2 shows 'biosdisk read error', then boots

I can confirm this with the most recent version of grub2. Having three RAID 1 partitions (on sda and sdb), I briefly get a "biosdisk read error" for about half a second, then everything continues without problems.

Changed in grub2 (Ubuntu):
status: New → Confirmed
summary: - karmic alpha: grub2 shows 'biosdisk read error', then boots
+ grub2 shows 'biosdisk read error', then boots
Revision history for this message
Paul McEnery (pmcenery) wrote :

I can also confirm this behaviour. I updated from Jaunty to Karmic (I assume beta at this point), and I have the following configuration:

$ sudo fdisk -l
===============================================================================
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux RAID autodetect
/dev/sda2 14 121601 976655610 fd Linux RAID autodetect

===============================================================================

$ sudo mdadm --detail /dev/md0
===============================================================================
/dev/md0:
        Version : 00.90
  Creation Time : Mon Apr 23 00:17:47 2007
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Oct 22 08:12:49 2009
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 74ad3d60:5074597c:0324c307:5941c2e9
         Events : 0.7296

    Number Major Minor RaidDevice State
       0 0 0 0 removed
       1 8 1 1 active sync /dev/sda1
===============================================================================

$ sudo mdadm --detail /dev/md1
===============================================================================
/dev/md1:
        Version : 00.90
  Creation Time : Mon Apr 23 00:18:02 2007
     Raid Level : raid1
     Array Size : 976655488 (931.41 GiB 1000.10 GB)
  Used Dev Size : 976655488 (931.41 GiB 1000.10 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Oct 22 08:32:24 2009
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 58a33554:66ab860b:f095819a:ef47ab1e
         Events : 0.108323780

    Number Major Minor RaidDevice State
       0 0 0 0 removed
       1 8 2 1 active sync /dev/sda2
===============================================================================

$ sudo pvs
===============================================================================
  PV VG Fmt Attr PSize PFree
  /dev/md1 rootvg lvm2 a- 931.41G 772.00M
===============================================================================

$ sudo vgs
===============================================================================
  VG #PV #LV #SN Attr VSize VFree
  rootvg 1 4 0 wz--n- 931.41G 772.00M
===============================================================================

$ sudo lvs
===============================================================================
  LV VG Attr LSize Origin Snap% Move Log Copy% Convert
  mythtv rootvg -wi-ao...


Revision history for this message
Haute Subzero (sub0hero) wrote :

I can confirm this on my Jaunty to Karmic upgrade as well. I did a network upgrade of a clean Jaunty base (ssh only) installation. I'm getting the same error booting from a valid mirror set via the chainload option recommended here:

http://www.ubuntu-inside.me/2009/06/howto-upgrade-to-grub2-on-ubuntu-jaunty.html

I'm also seeing a ", bss=0x0" entry being output before the Starting... message. It's also odd that fdisk is complaining that neither md device has a valid partition table. Related? I'm not sure how you get to that state, or whether it's related to me using ReiserFS on that system, but things boot OK.

root@alphamail:~# fdisk -l

Disk /dev/sda: 40.0 GB, 40020664320 bytes
255 heads, 63 sectors/track, 4865 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000bae9f

   Device Boot Start End Blocks Id System
/dev/sda1 * 1 4741 38082051 fd Linux raid autodetect
/dev/sda2 4742 4865 996030 fd Linux raid autodetect

Disk /dev/sdb: 40.0 GB, 40020664320 bytes
255 heads, 63 sectors/track, 4865 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000954d1

   Device Boot Start End Blocks Id System
/dev/sdb1 * 1 4741 38082051 fd Linux raid autodetect
/dev/sdb2 4742 4865 996030 fd Linux raid autodetect

Disk /dev/md1: 1019 MB, 1019805696 bytes
2 heads, 4 sectors/track, 248976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 39.0 GB, 38995951616 bytes
2 heads, 4 sectors/track, 9520496 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Revision history for this message
Al Grabauskas (agrabauskas) wrote :

I can confirm this on a clean install of Karmic stable. It was also present in an upgrade from Jaunty on the same machine.

My situation is two partitions per drive (4 drives): the first partitions are /boot and are RAID1, and the other partitions form a RAID5 LVM volume group.

It's not a problem; the system boots just fine after a delay waiting for that message.

Some data on the disk layout; /dev/md0 is the /boot RAID1:

------------------------------
### BEGIN /etc/grub.d/10_linux ###
menuentry "Ubuntu, Linux 2.6.31-14-generic" {
        recordfail=1
        if [ -n ${have_grubenv} ]; then save_env recordfail; fi
        set quiet=1
        insmod raid
        insmod mdraid
        insmod ext2
        set root=(md0)
        search --no-floppy --fs-uuid --set 41a623ac-fd6d-428d-abc9-472bd6caf9dd
        linux /vmlinuz-2.6.31-14-generic root=/dev/mapper/vger2-sys ro quiet splash
        initrd /initrd.img-2.6.31-14-generic
}
------------------------------

root@vger:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1] [linear] [multipath] [raid0] [raid10]
md1 : active raid5 sdd2[3] sdc2[1] sda2[2] sdb2[0]
      1464260160 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sdb1[0] sdd1[3] sdc1[1] sda1[2]
      297088 blocks [4/4] [UUUU]

unused devices: <none>
root@vger:~# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xda56cdd4

   Device Boot Start End Blocks Id System
/dev/sda1 * 1 37 297171 fd Linux raid autodetect
/dev/sda2 38 60801 488086830 fd Linux raid autodetect

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000bad08

   Device Boot Start End Blocks Id System
/dev/sdb1 * 1 37 297171 fd Linux raid autodetect
/dev/sdb2 38 60801 488086830 fd Linux raid autodetect

Disk /dev/sdc: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xa5357bed

   Device Boot Start End Blocks Id System
/dev/sdc1 * 1 37 297171 fd Linux raid autodetect
/dev/sdc2 38 60801 488086830 fd Linux raid autodetect

Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot Start End Blocks Id System
/dev/s...


Revision history for this message
kanub (gwd0fqy02) wrote :

Same issue here. "Grub loading." stays for a while, then I get "error: biosdisk read error".
Karmic Koala stable x64 alternate installation: mirror RAID for /boot and an encrypted mirror RAID for / (root).

> sudo fdisk -l

Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0002d4ad

   Device Boot Start End Blocks Id System
/dev/sda1 * 1 134 1076323+ fd Linux raid autodetect
/dev/sda2 135 30401 243119677+ fd Linux raid autodetect

Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00025c92

   Device Boot Start End Blocks Id System
/dev/sdb1 * 1 134 1076323+ fd Linux raid autodetect
/dev/sdb2 135 30401 243119677+ fd Linux raid autodetect

Disk /dev/md0: 1102 MB, 1102053376 bytes
2 heads, 4 sectors/track, 269056 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 249.0 GB, 248954421248 bytes
2 heads, 4 sectors/track, 60779888 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x08040000

Disk /dev/md1 doesn't contain a valid partition table

> sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Fri Oct 30 15:40:33 2009
     Raid Level : raid1
     Array Size : 1076224 (1051.18 MiB 1102.05 MB)
  Used Dev Size : 1076224 (1051.18 MiB 1102.05 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Nov 10 10:56:31 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 47c9a647:473c1802:9515e672:ee318adb
         Events : 0.229

    Number Major Minor RaidDevice State
       0 8 1 0 active sync /dev/sda1
       1 8 17 1 active sync /dev/sdb1

> sudo mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Fri Oct 30 15:40:47 2009
     Raid Level : raid1
     Array Size : 243119552 (231.86 GiB 248.95 GB)
  Used Dev Size : 243119552 (231.86 GiB 248.95 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Nov 10 11:10:50 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 026cf484:89dec43d:3b5e59a4:166e8441
         Events : 0.169383

    Number Major Minor RaidDevice State
       0 8 2 0 active sync /dev/sda2
       1 8 18 1 active sync /dev/sdb2

> cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[0] sdb2[1]
      243119552 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      1076224 blocks [2/2] [UU]

unused devices: <none>

Revision history for this message
Waldgeist_dI (noway) wrote :

I experience the same thing.

Two 1 TB drives are set up with two partitions each: one for /boot, one for LVM. Inside the logical volume group are separate logical volumes for /, /var, /tmp and swap. Both partitions are mirrored with software RAID (level 1).

The other eight 1 TB drives are hooked up to a Dell PERC6/i RAID controller in a hardware array (level 6) with one hotspare.

Upon booting the boot loader displays "GRUB loading" for some seconds, then (twice):

error: biosdisk read error
error: biosdisk read error.

Seconds later the system continues to boot normally. Is there anything more I can investigate to clear things up? Log messages that might reveal something of importance?

Cheers

> uname -a

Linux ***Name*** 2.6.31-14-server #48-Ubuntu SMP Fri Oct 16 15:07:34 UTC 2009 x86_64 GNU/Linux

> cat /etc/lsb-release

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=9.10
DISTRIB_CODENAME=karmic
DISTRIB_DESCRIPTION="Ubuntu 9.10"

> sudo fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00078612

   Device Boot Start End Blocks Id System
/dev/sda1 * 1 122 979933+ fd Linux raid autodetect
/dev/sda2 123 121601 975780067+ 5 Extended
/dev/sda5 123 121601 975780036 fd Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x42424242

   Device Boot Start End Blocks Id System
/dev/sdb1 * 1 122 979933+ fd Linux raid autodetect
/dev/sdb2 123 121601 975780067+ 5 Extended
/dev/sdb5 123 121601 975780036 fd Linux raid autodetect

Disk /dev/md1: 999.2 GB, 999198687232 bytes
2 heads, 4 sectors/track, 243944992 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 1003 MB, 1003356160 bytes
2 heads, 4 sectors/track, 244960 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

WARNING: The size of this disk is 5.0 TB (4998268190720 bytes).
DOS partition table format can not be used on drives for volumes
larger than (2199023255040 bytes) for 512-byte sectors. Use parted(1) and GUID
partition table format (GPT).

Disk /dev/sdc: 4998.3 GB, 4998268190720 bytes
255 heads, 63 sectors/track, 607671 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot Start End Blocks Id System
/dev/sdc1 1 267350 2147483647+ ee GPT

> sudo mdadm --detail /dev/md0

/dev/md0:
        Version : 00.90
  Creation Time : Fri Nov 20 09:03:34 2009
     Raid Level : raid1
     Array Size : 979840 (957.04 MiB 1003.36 MB)
  Used Dev Size : 979...


Revision history for this message
Waldgeist_dI (noway) wrote :

I might add that the above-mentioned configuration was freshly installed on a brand-new Dell PowerEdge T610, i.e. not an upgrade from an older release.

Koto (kkotowicz)
description: updated
Revision history for this message
Paradigm (paradox) wrote :

Ubuntu Karmic 9.10 official release, same problem; my fdisk output is basically the same as mtx's.

This still exists today, a good month after the release date? Something should be done about this: a surprising number of people have software RAID, and therefore a surprising number of people can't have Ubuntu... Fix boot problems first, then worry about the other bugs; at least those other bugs happen while the system is loaded and running. I just get good old boot errors and a depressing black screen.

Revision history for this message
pepe123 (ferrer123) wrote :

I have the same problem

pepe@servidor:~$ uname -a
Linux servidor 2.6.31-15-server #50-Ubuntu SMP Tue Nov 10 15:50:36 UTC 2009 x86_64 GNU/Linux

/dev/sda1 1 30 240943+ fd Linux raid autodetect
/dev/sda2 31 1003 7815622+ fd Linux raid autodetect
/dev/sda3 1004 13161 97659135 fd Linux raid autodetect
/dev/sda4 13162 60801 382668300 fd Linux raid autodetect

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000503ee

   Device Boot Start End Blocks Id System
/dev/sdb1 * 1 30 240943+ fd Linux raid autodetect
/dev/sdb2 31 1003 7815622+ fd Linux raid autodetect
/dev/sdb3 1004 13161 97659135 fd Linux raid autodetect
/dev/sdb4 13162 60801 382668300 fd Linux raid autodetect

Revision history for this message
jim_charlton (charltn) wrote :

Same problem here. Where can I find out the current status of this problem? Is it a problem only on machines with an AMI BIOS? The error message appears to be coming from /grub-1.97~beta4/disk/i386/pc/biosdisk.c (grub2 source). Is there an incorrect BIOS response to one of the BIOS calls to the disk? Can't this be fixed?

root@charlton7:~# uname -a
Linux charlton7 2.6.31-15-generic #50-Ubuntu SMP Tue Nov 10 14:53:52 UTC 2009 x86_64 GNU/Linux

root@charlton7:~# fdisk -l

Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0009a931

   Device Boot Start End Blocks Id System
/dev/sda1 * 1 31 248976 fd Linux raid autodetect
/dev/sda2 32 38913 312319665 5 Extended
/dev/sda5 32 38913 312319633+ 8e Linux LVM

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xf9ddacfa

   Device Boot Start End Blocks Id System
/dev/sdb1 * 1 31 248976 fd Linux raid autodetect
/dev/sdb2 32 60801 488135025 8e Linux LVM

Disk /dev/md0: 254 MB, 254869504 bytes
2 heads, 4 sectors/track, 62224 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

    Device Boot Start End Blocks Id System

root@charlton7:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Sat Dec 8 10:32:56 2007
     Raid Level : raid1
     Array Size : 248896 (243.10 MiB 254.87 MB)
  Used Dev Size : 248896 (243.10 MiB 254.87 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Nov 30 07:11:58 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : d19bea85:8a348030:345ec57d:580fadc3
         Events : 0.224

    Number Major Minor RaidDevice State
       0 8 1 0 active sync /dev/sda1
       1 8 17 1 active sync /dev/sdb1

root@charlton7:~# lshw (in part)
charlton7
    description: Desktop Computer
    product: System Product Name
    vendor: System manufacturer
    version: System Version
    serial: System Serial Number ...


Revision history for this message
jim_charlton (charltn) wrote :

I took the plunge and compiled grub-1.97~beta4, following the instructions in the INSTALL file (run ./autogen.sh; run ./configure; make; make install). The make install gave me an error, but it was something about installing man pages, and updatedb followed by `locate bin/grub` showed me that the new grub executables had been placed in /usr/local/bin and /usr/local/sbin. Using `which <program-name>` shows that the grub executables in /usr/local will execute in preference to the old grub files in /usr/bin and /usr/sbin. So I ran update-grub, which in turn runs the new grub-mkconfig. I then did a `grub-install /dev/sda` and `grub-install /dev/sdb` (sda and sdb are the two disks in my raid array and I want to be able to boot from either disk). Then I rebooted. I now don't see the "biosdisk read error" message. It says "Grub Loading" and then a line under it, in reverse video, saying "Welcome to Grub". Then a message flashes on the screen so fast I cannot read it... followed by the grub menu selections, from which I get the default boot of my system. YMMV, but this has made for a faster boot on my machine.
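
Condensed into commands, and assuming you are inside the unpacked grub-1.97~beta4 source tree with sda and sdb as the RAID member disks, the sequence above is roughly:

$ ./autogen.sh
$ ./configure
$ make
$ sudo make install
$ sudo update-grub
$ sudo grub-install /dev/sda
$ sudo grub-install /dev/sdb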

Revision history for this message
jim_charlton (charltn) wrote :

Actually... the time from the first appearance of the "Grub loading..." line until it starts to boot is still about 15 seconds. It seems faster but that is still pretty slow. Why is it so slow? Is this normal? Seems to me it was faster with Grub Legacy.

Revision history for this message
Michael Kofler (michael-kofler) wrote :

GRUB 2 is indeed extremely slow in some setups (including my two main machines, both having two disks and showing the GRUB 'bios disk error' message at startup). On both machines it takes between 10 and 25 seconds to get past the GRUB menu. If Ubuntu 10.04 wants to start in 10 seconds, I guess making GRUB 2 faster should be one main area of work ...

Btw, GRUB 2 is faster in simpler setups: My VirtualBox and KVM machines of Ubuntu 9.10 with only one small (virtual) disk and no LVM/RAID configuration show the GRUB menu almost instantly.

Revision history for this message
jim_charlton (charltn) wrote :

I tried installing the grub-pc_1.97+20091125-2_amd64.deb and grub-common_1.97+20091125-2_amd64.deb packages to see if that would help but the results are the same. It takes ca. 9 seconds after the Grub loading... message before the grub menu appears. And then there is a 5 second countdown to boot the default entry.

I agree that the delay in Grub2 has something to do with the raid setup. In Grub Legacy I booted hd0,0 (hd0,1 in Grub2 language) rather than doing insmod raid, insmod mdraid, insmod ext2 and then setting root to (md0) before booting. I know that it is more elegant to boot a raid system in a way that the remaining good disk boots if one fails. But I personally don't mind just resetting the BIOS to boot off of the remaining functional disk if one of them fails.

How can I convince Grub2 to point the MBR to only one of the disks in the raid (hd0,1 for example)? And would that possibly speed things up? Or am I out to lunch here (very possible)? Does the code in the MBR actually load the modules and boot (md0)? Or does the MBR on the disk selected by the BIOS for boot just jump to /boot on the same disk, with the modules getting loaded after that? I have to admit to being somewhat confused on these points.

Revision history for this message
Stephen Cuka (smc003) wrote :

I get the same "biosdisk read error" displayed briefly before my grub menu appears.

On my system, the error occurs right after grub does a seek to the floppy drive, after grub loads but before the menu is displayed. So thinking the seek and the error were related, I decided to test that theory.

I found that if I have a formatted floppy in the floppy drive, I don't get the error.

That leads me to believe that there's a problem with the processing of the search command in grub.cfg. My version of the command is:

search --no-floppy --fs-uuid --set 387a3584-1f7f-470e-bb9b-89346e2edf22

I couldn't find any documentation on the search command in the grub wiki, but if the purpose of the search command is to identify all the BIOS devices and RAID devices that later grub.cfg commands use to boot OSes, shouldn't "--no-floppy" prevent grub from looking at the floppy drive at all?

The error is more of an annoyance than a problem, as my system also continues on to boot from my RAID1, but I'd be curious to know whether others who are getting the error can work around it by putting a floppy in the floppy drive...

My grub version is: GNU GRUB 1.97~beta4
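
As a sanity check, the UUID that search is given can be compared against the actual filesystems from the running system (the UUID below is the one from my grub.cfg; substitute your own):

$ grep 'search ' /boot/grub/grub.cfg
$ sudo blkid | grep 387a3584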

Revision history for this message
Heinz Werner Kramski (kramski) wrote :

I can confirm Stephen's findings: with a floppy in the floppy drive, or with the floppy disabled in the BIOS, grub-pc loads fine.

Thanks for this workaround.

Regards
   Heinz

Revision history for this message
janny (janaum72) wrote :

I had the same problem on Debian testing: grub2 version 1.97~beta3-1, md and LVM configuration.
Building grub2 from the bazaar repository didn't help.
The delay occurs in biosdisk.mod, somewhere in grub_biosdisk_rw, while accessing disks, especially the floppy.

The simple solution for me was to completely disable the floppy in the BIOS.
(In my case I didn't even have a floppy connected to the controller, but it was enabled.)

Now there is no delay when grub2 starts.

Thanx Stephen

Janny

Revision history for this message
Felix Zielcke (fzielcke) wrote : Re: [Bug 396564] Re: grub2 shows 'biosdisk read error', then boots

On Saturday, 05.12.2009 at 18:56 +0000, Stephen Cuka wrote:
> I couldn't find any documentation on the search command in the grub
> wiki, but if the purpose of the search command is to identify all the
> bios devices and raid devices that later grub.cfg commands use to boot
> OS's, shouldn't "--no-floppy" prevent grub from looking at the floppy
> drive at all?

It only prevents use of the floppy device inside the search command,
which should be obvious.
Did you install GRUB 2 to the MBR of a different disk than the one
/boot/grub is on?
Check if `echo $prefix' or just `set' in GRUB's commandline shows prefix
as a normal GRUB device (hdx,y)/boot/grub or as
(UUID=abc123...)/boot/grub
If it's UUID= it can happen that it still accesses the floppy device.

--
Felix Zielcke
Proud Debian Maintainer and GNU GRUB developer
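
Concretely, the GRUB 2 command line can be reached by pressing 'c' at the menu, and the check is simply:

grub> echo $prefix
grub> set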

Revision history for this message
jim_charlton (charltn) wrote :

I don't mean to hijack the "biosdisk read error" thread... but when I follow the advice of message 34 above and check my grub "prefix" I find it is set to (md0)/grub! Shouldn't it be (md0)/boot/grub or simply (md0)/boot? How do I change it in the grub configuration? I will reinstall grub and see if it stays the same.

Revision history for this message
jim_charlton (charltn) wrote :

update-grub
grub-install /dev/sda
grub-install /dev/sdb
and then reboot. The grub prefix (set command from the grub shell on booting) is still (md0)/grub!
Hmmmm?? /etc/grub.d/00_header has

..
transform="s,x,x,"
..
..
..
..
grub_prefix=`echo /boot/grub | sed ${transform}`
..

Is that where the grub "prefix" gets set?

Revision history for this message
Felix Zielcke (fzielcke) wrote :

On Monday, 07.12.2009 at 21:55 +0000, jim_charlton wrote:
> update-grub
> grub-install /dev/sda
> grub-install /dev/sdb
> and then reboot. The grub prefix (set command from the grub shell on
> booting) is still (md0)/grub!
> Hmmmm?? /etc/grub.d/00_header has
>
> ..
> transform="s,x,x,"
> ..
> ..
> ..
> ..
> grub_prefix=`echo /boot/grub | sed ${transform}`
> ..

If your /boot is a separate RAID array it has to be /grub.
GRUB doesn't really have the Linux/Unix concept where you have only
one / and all other partitions are mounted under it.

> Is that where the grub "prefix" gets set?

It gets set in the $grub_mkimage line with the --prefix option.

--
Felix Zielcke
Proud Debian Maintainer and GNU GRUB developer
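
For the curious, a purely hypothetical sketch of such a grub-mkimage invocation for the setup discussed here (option names as in grub-mkimage of that era; the module list is illustrative only):

grub-mkimage --output=/boot/grub/core.img --prefix='(md0)/grub' biosdisk raid mdraid ext2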

Revision history for this message
Stephen Cuka (smc003) wrote :

Felix,

Here are answers to the questions that you asked...

>Did you install GRUB 2 to a MBR of a different disk then /boot/grub is
>on?

No, I have a RAID1 boot configuration with GRUB 2 installed to the MBR of both drives so that I can boot from either one.

>Check if `echo $prefix' or just `set' in GRUB's commandline shows prefix
>as a normal GRUB device (hdx,y)/boot/grub or as
>(UUID=abc123...)/boot/grub
>If it's UUID= it can happen that it still accesses the floppy device.

When I check $prefix, it's set to "(md0)/boot/grub".

FWIW, I put about half a dozen "echo" commands at the top of my grub.cfg file to determine whether the error was a result of something in grub.cfg or not. I found that the error occurred before the "echo" commands were processed, so the problem doesn't seem to be related to the contents of grub.cfg. (Which doesn't necessarily rule out the possibility that it's related to finding grub.cfg in the first place...)

It makes me wonder whether this error isn't BIOS specific. The machine that I get the error on has an AMI BIOS. I did a clean Karmic install on two different machines with Award BIOSes and didn't see the error on either of them...
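
A sketch of the kind of temporary debugging lines described above, placed at the very top of /boot/grub/grub.cfg (the next run of update-grub overwrites them):

echo "top of grub.cfg reached"
echo "if this prints, grub.cfg was found and is being parsed"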

Revision history for this message
jim_charlton (charltn) wrote :

Stephen: How are you mounting your raid array? Is it mounted as "/" or as "/boot"? In my case, I mount md0 as /boot and only /boot is in the raid array. My "/" is mounted from an LVM volume. If yours is similar, then Felix's response (message 37) will also apply to you. If md0 is mounted as /boot, then there is no "boot" directory on md0 and your grub prefix should be (md0)/grub. But if md0 is mounted as "/", then I assume the grub prefix should be (md0)/boot/grub, as you have stated. At least that is the way I have interpreted Felix's response.

Revision history for this message
Brett Howard (brett-livecomputers) wrote :

Just to add my 2 cents... I was seeing this error while having problems booting after I'd intentionally degraded an array to replace a disk in it, and it was this error that brought me here.

After reading through this I tried the suggestion to disable the floppy drive, which was enabled in the BIOS but not connected to the motherboard, and now I no longer get this error.

For those who are interested, the reason I wasn't booting was that my motherboard (Asus P5K Pro) doesn't have the primary master / primary slave on SATA ports 1 and 2; they are on 1 and 3. So /dev/sdc was actually the secondary master drive as one would assume, but the secondary master drive is plugged into SATA port 2. Thus I get to restore my array onto that drive (which will take about 3 days) and then I can start intentionally degrading the proper drive. Ugh!

Anyway, thanks to those here for telling me how to remove this error (which really wasn't a bug at all).

Revision history for this message
Paolo Donadeo (paolo.donadeo) wrote :

I can also confirm Stephen's workaround: disabling the floppy drive (not actually present in my machine) in the BIOS solves the problem.

Thanks.

Revision history for this message
Azamat S. Kalimoulline (turtle-bazon) wrote :

I can confirm Stephen's workaround too.

Revision history for this message
bbqau (bbq) wrote :

I don't know if this is a solution for those of you who run RAID; I just run a single hard drive, and for me it appears to have worked.

I disabled my non-existent floppy drive in the BIOS as per Stephen's workaround, and I went a few steps further and adjusted my BIOS settings.

I disabled all options to utilise RAID and made sure my HDD was connected to SATA 1 (I have 6 HDD ports; consult your motherboard manual or get a magnifying glass out to read it on the mobo), and I no longer have the problem.

I hope the solution is this simple for users like me. I will report back on the stability of this setup and let people know if it is sustainable.

Revision history for this message
Per Kongstad (p-kongstad) wrote :

Hi,

I had the same issue.

It was resolved by disabling the FDD in the BIOS.

Having no physical FDD connected, but the FDD enabled in the BIOS, seems to generate the error.

Running software RAID 1.

Revision history for this message
FriDisch (dumb-kane) wrote :

@Per Kongstad:

That's not correct: I actually have a physical FDD connected, but I get the 'biosdisk read error' anyway.

Revision history for this message
Stephen Cuka (smc003) wrote :

Hi FriDisch,

If you have a physical FDD connected, as a workaround, try having a formatted floppy in the drive when you boot.

Revision history for this message
Waldgeist_dI (noway) wrote :

My Dell PowerEdge T610 has no floppy option in BIOS whatsoever. I still get the biosdisk read error.

Revision history for this message
Klement Sekera (klement-sekera) wrote :

For me, disabling the FDD in the BIOS actually makes the situation much, much worse.
Instead of a 3-5 second wait, I get maybe 30 or more seconds before the message flashes and the boot continues.

Revision history for this message
dragonfly (streams0dragonflies) wrote :

I have the same issue as comment #17. I have my copy of Karmic Ubuntu Studio installed on software RAID with GRUB 2 in the MBR of the first SATA drive, and I have /boot as a separate non-RAID partition on the same drive. I have not flashed my BIOS yet, since it is so recent and the new versions don't seem to have anything useful for me; I wanted to confirm that it is not a hardware issue, as this was a newly built system. I do have other minor hardware issues so far: one is that I had to remove the IDE CD drive as the first boot device, since I could not complete POST without it hanging on an IDE read and rebooting, until I switched to the SATA drive as the first boot device. I don't know if the message has anything to do with this. I am pretty sure that I do not have the FDD enabled in my BIOS.

Revision history for this message
Marcus Tomlinson (marcustomlinson) wrote :

This release of Ubuntu is no longer receiving maintenance updates. If this is still an issue on a maintained version of Ubuntu please let us know.

Changed in grub2 (Ubuntu):
status: Confirmed → Incomplete
Revision history for this message
Marcus Tomlinson (marcustomlinson) wrote :

This issue has sat incomplete for more than 60 days now, so I'm going to close it as invalid. Please feel free to re-open it if this is still an issue for you. Thank you.

Changed in grub2 (Ubuntu):
status: Incomplete → Invalid
