RAID not implemented (use alternate CD instead)

Bug #44609 reported by Dominik Kubla
This bug affects 26 people
Affects (Status / Importance / Assigned to):
mdadm (Ubuntu): Invalid / Wishlist / Unassigned
mdadm (Ubuntu) Quantal: Invalid / Wishlist / Unassigned
ubiquity (Baltix): New / Undecided / Unassigned
ubiquity (Ubuntu): Invalid / Wishlist / Unassigned
ubiquity (Ubuntu) Quantal: Won't Fix / Wishlist / Unassigned

Bug Description

When trying to install from the Flight 7 Live CD (both Ubuntu and Kubuntu) on a system already running Debian Linux with mirrored partitions, the installer does not recognise the meta devices and insists on working with the underlying disk partitions.

The kernel of the Live CD recognises and starts the meta devices just fine.

I would prefer the installer to offer the use of the meta devices, since using the underlying partitions makes it tricky to encapsulate the partitions later in order to get the mirror structure back. Since Ubuntu aims for the corporate environment, it is imperative to support mirrored setups, even for the desktop! Many organisations, especially in the financial sector, prefer it that way.

If you decide to fix this bug, please consider adjusting the GRUB setup as well, to allow automatic booting from the mirror disk in case the root disk fails.

Attached you will find the disk layout as used on the system (taken from the installed Debian Sid). Additional information is available upon request.

# /sbin/sfdisk -l /dev/hda

Disk /dev/hda: 158816 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start End #cyls #blocks Id System
/dev/hda1 * 0+ 31001 31002- 15624976+ fd Linux raid autodetect
/dev/hda2 31002 38751 7750 3906000 fd Linux raid autodetect
/dev/hda3 38752 79441 40690 20507760 fd Linux raid autodetect
/dev/hda4 79442 158815 79374 40004496 fd Linux raid autodetect

# /sbin/sfdisk -l /dev/hdc

Disk /dev/hdc: 158816 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start End #cyls #blocks Id System
/dev/hdc1 * 0+ 31001 31002- 15624976+ fd Linux raid autodetect
/dev/hdc2 31002 38751 7750 3906000 fd Linux raid autodetect
/dev/hdc3 38752 79441 40690 20507760 fd Linux raid autodetect
/dev/hdc4 79442 158815 79374 40004496 fd Linux raid autodetect

# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 hda4[0] hdc4[1]
      40004416 blocks [2/2] [UU]

md2 : active raid1 hda3[0] hdc3[1]
      19591168 blocks [2/2] [UU]

md1 : active raid1 hda2[0] hdc2[1]
      3903680 blocks [2/2] [UU]

md0 : active raid1 hda1[0] hdc1[1]
      15623104 blocks [2/2] [UU]

unused devices: <none>

# swapon -s
Filename Type Size Used Priority
/dev/md1 partition 3903672 0 -1

# mount
/dev/md0 on / type ext3 (rw,errors=remount-ro)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
usbfs on /proc/bus/usb type usbfs (rw)
tmpfs on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md2 on /export/home type ext3 (rw)
/dev/md3 on /export/scratch type ext3 (rw)
tmpfs on /tmp type tmpfs (rw,mode=1777,size=2G)
tmpfs on /dev type tmpfs (rw,size=10M,mode=0755)
automount(pid6343) on /net type autofs (rw,fd=4,pgrp=6343,minproto=2,maxproto=4)
automount(pid6351) on /home type autofs (rw,fd=4,pgrp=6351,minproto=2,maxproto=4)
nfsd on /proc/fs/nfsd type nfsd (rw)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/export/home/dominik on /home/dominik type none (rw,bind)

Tags: patch
Revision history for this message
Colin Watson (cjwatson) wrote :

We decided, I'm afraid, that we did not have time to support RAID in Ubiquity's partitioner in Dapper. Looking at the current pile of bug reports I have to deal with, I still think this was the correct decision. Folks who need to install Dapper on RAID should use the text-mode install CD.

Revision history for this message
Dominik Kubla (dbkubla) wrote :

That's fine by me. I wasn't aware that this functionality was available in the text-mode install CD.

Colin Watson (cjwatson)
Changed in ubiquity:
status: Unconfirmed → Confirmed
Revision history for this message
TJ (tj) wrote :

This also/still affects Hardy, but there is a workaround.

First, once the Live CD has started, open a terminal and run:

sudo su
sed -i "s,^\( *grep -v '^/dev/md' |\),#\1," /lib/partman/init.d/30parted

This will allow Ubiquity's partman to report /dev/md* devices by commenting out the script statement that ignores them.

Next, manually install a file-system to the md device. E.g:

sudo mkfs.ext3 -L boot /dev/md0

Now when Ubiquity's partman runs in manual mode you'll see something like this in the partition list:

/dev/md0
  /dev/md0 ext3 256 MB 14 MB

Select the indented /dev/md0, press the "Edit partition" button, set the file-system type to ext3, tick the Format check-box, and set the mount-point.

Ubiquity/partman will now successfully format and install to /dev/md0 when the install is started.

Revision history for this message
Mantas Kriaučiūnas (mantas) wrote : RAID support can be implemented in Ubiquity with simple patch

RAID support can be implemented in Ubiquity with simple 3-part patch - I just installed Ubuntu 8.04 LiveCD based Baltix distro into software RAID devices:

1. /lib/partman/init.d/30parted should report /dev/md* devices to Ubiquity - the line "grep -v '^/dev/md' | " should be removed from the 30parted file. I've commented out this line with:
sed -i "s,\( *grep -v '^/dev/md' |\),#\1," /lib/partman/init.d/30parted

2. Software RAID volumes (devices) should be visible; ideally this should be implemented in the manual partitioning step of Ubiquity. For now I've executed these commands:
mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 /dev/sda2 /dev/sdb2

3. The /target system should have the mdadm package installed before initrd and grub-install are executed in /target. I suggest including the mdadm package in the CD's pool folder and patching Ubiquity to automatically install mdadm into /target if the user decides to install Linux onto an md device (/dev/md* assigned to one or more mount points).
If the mdadm package isn't in /target when Ubiquity executes grub-install, Ubiquity aborts at 94% with a fatal error:
grub-installer: info: Running chroot /target grub-install --no-floppy "(hd0)"
grub-installer: /usr/sbin/grub-install: 446: mdadm: not found
grub-installer: : mdadm -D /dev/md0 failed
grub-installer: Searching for GRUB installation directory
 ... found: /boot/grub
grub-installer: The file /boot/grub/stage1 not read correctly.
grub-installer: error: Running 'grub-install --no-floppy "(hd0)"' failed.

Currently I solved this problem simply by executing this command after Ubiquity installs language packs (at ~85%):
sudo chroot /target/ apt-get install mdadm
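
The three numbered steps above amount to the following script (a sketch only, assuming a Hardy-era "Try Ubuntu" session; the device names and the timing of step 3 are taken from the comment, and the functions are only meant to be run by hand):

```shell
#!/bin/sh
# Sketch of the three-part workaround; defining these functions is harmless,
# they are meant to be invoked manually from a live session.

# Step 1: let partman report /dev/md* devices by commenting out the filter.
expose_md_devices() {
  sed -i "s,\( *grep -v '^/dev/md' |\),#\1," /lib/partman/init.d/30parted
}

# Step 2: create the mirrors (run once, before starting Ubiquity);
# the member partitions here are examples.
create_mirrors() {
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
}

# Step 3: once Ubiquity has populated /target (around the language-pack
# stage, ~85%), install mdadm there so grub-install can find it.
install_mdadm_in_target() {
  chroot /target apt-get install -y mdadm
}
```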

Revision history for this message
Ace Suares (acesuares) wrote : Re: [Bug 44609] RAID support can be implemented in Ubiquity with simple patch

Very cool ! Kudos to you :-)

ace

Revision history for this message
Mantas Kriaučiūnas (mantas) wrote :

It seems ubiquity from Ubuntu Hardy 8.04.1 crashes when using a RAID partition for swap :(
Ubiquity tries to automatically use the RAID components (/dev/sda3 and /dev/sdb3) for swap, so in the manual partitioning step you should manually click on *each* swap partition which is included in the RAID and choose "do not use partition".

Revision history for this message
Tormod Volden (tormodvolden) wrote :

> you should manually click on *each* swap partition, which is included in RAID and choose "do not use partition".

Testing Jaunty Alpha-6: If you select "do not use partition", the label changes from "swap" to "linux-swap" and the summary afterwards says it is going to make changes to the partitions of /dev/sda and /dev/sdb.

Revision history for this message
Tormod Volden (tormodvolden) wrote :

My comment above is now obsolete; you just have to tell ubiquity that you are using dmraid (there will be a UI for this later) by running these commands beforehand:
 sudo mkdir /var/lib/disk-detect
 sudo touch /var/lib/disk-detect/activate_dmraid

Then ubiquity will only show the raids, and no raw devices. Nice!

Revision history for this message
Saivann Carignan (oxmosys) wrote :

Colin Watson and Tormod Volden :

This can be enabled in karmic since we now have GRUB 2, which supports booting directly from RAID.
Making it possible to install Ubuntu on RAID now requires very few changes:

1. Install mdadm by default into the livecd environment. (676 Kb)
2. Patch ubiquity to run these commands in the /target chroot AFTER the RAID has been partitioned and Ubuntu has been installed, to make / accessible from the initramfs by updating the target's mdadm.conf:
chroot /target /usr/share/mdadm/mkconf > /target/etc/mdadm/mdadm.conf
chroot /target update-initramfs -u

With these two small changes, a user who knows how to set up partitions and RAID from the command line can install Ubuntu onto a RAID device from ubiquity, using manual partitioning. I tried it and it works perfectly. It is also the first step toward making RAID accessible from the LiveCD. And since GRUB 2, it is no longer necessary for partman to make sure that /boot is installed on a different partition.

Left to implement later:
Ubiquity managing and offering RAID during installation (it looks like Palimpsest is already starting to offer RAID-related functionality, which is limited at the moment).

Revision history for this message
Ace Suares (acesuares) wrote : Re: [Bug 44609] Re: RAID not implemented (use alternate CD instead)

vote +1

Revision history for this message
Saivann Carignan (oxmosys) wrote :

Colin Watson or Tormod Volden: Here is a debdiff that adds a target-config script to the mdadm package, which prevents the system from halting in busybox at reboot after installing to a RAID device with ubiquity. This script does what I said earlier (updates mdadm.conf on the target and runs update-initramfs if it detects the presence of RAID devices). Unless Ubuntu modifies the mdadm initramfs hook script to auto-assemble RAID devices, this script gracefully handles the current static mdadm.conf configuration.

This script certainly needs to be improved to look at something more relevant than the availability of /dev/md* devices, but it would certainly be very easy to polish before the karmic final release (four lines of code). And it would let users benefit from this great feature without hitting this bad bug. Real RAID support on the default Ubuntu live ISO could be considered later.

I tested my debdiff: it builds and installs correctly, and it works as designed when mdadm is pre-installed in the Ubuntu ISO (ubiquity installs / on a RAID device, and Ubuntu doesn't hang in busybox at first boot).

Revision history for this message
Tormod Volden (tormodvolden) wrote :

Saïvann, I haven't done a fresh installation with dmraid for a long time, but I hope to get to trying it next week. I can test your patch then. But why is mdadm needed to boot the dmraid system?

Revision history for this message
Saivann Carignan (oxmosys) wrote :

Tormod Volden: I'm not an expert, but I think that mdadm is necessary for Linux software RAID (not fakeRAID). I can't confirm whether Ubuntu can currently be installed on a fakeRAID controller from the LiveCD, but it is not possible at the moment with Linux software RAID. The alternate install CD offers the creation of software RAID devices, and it installs mdadm on the target system. mdadm seems to be necessary as it monitors RAID arrays and assembles them based on config files. I looked quickly at dmraid and it didn't seem to support Linux software RAID, so I guess that both mdadm and dmraid are necessary.

Also, as said earlier, if mdadm were configured to auto-detect and assemble RAID arrays in its initramfs hook script (mdadm --assemble --scan), my fix would not be necessary. If this solution makes sense, it would be far more flexible than my bugfix; however, there might be reasons why mdadm is not already configured that way, which needs to be confirmed.

Revision history for this message
Tormod Volden (tormodvolden) wrote :

Sorry, I didn't read the whole report, only the bits with my name on them, and thought for some reason this was about dmraid. You're likely right that mdadm is needed to boot from software RAID. I have no software RAID, so I cannot test this.

Revision history for this message
Saivann Carignan (oxmosys) wrote :

Basically, software RAID does not require special hardware, so this should be easily reproducible on any computer. All that is needed is a LiveCD with the fixed mdadm pre-installed, at least two partitions with the RAID flag, and a command like this one to create the RAID device: "mdadm --create /dev/md0 --raid-devices=2 --level=0 /dev/sda1 /dev/sda2". After that it can be formatted (mkfs.ext4 /dev/md0) and appears in ubiquity's manual partitioning. As long as /etc/mdadm/mdadm.conf is up to date, RAID devices are automatically assembled in the initramfs at each boot, but they can also be assembled manually (mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1).
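
The reproduction recipe above, collected into one place (a sketch only; the device names and the ext4 choice are the ones used in the comment, and the function is only meant to be run by hand from a live session with the fixed mdadm pre-installed):

```shell
#!/bin/sh
# Sketch of the reproduction steps described above. Defining the function
# is harmless; the device names are examples from the comment.
reproduce_raid_install() {
  # Create a two-member RAID 0 from partitions carrying the RAID flag.
  mdadm --create /dev/md0 --raid-devices=2 --level=0 /dev/sda1 /dev/sda2
  # Format it so that it shows up in ubiquity's manual partitioning.
  mkfs.ext4 /dev/md0
  # On later boots the array can also be assembled by hand, e.g.:
  # mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
}
```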

Mathias Gug (mathiaz)
Changed in mdadm (Ubuntu):
importance: Undecided → Wishlist
status: New → Confirmed
ceg (ceg)
tags: added: patch
Revision history for this message
Sebastien Bacher (seb128) wrote :

Is this still an issue? Could somebody review the change?

Revision history for this message
Michael Vogt (mvo) wrote :

Thanks for the bug report and the debdiff. AFAICS (please note that I'm not an mdadm expert) the patch generates an mdadm config on the target filesystem. This should actually no longer be needed in order to get a booting system.

From the mdadm "hook" script for initramfs (in natty):

CONFIG=/etc/mdadm/mdadm.conf
...
if [ ! -f $CONFIG ]; then
        # there is no configuration file, so let's create one
        if /usr/share/mdadm/mkconf generate $CONFIG; then
                # all is well
                cp -p $CONFIG $DESTMDADMCONF
                info "auto-generated the mdadm.conf configuration file."
        else
                # we failed to auto-generate, so let the emergency procedure take over
                warn "failed to auto-generate the mdadm.conf file."
                warn "please read /usr/share/doc/mdadm/README.upgrading-2.5.3.gz ."
        fi
...

I'm unsubscribing ubuntu-sponsors for now; please re-subscribe and update the debdiff to the latest version if it turns out that this needs more than what mdadm is doing now.

Changed in mdadm (Ubuntu):
status: Confirmed → Incomplete
Revision history for this message
Michael Vogt (mvo) wrote :

Just for reference, this mdadm change was part of:

mdadm (2.6.7.1-1ubuntu16) maverick; urgency=low

  * debian/initramfs/hook: Added following code (invoked on update-initramfs)
    (LP: #617725):
    - create a mdadm.conf if it is not found in /etc and copy it in initramfs
    - update an existing mdadm.conf in the initramfs if it doesn't include
      a definition of any array
    - warn the user if the definition of an active array is not found in the
      initramfs/etc/mdadm.conf

 -- Surbhi Palande <email address hidden> Mon, 13 Sep 2010 18:59:03 +0300

Revision history for this message
Matthew Paul Thomas (mpt) wrote :

I'm currently designing a graphical interface for setting up RAID in the installer. Thanks to James Troup for helping me understand the following, but any mistakes are my fault; please provide any corrections in plain English.

1. The types of RAID it makes most sense to offer in a graphical installer (where there are likely few disks) are RAID 1, RAID 10, and RAID 5, in that order. RAID 4 is a bad version of RAID 5. RAID 6 is like RAID 5, but allows two concurrent disk failures rather than one, in exchange for being slower. And RAID 0 isn't really RAID at all, but a close alternative to LVM (bug 43453), though it is possible to run one on top of the other. Other mdadm configurations such as LINEAR and MULTIPATH are different enough in kind that they should be designed and implemented separately.

2. All of the above RAID configurations consist of a minimum of two partitions *and/or* entire disks.

3. If a RAID device uses a partition rather than an entire disk, the partition must be on a different disk from every other RAID partition.

4. A RAID device has a filesystem type, a mount point, and a size, like a normal partition does.

5. The effective size of a RAID 0 device is the total of the partitions/disks that form it. The effective size of other RAID levels is, roughly, the minimum size of all the partitions/disks used in the device.

6. Giving useful advice about which RAID level to choose involves communicating about (a) read speed, (b) write speed (both as a rough multiple of normal), (c) space efficiency (exact math), (d) probability of failure, and (e) time to rebuild from failure.

7. Once set up, a RAID device can itself be partitioned.

Meanwhile, these are the basic design approaches I've thought of so far:

A. You create or format at least two partitions with filesystem type "RAID partition". Then you choose "Create RAID Device…" somewhere, and choose which of those partitions should be part of the device. Pro: Unobtrusive (one extra filesystem type in the menu, and one extra button), and familiar to Fedora/RHEL users. Con: If you don't know exactly what you're doing, probably it will be an error message explaining it to you ("Sorry, you need to set up at least two RAID partitions before you can set up a RAID device"), and there's little hint of what the cumulative effect of your choice of partitions/disks will be or whether you've even set up enough yet.

B. "Set Up RAID..." somewhere opens a secondary assistant for choosing the RAID type, followed by setting up individual partitions for the device. Pro: Room to explain the various options, and to communicate the size and effectiveness of the number of partitions/disks you've set up so far. Con: Nested assistants (eww), and the interface for setting up partitions/disks is separate from the usual one.

C: "Set Up RAID..." somewhere uses a variation of the normal "New Partition" form, and the device then sits somewhere in the window telling you how many partitions you've specified should be part of it so far. Pro: Cumulative effect is obvious. Con: Sitting somewhere in the window is probably weird.

For all of these, after setup, the RAID device starts appearing as a separate pa...


Revision history for this message
Tom Womack (tom-womack) wrote :

Point three isn't true; I've had setups with sda1+sdb1 forming a RAID 1 pair and sda2+sdb2 forming a RAID 0 pair.

Point five isn't true for RAID 5, where the capacity is (minimum size of devices) * (number of devices - 1).

I would just offer RAID1 as an option, and require selection of two drives on which partitions of the same size are created.
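
With Tom's correction folded in, the effective capacities can be written down as a small helper (a sketch; it assumes equal-size members, and the RAID 6 case follows the "two concurrent disk failures" description earlier in the thread):

```shell
# Effective capacity for common md RAID levels, given the (minimum) member
# size and the member count. Assumes equal-size members; RAID 5 uses Tom's
# corrected formula (min size) * (n - 1), and RAID 6 analogously (n - 2).
raid_capacity() {  # usage: raid_capacity LEVEL SIZE NUM_DEVICES
  level=$1 size=$2 n=$3
  case $level in
    0) echo $((size * n)) ;;        # striping: capacities add up
    1) echo "$size" ;;              # mirroring: one member's worth
    5) echo $((size * (n - 1))) ;;  # one member's worth of parity
    6) echo $((size * (n - 2))) ;;  # two members' worth of parity
    *) echo "unsupported level $level" >&2; return 1 ;;
  esac
}

raid_capacity 5 100 4   # → 300
```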

Revision history for this message
Matthew Paul Thomas (mpt) wrote :

Thanks Tom for those corrections.

One thing that hasn't been clear in the discussion so far is the distinction between recognizing an existing RAID setup, and creating a new RAID setup. The kind of setups Ubiquity can create could be a subset of the kind of existing setups it can recognize.

Here's my latest sketches.

Revision history for this message
Matthew Paul Thomas (mpt) wrote :
Revision history for this message
Matthew Paul Thomas (mpt) wrote :

2. <http://docs.fedoraproject.org/en-US/Fedora/13/html/Installation_Guide/Create_Software_RAID-x86.html> says that RAID 1 requires a minimum of two partitions, but that RAID 5 requires a minimum of three, and RAID 6 and RAID 10 require a minimum of four.

Revision history for this message
Sam Hartsfield (samh) wrote :

Normal RAID 10 requires at least 4, but Linux MD RAID 10 only needs two (and can work on uneven numbers like 3), and it has the option of 'near', 'offset', and 'far' layouts. I'm not sure how much of that, if any, you'd want to put in the standard installer.

See http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10 and the man page for mdadm.
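
For illustration, a two-device md RAID 10 in the 'far' layout Sam mentions would be created along these lines (a sketch, wrapped in a function so it is not run by accident; the device names are placeholders):

```shell
# Hypothetical example of Linux MD RAID 10 on just two devices, using the
# 'far' layout with two copies (f2); 'n2' (near) and 'o2' (offset) are the
# other layouts mentioned above. Defining the function is harmless.
create_two_device_raid10() {
  mdadm --create /dev/md0 --level=10 --layout=f2 \
        --raid-devices=2 /dev/sda1 /dev/sdb1
}
```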

Changed in mdadm (Ubuntu):
status: Incomplete → Invalid
Revision history for this message
Mantas Kriaučiūnas (mantas) wrote :

Ubuntu 12.04 Desktop can be installed into Linux Software RAID (md) storage with these steps:

1. I've installed mdadm in live system, then created Linux Software RAID with this command:

sudo mdadm --create /dev/md0 --level=1 --raid-disks=2 /dev/sda1 /dev/sdb1

then formatted /dev/md0 with the ext4 file system, because Ubiquity doesn't allow choosing md0 until it is formatted with a filesystem

2. In Ubiquity step "Downloading language packages" (In Lithuanian - "Atsiunčiami kalbos paketai") I installed mdadm in target system with these commands:

sudo cp /var/cache/apt/archives/mdadm_3.2.3-2ubuntu1_i386.deb /target/var/cache/apt/archives/
sudo chroot /target/ dpkg -i /var/cache/apt/archives/mdadm_3.2.3-2ubuntu1_i386.deb

Revision history for this message
Steve Langasek (vorlon) wrote :

("wontfix"ing for quantal, for blueprint tracking)

Changed in ubiquity (Ubuntu Quantal):
status: Confirmed → Won't Fix
Revision history for this message
Tobias Bradtke (webwurst) wrote :

I guess this also affects "Ubuntu 13.04 Raring Ringtail".

Is this bug being worked on? Regarding the blueprint, there seems to be little activity: https://blueprints.launchpad.net/ubuntu/+spec/foundations-r-ubiquity-raid

Revision history for this message
Nicolas Delvaux (malizor) wrote :

Will it be possible to install Raring with RAID?

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

On 26 March 2013 20:34, Nicolas Delvaux <email address hidden> wrote:
> Will it be possible to install Raring with RAID?
>

Yes, using server cd or mini.iso or pxe boot. All of them offer to
install ubuntu-desktop as well as the default environment.

Regards,

Dmitrijs.

Revision history for this message
cwsupport (netsupport) wrote :

The attitude that RAID and RAID+LVM are not worthy of support is silly.
There is a reason why so many people used the alternate CDs: in a corporate environment, protecting the work on people's computers through the use of RAID 1 is a sensible option.

The 12.04 LTS alternate CD works perfectly, yet support for RAID+LVM installs was dropped. Why?

The suggestion to install the server CD on a desktop and then download enormous amounts of data to install your desktop of choice is not a choice; it is a botched workaround for the removal of functionality that WAS in place.

Please put RAID+LVM configuration into the installer. Worse still, 13.10 reports my existing RAID+LVM partitions as 'ntfs' and 'unknown'...

Revision history for this message
Dimitri John Ledkov (xnox) wrote :

Desktop images do have the mdadm and lvm2 packages installed, and they do support setting up full-disk encryption and LVM. Installing to RAID on typical desktop configurations is uncommon, but you can do so by manually assembling the RAID devices before starting ubiquity in a "Try Ubuntu" session. Do mind that you will need to chroot into the target at the end of installation to install mdadm and update the initramfs. This can be automated/scripted.

Do note that the ubiquity installer has had fakeRAID support for a long time, and those arrays do get auto-assembled on boot.
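
The manual procedure Dimitri describes could be scripted roughly like this (a sketch only; it is wrapped in functions to be run by hand, and it assumes apt in the target can fetch mdadm):

```shell
#!/bin/sh
# Sketch of the manual procedure described above, for a "Try Ubuntu"
# session. Defining the functions is harmless; run them by hand.

# Before starting ubiquity: assemble the existing arrays.
assemble_before_install() {
  mdadm --assemble --scan
}

# At the end of installation: install mdadm into the target and
# regenerate the initramfs so the root array assembles at boot.
finish_target() {
  chroot /target apt-get install -y mdadm
  chroot /target update-initramfs -u
}
```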

Revision history for this message
Marcus Tomlinson (marcustomlinson) wrote :

This release of Ubuntu is no longer receiving maintenance updates. If this is still an issue on a maintained version of Ubuntu please let us know.

Changed in ubiquity (Ubuntu):
status: Confirmed → Incomplete
Revision history for this message
Marcus Tomlinson (marcustomlinson) wrote :

This issue has sat incomplete for more than 60 days now. I'm going to close it as invalid. Please feel free to re-open if this is still an issue for you. Thank you.

Changed in ubiquity (Ubuntu):
status: Incomplete → Invalid