How to change RAID1 superblock from 1.2 back to 0.9 to install grub (debian squeeze)

For some reason, it seems like everything I do is not like what everyone else does… or at least not what the people writing the software I use do.  I started writing this as a how-to for others in this situation, but in the end it turned out to be more of an amusing story.  Maybe someone will find it useful anyways…

The background:

I was building a new linux server for my home office, and since I have had good luck with Debian over the last couple of years, I decided to use it as the OS on this box too. Intending to keep it as simple as possible, I created a basic partitioning scheme on all of the drives (the same scheme I have been using for years now) and ran into fatal errors when I got to the installation of grub.

Here’s how I partition the drives:
Partition 1: Primary, 8GB, Linux RAID – going to use RAID1 for the /boot file system
Partition 2: Primary, 20GB, Linux RAID – going to use RAID5 for the / file system
Partition 3: Primary, 1960GB, Linux RAID – going to use RAID5 for the /data file system
Partition 4: Primary, 1GB, Linux Swap
Plus a little bit of slack at the end of the drive.
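For reference, that layout could be scripted with sfdisk — a sketch only, assuming a modern sfdisk (one that accepts GiB size suffixes) and /dev/sda as the target; the sizes are the approximate ones above:

```shell
# Hypothetical sfdisk input matching the scheme above (repeat for each drive).
# MBR partition type 'fd' = Linux RAID autodetect, '82' = Linux swap.
sfdisk /dev/sda <<'EOF'
,8GiB,fd
,20GiB,fd
,1960GiB,fd
,1GiB,82
EOF
```

Obviously this is destructive, so it needs root and an empty drive — don't point it at a disk you care about.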

All of the drives are identical 2TB Western Digital Green SATA Drives.  There are now 7 in the system.

The error:

If you press ALT-F4, you will switch over to the install log console, where you will see some mumbo-jumbo about grub not finding anything it can use to live on. I didn’t copy down the error message, but it’s cryptic and scary like any good linux error message should be.

The Fix – Part 1:

Switch to an alternate console (ALT-F2), press [enter] to access the shell.
Unmount the filesystem the installer is using for /boot. (/target/boot)
umount /target/boot
Stop the array you are trying to put /boot on. (/dev/md0 in my case)
mdadm --stop /dev/md0
Now re-create the array using a v0.90 superblock instead of the now-default v1.2. (The v1.2 metadata sits near the start of the partition, pushing the filesystem to an offset that the Squeeze-era grub can’t cope with; v0.90 metadata lives at the end of the device, so the filesystem starts right at the beginning of the partition.)
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
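Before carrying on, it’s worth confirming the new array really got the old-style superblock — mdadm can report the metadata version (the device names here match my setup):

```shell
# The member partitions should now report a 0.90 superblock...
mdadm --examine /dev/sda1 | grep -i version
# ...and so should the assembled array.
mdadm --detail /dev/md0 | grep -i version
```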

At this point I couldn’t get the system to re-mount the filesystem properly, so I just rebooted and restarted the installation.

This time when running through the installer I told it to leave my arrays alone, but I had to tell it where to put the filesystems again. (/boot goes on /dev/md0 and / goes on /dev/md1, just like last time…) Sure, go ahead and reformat the partitions – just don’t mess with my arrays!

Lo and behold, the installer manages to get through this time. Now, instead of an installer error, I get a flashing cursor at boot.

Stupid linux.

The Fix – Part 2:

So this time, when I restart the installer, I switch over to a console right after it finishes detecting the disks and take a look at the arrays. For some reason my /dev/md0 (/boot) array is now on /dev/sde1 and /dev/sdf1. (It was a 6-drive system at this point – no wonder I got the flashing cursor, since the BIOS couldn’t find a boot loader on the drive it booted from…)  Not sure why the drives shuffle around, but whatevs. I can fix that with some crazy brute force stuff.
mdadm --stop /dev/md0
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
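Once the install finishes, it also doesn’t hurt to put grub on every member of that RAID1, so the machine boots no matter which drive the BIOS tries first — a sketch, run from the installed system; the device list matches my box:

```shell
# Install the grub boot loader into the MBR of every array member.
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
  grub-install "$disk"
done
```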

See what I did there?  Ok, not the most elegant solution, but those partitions weren’t going to be used for anything else anyways, and there really shouldn’t be any performance issues, since how often do you load the kernel, really?  I’ve often wondered about many-drive RAID1 arrays… and now I have one of my own.

1 reply
  1. timinski says:

    Thanks!! Been struggling with a 1.2 / 0.90 superblock issue on a 4-disk Debian RAID1 setup I’m building for a friend to host OpenMediaVault. The key issue is I’d like to avoid wasting all of the 2 x 500G of space available, by setting up a Debian 7.x bootable md0 on 2 x +/-150G and putting the balance of the 2 x 350G on a RAID1 md1 with LVM… so he can expand his library in the future. Haven’t had this much “fun” in a while. Still not sure which superblock I should use for md0, and being able to switch (as per your doc) will save me a bleep-load of time… vs. what I’ve been currently doing: wiping the 2 x 500G with dd. Again, many thanks!

