Last update: 9 June 2011

Installing root on LVM + RAID with debian-installer

This document explains how to set up Debian GNU/Linux on a computer with the root partition on a RAID volume managed with LVM. The procedure assumes that you have 2 identical hard disks in your computer. Some screens of the Debian Etch installer differ from earlier releases; key entries for older versions are marked as "old".

Run the Debian installer and enter the usual settings (machine name, root password, user name and password). When you reach the main screen "Partition disks", select the "Manual" option.

Read carefully all the options offered on the next screen and notice the table of disks, which should show the two hard disks you intend to use in RAID mode.

For IDE disks you should see:

  • IDE1 (hda)
  • IDE2 (hdc)

Alternatively, for SCSI or S-ATA disks you should see:

  • SCSI1 (0,0,0) (sda)
  • SCSI2 (0,0,0) (sdb)

If you do not see these disks, check your BIOS settings and cabling, then restart the Debian installer.

Create the physical volumes for RAID

From the screen "[!!] Partition disks", in the "Table of disks":

  • Select the disk: IDE1 (hda) or SCSI1 (sda) as appropriate
  • Create new empty partition table on this device = {Yes}
  • Select the line "FREE SPACE" just under the IDE1 or SCSI1 disk
  • Create a new partition (which will be used for booting), size = 512M
  • Type for the new partition = Primary
  • Location for the new partition = Beginning
  • Partition setting: Use as = physical volume for RAID
  • Done setting up the partition

Now select the line "FREE SPACE" just under the newly created partition:

  • "Create a new partition, size = 117G" : Enter a value smaller than the free space value minus 2% or the disk size to make sure that when you will later install a new disk in replacement of a failed one, you will have at least the same capacity even if the number of cylinders is different. For exemple with disk of 120GB enter 117G = 120G minus 512M already allocated minus 2G (rounded to the nearest lower Giga Byte).
  • Type for the new partition = Primary
  • Partition setting: Use as = physical volume for RAID
  • Done setting up the partition

Repeat the above procedure for the second disk, IDE2 (hdc) or SCSI2 (sdb) as appropriate.

Check that both disks have identical partitions and that the free space left is about 2% of the disk size. This is important to be able to later install replacement disks which are nominally the same size as the originals but may not have the same number of heads and cylinders. If a partition is wrong, re-select the disk and correct it.
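If you prefer to double-check the layout from a shell, the installer provides a console on the second virtual terminal (Alt+F2). A quick check, assuming S-ATA/SCSI disks named sda and sdb:

  fdisk -l /dev/sda    # list the partition table of the first disk
  fdisk -l /dev/sdb    # compare with the second disk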

Configure software RAID

From the screen "[!!] Partition disks" select: Configure software RAID

"Write the changes to the storage devices and configure RAID select: {yes}

"This is the multi-disk (MD) and software RAID configuration menu" select:

  • Create a multi-disk (MD) device
  • Multi-disk device type = RAID 1
  • Number of active devices for the RAID1 array = 2
  • Number of spare devices for the RAID1 array = 0

"Active devices for the RAID1 multi-disk device" select with the space bar from the Table of disks list the two devices ending with "1" like below

  • [*] /dev/.../.../part1 or [*] /dev/sda1 for SCSI disks
  • [*] /dev/.../.../part1 or [*] /dev/sdb1 for SCSI disks
  • Select {Continue}

"Multidisk configuration actions" select:

  • Create MD device
  • Multidisk device type = RAID1
  • Number of active devices for the RAID1 array = 2
  • Number of spare devices for the RAID1 array = 0

"Active devices for the RAID1 multidisk device" select this time from the list the two devices ending with "2" like below:

  • [*] /dev/.../.../part2 or [*] /dev/sda2 for SCSI disks
  • [*] /dev/.../.../part2 or [*] /dev/sdb2 for SCSI disks
  • Select {Continue}
  • "Multidisk configuration action" = {Finish}

This sets up the RAID1, and the "Partition disks" screen will now show two RAID1 devices, #0 and #1, in the "Table of disks".
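For reference only, the installer is doing roughly what the following mdadm commands would do on a running system (a sketch assuming the SCSI device names above; do not type this in the installer):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # small mirror, later used for /boot
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # large mirror, later used for LVM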

Create the physical volumes for LVM

From the screen "[!!] Partition disks", in the "Table of disks":

  • Select #1 below the line RAID1 device #1 (it is normally the largest device)
  • Use as = Physical volume for LVM
  • Done setting up the partition

Configure LVM

From the screen "[!!] Partition disks": 

  • Select "Configure the logical Volume Manager"
  • Write the changes to disks and configure LVM = {Yes}

LVM configuration action

  • Create volume groups ("old" Modify volume groups (VG) )
    • Volume group name = vg00
    • Select [*] /dev/md/1 () then {Continue}
  • Create logical volumes ("old" Modify logical volumes (LV) )
    • Volume group = vg00
    • Logical volume name = swap
    • Logical volume size = 3G (or at least twice the size of the RAM)
  • Create logical volumes
    • Logical volume name = root
    • Volume group = vg00
    • Logical volume size = 100G (or whatever you need, but leave at least 2% headroom with respect to the partition size).
  • Create logical volumes
    • Logical volume name = home
    • Volume group = vg00 (note how much space is left)
    • Logical volume size = leave the default value or use any value smaller than what is left free on the disk
  • Select {Finish}
Leave the LVM configuration to return to the main screen "Partition disks".
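For reference, the same volume group and logical volumes could be created on a running system with roughly the following commands (a sketch assuming the names and sizes used above):

  pvcreate /dev/md1                    # mark the large RAID1 device as an LVM physical volume
  vgcreate vg00 /dev/md1               # create the volume group
  lvcreate -L 3G -n swap vg00          # swap logical volume
  lvcreate -L 100G -n root vg00        # root logical volume
  lvcreate -l 100%FREE -n home vg00    # give what is left to home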

Create the partitions

In this phase, on old Debian 4 or 5, you may encounter warning messages; ignore them and just hit {Continue}.

From the screen "[!!] Partition disks"

In the "Table of Disks" LVMVG vg00 LV root select: #1

  • Use as = Ext3 journaling file system
  • Mount point = / - the root file system
  • Done setting up the partition

In the "Table of Disks" LVMVG vg00, LV home select: #1

  • Use as = Ext3 journaling file system
  • Mount point = /home
  • Done setting up the partition

In the "Table of Disks" LVMVG vg00, LV swap select: #1

  • Use as = swap area
  • Done setting up the partition

In the "Table of Disks" RAID1 device #0 select: #1

  • Use as = Ext3 journaling file system
  • Mount point = /boot - static files of the boot loader
  • Done setting up the partition
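Again for reference, the file systems set up here correspond roughly to the following commands on a running system (a sketch assuming the device names above):

  mkfs.ext3 /dev/md0          # /boot
  mkfs.ext3 /dev/vg00/root    # /
  mkfs.ext3 /dev/vg00/home    # /home
  mkswap /dev/vg00/swap       # swap area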

To finish

From the screen "[!!] Partition disks" select "Finish partitioning and write changes to disk"

The screen will now give you a summary of what has been done; to the prompt "Write changes to disks?" answer {Yes}.

For large disks it may take a while before all the partitions are written...

You may now continue by first installing GRUB on the boot sector and then proceeding with the complete installation of Debian, which will be written symmetrically to the two disks in RAID.
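To be able to boot even after the first disk fails, GRUB should be present in the MBR of both disks. If the installer only wrote it to the first disk, you can add it to the second one later from a root shell; a sketch assuming SCSI names:

  grub-install /dev/sda    # normally already done by the installer
  grub-install /dev/sdb    # make the second disk bootable as well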

Enjoy Debian with RAID set up for both boot and data: even if one disk crashes you will be able to restore your configuration (do not forget to give your new disk partitions identical to those of the old disk, and to copy GRUB to the boot partition if necessary).

To restore a new disk in RAID

Please note that the following procedure is not fully validated.

Setting up the new disk

The first thing to do is to install the new disk and check which device name it is detected as.

Assuming that the new disk is /dev/hdc, type the following commands as root:

  • fdisk -l ; this will list the installed disks and you should see the new hdc, still unpartitioned.
  • cfdisk /dev/hdc ; in the cfdisk menu select the free space and create 2 partitions of type "Linux raid autodetect" identical to those of the master disk:
  • New 512M hdc1, Primary, Beginning, Type FD "Linux raid autodetect"
  • New 117G hdc2, Primary, Beginning, Type FD "Linux raid autodetect" (note that 512M and 117G must be equal to or greater than the corresponding partitions on the old disk; as you probably changed the disk for a larger one, you can allocate more for hdc2, but again keep a 2% margin of the disk size).
  • Write
  • Quit to exit cfdisk menu
  • Reboot the PC.
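Alternatively, instead of recreating the partitions by hand, you can copy the partition table from the surviving disk with sfdisk; a sketch assuming hda is the good disk and hdc the new one (double-check the device names, this overwrites the partition table of hdc):

  sfdisk -d /dev/hda | sfdisk /dev/hdc    # duplicate hda's partition table onto hdc

Note that this gives hdc partitions of exactly the same size as hda's, so it only works if the new disk is at least as large as the old one.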

Launching RAID replication

Type the following commands as root:

  • mdadm /dev/md0 --add /dev/hdc1
  • mdadm /dev/md1 --add /dev/hdc2

Wait a minute and then check the status of the RAID with the command:

  • cat /proc/mdstat to list all active devices

cat /proc/mdstat will return something like:

  • Personalities : [raid1]
  • md0 : active raid1 hdc1[0] hda1[1] 12345 blocks [2/2] [UU]
  • md1 : active raid1 hdc2[2] hda2[0] 56789 blocks [2/1] [_U] [==>....] recovery = 9% (1234/56789) finish=35.1min speed=45678K/sec

[2/2] [UU] indicates that partition hdc1 is in the array with 2 good devices out of 2.

[==>....] recovery = 9% and the rest of the line indicate how far partition hdc2 has been synchronised and how long the rebuild will take to complete.
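To follow the rebuild without retyping the command, you can for example refresh the status every few seconds:

  watch -n 5 cat /proc/mdstat    # re-display the RAID status every 5 seconds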

Useful commands to check the RAID

  • mdadm -h to get help on the command
  • mdadm -F to monitor the arrays and report events (follow/monitor mode)
  • cat /proc/mdstat to list all active devices and the status of the RAID (see above)
  • fdisk -l to list the details of all disks
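For a more detailed view of a single array (state, member devices, sync progress), mdadm can also be queried directly, for example:

  mdadm --detail /dev/md0    # full status report for the md0 array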