One of the easiest ways to set up RAID (Redundant Array of Independent Disks) is during the ClearOS install. If you missed that opportunity but left some free space, or have space on other drives, you can set up RAID from the command line. This guide presupposes that you have ALREADY created your partitions and labeled them as RAID devices. This video is a great place to get started, and this documentation will help you with the rest. Feel free to 1) ask questions on the forum and reference this howto, or 2) obtain ClearCARE support and open a ticket.
ClearOS uses Multi-disk (md) to create software RAIDs. These can be combined with other RAID technologies, including ones operating at the file system level. LVM, for example, is also capable of many RAID styles and could be used to take several RAID 1 devices and combine them into a single striped volume, a configuration called RAID 1+0.
For greater detail on any of the examples in this howto, consult the mdadm manual page (man mdadm).
Once you finish your RAID layout, you can either format your device with a filesystem directly or use LVM on top of it for additional management features and benefits (recommended).
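As a rough sketch of both options (the md device, volume group, and logical volume names here are illustrative; adjust them to your own layout):

```shell
# Option 1: format the array directly with a filesystem
mkfs.ext4 /dev/md0

# Option 2 (recommended): put LVM on top of the array
pvcreate /dev/md0                        # initialize the array as an LVM physical volume
vgcreate vg_data /dev/md0                # create a volume group on it
lvcreate -n lv_data -l 100%FREE vg_data  # carve out a logical volume using all free space
mkfs.ext4 /dev/vg_data/lv_data           # format the logical volume
```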
Multi-disk supports the following modes, including RAID modes:
The most common RAID types are RAID0, RAID1, RAID5, and RAID6. The following table shows some of the benefits and expected results. For this table, we assume ALL disks are 1 TB. The top row lists the number of disks, and below it is the resulting capacity. RAID 4, 5, 6, and 10 all provide a level of redundancy.
- Not supported / Not available
* Not redundant / degraded
Managing the array starts with viewing its status. We can view the status of our array with:
cat /proc/mdstat
If the status shows that an operation is in progress, you can use the 'watch' command to have the command re-issued every 2 seconds:
watch cat /proc/mdstat
The [UU] output is significant: it indicates that there are 2 devices in the array and both are up. If there were 5 devices and all were up, it would say [UUUUU]. If there were 5 devices and one was down, it would say [UUUU_].
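For reference, the output for a healthy two-disk RAID1 looks roughly like the following (device names and block counts are illustrative):

```shell
cat /proc/mdstat
# Personalities : [raid1]
# md0 : active raid1 sdc3[1] sdb3[0]
#       1048512 blocks [2/2] [UU]
#
# unused devices: <none>
```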
For even more detail, run:
mdadm --detail /dev/md0
Because you will use this command throughout your RAID configuration, some admins open a second shell and leave the watch running in that window.
Various RAID types are available. Some work directly with the multi-disk array while others use a combination of multi-disk and LVM partitions. Naturally, you should not build a redundant array from multiple partitions on the same, single disk, since one disk failure would take out all of them. Identify your partitions that are of similar size:
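One way to list candidate partitions along with their sizes and types:

```shell
lsblk -o NAME,SIZE,TYPE,FSTYPE   # quick tree view of disks and partition sizes
fdisk -l                         # full partition tables, including partition type codes
```

Partitions flagged with the "Linux raid autodetect" type (fd) in the fdisk output are the ones prepared for md.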
RAID0 is usually preferable to linear mode because you will write to and read from ALL disks at the same time. With linear mode, you will only read from a disk if the data is on it. To create a striped volume set that does NOT have redundancy across two or more partitions, run something similar to the following:
mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb6 /dev/sdc5
mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=3 /dev/sdb7 /dev/sdc8 /dev/sdc9
RAID1 mirrors all data across both volumes. It reads from one disk while performing writes to all disks. To create a mirror between two partitions, run:
mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb3 /dev/sdc3
Here is a RAID1 with a spare.
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb3 /dev/sdc3 --spare-devices=1 /dev/sdd3
Why would you even use RAID1 across more than 2 disks? Well, you may have a partition that you really, really care about. For example, you may have some very large disks and want your system to ALWAYS be bootable even if only 1 disk survives. In that case, it would be a great configuration to assign 60 GB or so to RAID1 for both your boot partition and your OS partition. Here is a RAID1 with 4 disks and a spare:
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=4 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 --spare-devices=1 /dev/sdf3
RAID4 is uncommon and rarely used because its parity is written to only one disk. That parity disk sits idle on all read operations, so RAID 5 gives you better performance and RAID 4 offers no benefit over it. But if you have to do it, you can create a RAID 4 configuration across 3 or more disks by running:
mdadm --create --verbose /dev/md0 --level=4 --raid-devices=3 /dev/sdb4 /dev/sdc4 /dev/sdd4 --spare-devices=1 /dev/sde4
RAID5 is quite common and widely used when someone wants to maximize space and doesn't care too much about performance. RAID 5 performs striping like RAID 4, but the parity is spread over all disks. This means that all disks are used on read operations. To create a RAID 5 configuration across 3 or more disks, run:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb4 /dev/sdc4 /dev/sdd4
To add a spare drive that will automatically rebuild, run:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb4 /dev/sdc4 /dev/sdd4 --spare-devices=1 /dev/sde4
RAID10 creates mirrored pairs of disks and then stripes across them as a single volume. This RAID will always tolerate a single failure, and will also tolerate multiple failures provided that both disks of the same mirrored pair do not fail. RAID 10 is useful for volumes that require both speed and redundancy, provided you are willing to spend a little more. To create a RAID10 device, run:
mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
To add a spare drive that will automatically rebuild when one disk fails, run:
mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 --spare-devices=1 /dev/sde3
To remove a RAID member you will need to know the RAID name (/dev/md0 in our example) and the disk member (/dev/sdb2 in our example). You cannot remove a device that is in use; if necessary, unmount any volumes and stop the array. If the array is part of your running OS, reboot into rescue mode in order to rebuild the array.
mdadm --manage /dev/md0 --fail /dev/sdb2
cat /proc/mdstat
mdadm --manage /dev/md0 -r /dev/sdb2
cat /proc/mdstat
To add a RAID member you will need a partition of the same size or bigger. You will need to know the array and the device you will add to it.
mdadm --manage /dev/md0 --add /dev/sdb2
cat /proc/mdstat
When you delete a RAID array, you also have to purge the md superblock (header information) from each member partition:
mdadm --stop /dev/md0
mdadm --remove /dev/md0
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
cat /proc/mdstat
Arrays that have been stopped and need to be started again can be reassembled from the metadata stored on their member devices:
mdadm --assemble --scan
cat /proc/mdstat
You can back up your RAID signatures to a configuration file on disk:
mdadm --detail --scan > /etc/mdadm.conf
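To review what was saved, or to inspect the md superblock on an individual member partition (the device name below is illustrative), you can run:

```shell
cat /etc/mdadm.conf        # review the saved ARRAY definitions
mdadm --examine /dev/sdb3  # show the md superblock stored on one member partition
```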