
Setup RAID post-install

One of the easiest ways to set up RAID (Redundant Array of Independent Disks) is during the ClearOS install. If you missed your opportunity to do so but left some free space, or have space on other drives, you can set up RAID via the command line. This guide presupposes that you have ALREADY created your partitions and labeled them as RAID devices. This video is a great place to get started, and this documentation will help you with the rest. Feel free to 1) ask questions on the forum and reference this howto, or 2) obtain ClearCARE support and open a ticket.

ClearOS Setting up Storage Volumes with Linux RAID

ClearOS uses Multi-disk (md) to make software RAIDs. These can be coupled with other technologies operating at the file system or volume management level. LVM, for example, is also capable of many RAID styles and could be used to take several RAID1 devices and combine them into a single volume, a layout called RAID 1+0.

The examples in this howto cover the common cases; check the mdadm manual page (man mdadm) for greater detail.

Once you finish your RAID layout, you can either format your device with a filesystem or use LVM on top of it to get some additional management features and benefits (recommended).

Choosing your RAID level

Multi-disk supports the following modes, including RAID modes:

  • LINEAR md devices (non-striped volume set; disks are concatenated end to end, like RAID0 without striping)
  • RAID0 (striping)
  • RAID1 (mirroring)
  • RAID4
  • RAID5
  • RAID6
  • RAID10
  • MULTIPATH (not recommended, use Device Mapper instead)
  • FAULTY (not true RAID, involves one device. Used to inject faults)
  • CONTAINER (used for complex combinations of layered RAID)

The most common RAID types are RAID0, RAID1, RAID5, and RAID6. The following table shows the usable capacity you can expect from each level. For this table, all disks are 1 TB. The first column lists the number of disks, the second column indicates whether one of them is reserved as a spare, and the remaining columns give the resulting capacity. RAID 4, 5, 6, and 10 all provide a level of redundancy.

Disks  Spare  RAID0  RAID1  RAID4  RAID5  RAID6  RAID10
1      -      1TB    1TB*   -      -      -      -
1      Y      -      -      -      -      -      -
2      -      2TB    1TB    -      -      -      -
2      Y      -      -      -      -      -      -
3      -      3TB    1TB    2TB    2TB    -      -
3      Y      -      1TB    -      -      -      -
4      -      4TB    1TB    3TB    3TB    2TB    2TB
4      Y      -      1TB    3TB    2TB    -      -
5      -      5TB    1TB    4TB    4TB    3TB    2TB
5      Y      -      1TB    4TB    3TB    2TB    2TB
6      -      6TB    1TB    5TB    5TB    4TB    3TB
6      Y      -      1TB    4TB    4TB    3TB    2TB
7      -      7TB    1TB    6TB    6TB    5TB    3TB
7      Y      -      1TB    5TB    5TB    4TB    3TB
8      -      8TB    1TB    7TB    7TB    6TB    4TB
8      Y      -      1TB    6TB    6TB    5TB    3TB
9      -      9TB    1TB    8TB    8TB    7TB    4TB
9      Y      -      1TB    7TB    7TB    6TB    4TB

- Not supported / Not available

* Not redundant / degraded
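The capacities above follow simple formulas: RAID0 sums all disks, RAID1 yields one disk's worth, RAID4/5 lose one disk to parity, RAID6 loses two, and RAID10 halves the total. A minimal shell sketch (the function names are illustrative, assuming n equal 1 TB disks and no spare):

```shell
#!/bin/sh
# Usable capacity in TB for n equal 1 TB disks (no spare).
raid0_tb()  { echo "$1"; }            # striping: all disks count
raid1_tb()  { echo 1; }               # mirroring: one disk's worth
raid5_tb()  { echo $(( $1 - 1 )); }   # one disk lost to parity (same for RAID4)
raid6_tb()  { echo $(( $1 - 2 )); }   # two disks lost to parity
raid10_tb() { echo $(( $1 / 2 )); }   # mirrored pairs: half the total

# Reproduce the 6-disk, no-spare row of the table above:
echo "RAID0=$(raid0_tb 6)TB RAID1=$(raid1_tb 6)TB RAID5=$(raid5_tb 6)TB RAID6=$(raid6_tb 6)TB RAID10=$(raid10_tb 6)TB"
```

Running this prints the same values as the 6-disk row: 6TB, 1TB, 5TB, 4TB, and 3TB respectively.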

Viewing RAID Status

Managing the array requires knowing its status. You can view the status of your array with:

cat /proc/mdstat

If the status shows that an operation is in progress, you can use the 'watch' command to have the command re-issued every 2 seconds:

watch cat /proc/mdstat


The [UU] output is significant. It indicates that there are 2 devices in the array and both are up. If there were 5 devices and all were up, it would show [UUUUU]. If there were 5 devices and one was down, it would show [UUUU_].
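A script can check for a degraded array by looking for an underscore inside that status field. A minimal sketch (the function name is an assumption; it parses whatever text you feed it, shown here on sample mdstat lines):

```shell
#!/bin/sh
# Return 0 if the given mdstat text reports a degraded array
# (an '_' inside the [UU...] status field), 1 otherwise.
mdstat_degraded() {
    echo "$1" | grep -Eq '\[[U_]*_[U_]*\]'
}

healthy='md0 : active raid1 sdc3[1] sdb3[0]
      976630336 blocks super 1.2 [2/2] [UU]'
degraded='md0 : active raid1 sdb3[0]
      976630336 blocks super 1.2 [2/1] [U_]'

mdstat_degraded "$healthy"  && echo "md0 degraded" || echo "md0 ok"
mdstat_degraded "$degraded" && echo "md0 degraded" || echo "md0 ok"
```

The pattern only matches brackets containing U and _ characters, so the [2/2] device-count field is ignored.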

For even more detail, run:

mdadm --detail /dev/md0

Because you will use this command throughout your RAID configuration, some admins open a second shell and run the watch command there.

Creating a RAID

Various RAID types are available. Some work directly with the multi-disk array while others use a combination of multi-disk and LVM partitions. Naturally, you should not create a redundant array across partitions on the same, single disk. Identify your partitions that are of similar size.

It is very important that you use your own partition names here and not the ones listed below.

Stripe Set - RAID0

RAID0 is usually preferable to linear mode because you will write to and read from ALL disks at the same time. With linear disks, you will only read from a disk if the data is on it. To create a striped volume set that does NOT have redundancy across two or more partitions, run something similar to the following:

mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb6 /dev/sdc5
mdadm --create --verbose /dev/md1 --level=stripe --raid-devices=3 /dev/sdb7 /dev/sdc8 /dev/sdc9

Mirror - RAID1

RAID1 duplicates all data on all member volumes. It reads from one disk while performing writes to all disks. To create a mirror between two partitions, run:

mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb3 /dev/sdc3

Here is a RAID1 with a spare.

mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb3 /dev/sdc3 --spare-devices=1 /dev/sdd3

Why would you even use RAID1 across more than 2 disks? Well, you may have a partition that you really, really care about. For example, you may have some very large disks and want your system to ALWAYS be bootable even if only 1 disk survives. In that case, it would be a great configuration to assign 60 GB or so to RAID1 for both your boot partition and your OS partition. Here is a RAID1 with 4 disks and a spare.

mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=4 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 --spare-devices=1 /dev/sdf3

Dedicated Parity - RAID4

RAID4 is uncommon and rarely used because the parity is written to a single, dedicated disk. This means the parity disk is unused on all read operations, so RAID 5 gives you better performance with no downside. But if you have to do it, you can create a RAID 4 configuration across 3 or more disks by running:

mdadm --create --verbose /dev/md0 --level=4 --raid-devices=3 /dev/sdb4 /dev/sdc4 /dev/sdd4 --spare-devices=1 /dev/sde4

Striping with Parity - RAID5

RAID5 is quite common and widely used when someone wants to maximize space and doesn't care too much about performance. RAID 5 performs striping like RAID 4, but the parity is spread over all disks. This means that all disks are used on read operations. To create a RAID 5 configuration across 3 or more disks, run:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb4 /dev/sdc4 /dev/sdd4

To add a spare drive that will automatically rebuild, run:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb4 /dev/sdc4 /dev/sdd4 --spare-devices=1 /dev/sde4

Mirrored Stripe - RAID10

RAID10 creates mirrored pairs of disks and then stripes them into a single volume. It will always tolerate a single disk failure and will also tolerate multiple failures provided that no mirrored pair loses both of its disks. RAID 10 is useful for volumes that require both speed and redundancy, provided you are willing to spend a little more on disks. To create a RAID10 device, run:

mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

To add a spare drive that will automatically rebuild when one disk fails, run:

mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 --spare-devices=1 /dev/sde3
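To see why only pair membership matters for RAID10 survival, here is a toy shell check (the pair layout is an assumption matching the 4-disk example above: sda3/sdb3 form one mirror, sdc3/sdd3 the other):

```shell
#!/bin/sh
# Toy model: a 4-disk RAID10 as two mirrored pairs.
# The array survives as long as no pair has lost BOTH members.
raid10_survives() {  # args: list of failed disks, e.g. sda3 sdc3
    for pair in "sda3 sdb3" "sdc3 sdd3"; do
        lost=0
        for disk in $pair; do
            case " $* " in *" $disk "*) lost=$((lost + 1));; esac
        done
        [ "$lost" -eq 2 ] && return 1   # whole mirror gone: data lost
    done
    return 0
}

raid10_survives sda3 sdc3 && echo "sda3+sdc3 failed: array survives"
raid10_survives sda3 sdb3 || echo "sda3+sdb3 failed: array is lost"
```

Two failures spread across different mirrors are survivable; two failures within the same mirror are not.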

Removing RAID Member

To remove a RAID member you will need to know the RAID device name (/dev/md0 in our example) and the member disk (/dev/sdb2 in our example). You cannot remove a device that is in use, so mark it as failed first. If necessary, unmount any volumes and stop the array. If the array is part of your running OS, reboot into rescue mode in order to rebuild the array.

mdadm --manage /dev/md0 --fail /dev/sdb2
cat /proc/mdstat
mdadm --manage /dev/md0 -r /dev/sdb2
cat /proc/mdstat

Adding RAID Member(s)

To add a RAID member you will need a partition of the same size or larger. You will need to know the array name and the device you are adding to it.

mdadm --manage /dev/md0 --add /dev/sdb2
cat /proc/mdstat

Removing All RAID Members and Devices

When you remove a RAID entirely, you also have to purge the md superblock (header information) from each member partition.

mdadm --stop /dev/md0
mdadm --remove /dev/md0
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
cat /proc/mdstat

Starting a Stopped Array

A stopped array keeps its metadata stored on its member devices, so it can be reassembled.

mdadm --assemble --scan
cat /proc/mdstat

Backup RAID Parameters

You can back up your RAID parameters to a configuration file so the arrays are assembled consistently at boot.

mdadm --detail --scan > /etc/mdadm.conf
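The resulting /etc/mdadm.conf contains one ARRAY line per device. A typical line looks something like the following (the hostname and UUID shown are placeholders; yours will differ):

```
ARRAY /dev/md0 metadata=1.2 name=server.example.com:0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
```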
content/en_us/kb_howtos_setup_raid_not_during_install.txt · Last modified: 2018/08/28 00:23 by dloper