Setting up software RAID in Linux
Introduction
Setting up software RAID in Linux is very easy, and it is also possible to simulate a disk failure and verify that the array is working. The installation described here was made on ArchLinux, but apart from the installation command the instructions should be the same for any Linux distribution.
Installing the software RAID management tool
Install the Linux md management tool mdadm on ArchLinux:
$ sudo pacman -S mdadm
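The package is usually also called mdadm on other distributions; on Debian or Ubuntu, for example, the equivalent would be:
$ sudo apt install mdadm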
Configuring the RAID array
I have four disks, sdb, sdc, sdd, and sde, which I want to add to a RAID5 array.
$ cat /proc/partitions
major minor #blocks name
8 0 8388608 sda
8 1 8387584 sda1
8 16 524288 sdb
8 32 524288 sdc
8 48 524288 sdd
8 64 524288 sde
In this example my disks are four 512 MB disks (this example is actually from a virtual machine). With RAID5 the size of the array is the sum of all the disks minus one, i.e. in this case, with four 0.5 GB disks, the array will be 1.5 GB. The remaining space is used to store parity data, which makes it possible to recreate the data from a failed disk. RAID5 can recover from one failed disk. RAID6 can recover from two failed disks, but it also uses the equivalent of two disks that are unavailable for storage on the array.
Creating a RAID5 array
$ sudo mdadm --create /dev/md0 --chunk=32K --level=raid5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
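As a side note, had the extra redundancy of RAID6 been wanted, the same command with --level=raid6 could have been used instead (a sketch, not run in this example; it would give roughly 1 GB of usable space from these four disks):
$ sudo mdadm --create /dev/md0 --chunk=32K --level=raid6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde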
Using the device
Create a filesystem on the array and mount it.
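The mount point needs to exist before mounting; assuming /media/raid as used below, it can be created with:
$ sudo mkdir -p /media/raid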
$ sudo mkfs.ext4 /dev/md0
$ sudo mount /dev/md0 /media/raid
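Optionally, to have the array assembled automatically at boot, its definition can be appended to mdadm's configuration file (/etc/mdadm.conf on ArchLinux; the path may differ on other distributions):
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf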
Checking the RAID array status
Check the status of the RAID array:
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
1569792 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
[UUUU] shows the status of each disk in the array; here all disks are up (U). A degraded array would show a _ for the faulty drive.
To get more details on the RAID array:
$ sudo mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue May 19 22:19:54 2020
Raid Level : raid5
Array Size : 1569792 (1533.00 MiB 1607.47 MB)
Used Dev Size : 523264 (511.00 MiB 535.82 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue May 19 22:20:01 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 32K
Consistency Policy : resync
Name : arch:0 (local to host arch)
UUID : 35300d92:56f766ae:d945154a:daa7456e
Events : 18
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
4 8 64 3 active sync /dev/sde
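It is also possible to inspect the RAID metadata stored on an individual member disk, for example:
$ sudo mdadm -E /dev/sdb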
Testing the RAID array
Simulate a faulty drive:
$ sudo mdadm --manage --set-faulty /dev/md0 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
The drive /dev/sdb is now faulty but the array can still be used.
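To confirm the degraded state, the detail output shown earlier can be checked again, filtering for the relevant fields:
$ sudo mdadm -D /dev/md0 | grep -E 'State|Failed Devices'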
To recover from the failure, first remove the faulty drive from the RAID array:
$ sudo mdadm /dev/md0 -r /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md0
Then add the drive to the RAID array again:
$ sudo mdadm /dev/md0 -a /dev/sdb
mdadm: added /dev/sdb
The array will now start to rebuild. The rebuild process can be monitored in /proc/mdstat:
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[6] sde[4] sdd[5] sdc[1]
1569792 blocks super 1.2 level 5, 32k chunk, algorithm 2 [4/3] [_UUU]
[========>............] recovery = 44.5% (233692/523264) finish=0.0min speed=233692K/sec
After the progress bar there is an estimate of how much time is left for the rebuild. In this case the disks are so small that the rebuild finishes almost immediately. If the disks are large and the system is in use, with some disk I/O load and CPU load, the rebuild can take quite a lot of time.
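For larger disks it can be convenient to refresh the status automatically, for example every five seconds:
$ watch -n 5 cat /proc/mdstat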
To see what is happening, one can monitor disk I/O and CPU load during the rebuild:
$ iostat -k 5
avg-cpu: %user %nice %system %iowait %steal %idle
0,21 0,00 49,06 2,49 0,00 48,23
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
md0 0,00 0,00 0,00 0,00 0 0 0
sda 0,40 0,00 3,20 0,00 0 16 0
sdb 142,80 30,70 84257,50 0,00 153 421287 0
sdc 137,20 84793,60 1,40 0,00 423968 7 0
sdd 137,00 84792,80 1,40 0,00 423964 7 0
sde 137,00 84792,80 1,40 0,00 423964 7 0
scd0 0,00 0,00 0,00 0,00 0 0 0
As can be seen from the disk I/O, data is read from sdc/sdd/sde and written to sdb. This system was completely unused during the rebuild process.