Update 2012/03/26: There has been some interesting discussion in the comments below regarding whether this process works with newer versions of the Linux MD metadata. Please read it and be careful!
So I came across an interesting dilemma today: I needed to expand a Linux software RAID volume for a client. Conventional wisdom will tell you that you cannot expand a RAID-1 volume (mirrored disks), and that you instead need to buy larger drives and create a new volume.
Question: Can I convert a Linux Software RAID-1 array to a RAID-5 array and expand it to include 3 drives instead of the original 2 in the RAID-1 mirror?
“Wait just a second! A 2-disk RAID-5 volume? That doesn’t make sense! Everyone knows you need a minimum of 3 disks to create a RAID-5 volume!” I hear you saying. Whilst that is the conventional wisdom, it is possible to create a functioning RAID-5 volume with just two disks. Essentially, when you create such an array, what you get is two disks that are mirrored. When you look at the logical view of the array, however, you have two drives that are striped with parity. The drives end up mirrored because RAID-5 parity is the XOR of the data blocks in a stripe, and on a 2-disk array each stripe holds only one data block, so the parity value is identical to the data value. Thus when you change a Linux software RAID-1 to RAID-5, and the RAID-5 rebuild process begins on the second disk, the data is trivially calculated from the “parity” value, because they are the same! Brilliant!
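If you want to convince yourself of the parity arithmetic, here is a toy illustration (nothing mdadm actually runs, just the XOR identity at work):

```shell
# RAID-5 parity is the XOR of all data blocks in a stripe.
# With two disks there is exactly one data block per stripe,
# so the parity equals the data block itself -- a mirror.
data=165                     # one arbitrary byte of "data"
parity=$data                 # XOR over a single block is that block
# Reconstructing a "lost" data block from parity alone:
recovered=$(( parity ^ 0 ))  # XOR with the (empty) set of remaining data blocks
echo "$data $parity $recovered"
```

All three values print identically, which is exactly why the RAID-5 rebuild onto the second disk is just a straight copy.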
So here’s how you actually implement this in a Linux environment. First of all, let’s make sure the current RAID-1 array is functioning properly.
DISCLAIMER: DO NOT ATTEMPT THIS WITHOUT BACKING UP YOUR DATA FIRST!!! YOU HAVE BEEN WARNED!!!
All the following commands require root access:
root# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md0 : active raid1 sda1 sdb1
      1048512 blocks [2/2] [UU]
The things to note here are, first, that this is a RAID-1 volume, and second, that the array is in optimal condition (the [UU] tells us that both drives are fine). Next, let’s stop the array and re-create it as a RAID-5 volume:
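If you would like more detail than /proc/mdstat offers before touching the array, mdadm can report the full array state (device names here match the example above):

```shell
# Show the array state in full; you want to see "State : clean"
# and both members listed as "active sync" before proceeding.
mdadm --detail /dev/md0
```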
root# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root# mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: /dev/sda1 appears to contain an ext2fs file system
    size=1048512K  mtime=Fri Dec 18 13:23:04 2009
mdadm: /dev/sda1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=1048512K  mtime=Fri Dec 18 13:23:04 2009
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
Continue creating array? y
mdadm: array /dev/md0 started.
If you do a “cat /proc/mdstat” now, you’ll see the array start to recover as a RAID-5:
root# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md0 : active raid5 sdb1 sda1
      1048512 blocks level 5, 64k chunk, algorithm 2 [2/1] [U_]
      [==>..................]  recovery = 12.5% (132096/1048512) finish=0.8min speed=18870K/sec
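Rather than polling /proc/mdstat by hand, mdadm can block until the rebuild finishes (check your man page; --wait may behave differently across mdadm versions):

```shell
# Block until any resync/recovery on md0 completes,
# then confirm the finished state.
mdadm --wait /dev/md0
cat /proc/mdstat
```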
Once it has finished rebuilding, we add the third disk and “grow” the array to encompass all three disks:
root# mdadm --add /dev/md0 /dev/sdc1
mdadm: added /dev/sdc1
root# mdadm --grow /dev/md0 --raid-devices=3
mdadm: Need to backup 128K of critical section..
mdadm: ... critical section passed.
At this point, the array will redistribute, or “re-shape”, the data currently on the disks. This part can take a substantial amount of time; on 1TB disks, it took around 18 hours to complete. You can continue to use the array, although file performance and re-shaping performance will be significantly degraded. The re-shaping process can be monitored via “cat /proc/mdstat”:
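If you would rather sacrifice interactive performance for a faster re-shape, the kernel’s md speed limits can be raised. Values are in KB/s per device, and the numbers below are only an example; pick figures appropriate to your disks:

```shell
# Raise the minimum guaranteed reshape/resync speed
# (the kernel default minimum is a very conservative 1000 KB/s).
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max
```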
root# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md0 : active raid5 sdc1 sdb1 sda1
      1048512 blocks super 0.91 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [==>..................]  reshape = 12.5% (131520/1048512) finish=2.5min speed=5978K/sec
Once completed, you should run a file system check and then resize the file system on the RAID volume to encompass the additional space:
root# e2fsck -f /dev/md0
root# resize2fs /dev/md0
resize2fs 1.41.9 (22-Aug-2009)
Resizing the filesystem on /dev/md0 to 524256 (4k) blocks.
The filesystem on /dev/md0 is now 524256 blocks long.
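To verify the extra capacity really arrived, compare the array size against the filesystem size (the mount point below is a placeholder; substitute your own):

```shell
# The array size should now reflect two data disks' worth of space
# (two data disks plus one disk of distributed parity).
mdadm --detail /dev/md0 | grep 'Array Size'
df -h /mnt/raid    # hypothetical mount point -- use yours
```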
So there you have it. You have converted your Linux software RAID-1 volume to a RAID-5 volume and expanded its capacity!