Convert Linux Software RAID1 to RAID5

Update 2012/03/26: There has been some interesting discussion in the comments below regarding whether this process works with newer versions of the Linux MD metadata. Please read them and be careful!

So I came across an interesting dilemma today: I needed to expand a Linux Software RAID volume for a client. Now, conventional wisdom will tell you that you cannot expand a RAID-1 volume (mirrored disks) and that you in fact need to buy larger drives and create a new volume.

Question: Can I convert a Linux Software RAID-1 array to a RAID-5 array and expand it to include 3 drives instead of the original 2 in the RAID-1 mirror?

“Wait just a second!  A 2-disk RAID-5 volume?  That doesn’t make sense! Everyone knows you need a minimum of 3 disks to create a RAID-5 volume!” I hear you saying.  Whilst technically true, it is possible to create a functioning RAID-5 volume with just two disks.  Essentially, when you create such an array, what you get is two disks that are mirrored.  Look at the logical view of the array, however, and you have two drives that are striped with parity.  The reason the drives end up mirrored is that when you calculate parity on a 2-disk array, the parity value in each stripe is identical to the data value (the XOR of a single block is the block itself).  Thus when you change a Linux Software RAID-1 to RAID-5 and the RAID-5 rebuild process begins on the second disk, the data is trivially reconstructed from the “parity” value, because they are one and the same.  Brilliant!
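
If you want to convince yourself of this without touching real disks, a throwaway sandbox built on loop devices does the trick. Everything below is illustrative: the image files, the /dev/loop5 and /dev/loop6 devices, and the /dev/md9 array name are arbitrary, so pick loop devices and an md number that are actually free on your system.

root# dd if=/dev/zero of=/tmp/raid-a.img bs=1M count=64
root# dd if=/dev/zero of=/tmp/raid-b.img bs=1M count=64
root# losetup /dev/loop5 /tmp/raid-a.img
root# losetup /dev/loop6 /tmp/raid-b.img
root# mdadm --create /dev/md9 --metadata=0.9 --level=5 --raid-devices=2 /dev/loop5 /dev/loop6
root# mdadm --wait /dev/md9
root# dd if=/dev/urandom of=/dev/md9 bs=1M count=32
root# mdadm --stop /dev/md9
root# cmp -n $((32*1024*1024)) /tmp/raid-a.img /tmp/raid-b.img
root# losetup -d /dev/loop5
root# losetup -d /dev/loop6

If cmp prints nothing, the first 32MB of the two member images are byte-for-byte identical, which is exactly the mirroring behaviour described above.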

So here’s how you actually implement this in a Linux environment.  First of all, let’s make sure the current RAID-1 array is functioning properly.

DISCLAIMER: DO NOT ATTEMPT THIS WITHOUT BACKING UP YOUR DATA FIRST!!!  YOU HAVE BEEN WARNED!!!

All the following commands require root access:

root# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md0 : active raid1 sda1[0] sdb1[1]
 1048512 blocks [2/2] [UU]

The things to note here are, first, that this is a RAID-1 volume and, second, that the array is in optimal condition (the [UU] tells us that both drives are fine).
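
Before going any further, it is also worth confirming which superblock (metadata) version the existing array uses, since newer 1.x metadata is stored at the beginning of the device rather than the end (see the update above and Liam’s comment below).  A quick, non-destructive check looks something like this:

root# mdadm --detail /dev/md0 | grep -i version
root# mdadm --examine /dev/sda1 | grep -i version

If this reports anything other than 0.90, do not attempt the in-place re-create below as written.  Assuming we are on 0.90 metadata, let’s stop the array and convert it to a RAID-5 volume: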

root# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root# mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: /dev/sda1 appears to contain an ext2fs file system
 size=1048512K  mtime=Fri Dec 18 13:23:04 2009
mdadm: /dev/sda1 appears to be part of a raid array:
 level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
mdadm: /dev/sdb1 appears to contain an ext2fs file system
 size=1048512K  mtime=Fri Dec 18 13:23:04 2009
mdadm: /dev/sdb1 appears to be part of a raid array:
 level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
Continue creating array? y
mdadm: array /dev/md0 started.

If you do a “cat /proc/mdstat” now, you’ll see the array start to recover as a RAID-5:

root# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md0 : active raid5 sdb1[2] sda1[0]
 1048512 blocks level 5, 64k chunk, algorithm 2 [2/1] [U_]
 [==>..................]  recovery = 12.5% (132096/1048512) finish=0.8min speed=18870K/sec
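
If you would rather not keep polling /proc/mdstat, mdadm can simply block until the rebuild is done (purely a convenience, not a required step):

root# mdadm --wait /dev/md0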

Once it has finished rebuilding, we add the third disk and “grow” the array to encompass all three disks:

root# mdadm --add /dev/md0 /dev/sdc1
mdadm: added /dev/sdc1
root# mdadm --grow /dev/md0 --raid-devices=3
mdadm: Need to backup 128K of critical section..
mdadm: ... critical section passed.
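
Depending on your mdadm version, the grow step may refuse to run without somewhere to back up that critical section.  If so, pointing it at a small backup file should do; the path below is arbitrary, but it must live on a filesystem that is not on the array being reshaped:

root# mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-grow.backup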

At this point, the array will redistribute, or “re-shape”, the data currently on the disks.  This part can take a substantial amount of time; on 1TB disks it took around 18 hours to complete.  You can continue to use the array while this happens, although both file performance and re-shaping performance will be significantly degraded.  The re-shaping process can be monitored via “cat /proc/mdstat”:

root# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md0 : active raid5 sdc1[2] sdb1[1] sda1[0]
 1048512 blocks super 0.91 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
 [==>..................]  reshape = 12.5% (131520/1048512) finish=2.5min speed=5978K/sec
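
If the reshape is crawling along and you can live with the extra I/O load, the md rebuild speed limits can be raised while it runs.  These sysctls are standard for the md driver; the values below are just examples and revert at reboot:

root# sysctl -w dev.raid.speed_limit_min=50000
root# sysctl -w dev.raid.speed_limit_max=200000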

Once completed, you should run a file system check and then resize the file system on the RAID volume to encompass the additional space:

root# e2fsck -f /dev/md0
root# resize2fs /dev/md0
resize2fs 1.41.9 (22-Aug-2009)
Resizing the filesystem on /dev/md0 to 524256 (4k) blocks.
The filesystem on /dev/md0 is now 524256 blocks long.
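
The resize2fs step assumes an ext2/3/4 filesystem sitting directly on the array.  If the md device holds something else, the final grow step differs; roughly (the mount point below is a placeholder):

root# xfs_growfs /mnt/raid    # XFS: grow while mounted (see James's comment below)
root# pvresize /dev/md0       # LVM: grow the physical volume, then extend LVs and their filesystems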

So there you have it.  You have converted your Linux Software RAID-1 volume to a RAID-5 volume and expanded its capacity!

13 Comments

  1. Liam says:

    Newer versions of mdadm use the v1.x superblocks stored at the beginning of the block device, which could overwrite the filesystem metadata. You’ll need to be starting with a v0.9 metadata device for the above instructions to work (which was the default for years).

    First, check the existing superblock version with:
    mdadm --detail /dev/md0

    Then, when re-creating the RAID 5 array, make sure you add the --metadata=0.9 tag so the superblock is recreated in the right place. I’ve tested this with v0.9 superblocks, and it works fine.
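
    In other words, the re-create step from the article becomes something like:
    mdadm --create /dev/md0 --metadata=0.9 --level=5 --raid-devices=2 /dev/sda1 /dev/sdb1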

    Unfortunately, v1.0 gives a new size for the md device (smaller than the original array), and v1.1 and v1.2 corrupt the filesystem outright, so it is best to avoid these cases entirely. Creating a new array with v1.x superblocks when the original was v0.9 is likewise outright destructive.

    Otherwise, great guide, I had no idea it could be as simple as this…

  2. James says:

    I’d like to add that I followed these instructions on an array that was using XFS as the filesystem type, and it ruined the array. When the array was recovering from raid 1 to raid 5, it wiped the superblock, which hoses XFS. Just a warning for anyone else who comes across this. Otherwise great instructions; my issue was specific to the filesystem I was using.

  3. [...] Google points to this guide, but be sure to read the comments as well, and take a backup first. [...]

  4. destructor says:

    I was very glad when I found this guide recently and immediately tried it out in a virtual machine. It worked quite well, until I installed a newer system in the VM and tried this again. So I came back to this article and finally read the first comment warning against using the given approach with newer metadata.

    Since the 1.2 metadata format seems to be the default now (maybe not for all distributions yet?), following this conversion method may be risky. My suggestion would be (correct me if I’m wrong):

    The intention of this guide is to move from a 2-disk raid1 to a 3-disk raid5, so a third disk of the right size will already be available. A safer approach to convert the raid1 to a raid5 would therefore be:
    - fail one disk of the raid1
    mdadm /dev/md0 --fail /dev/sdb1
    - remove it from the array
    mdadm /dev/md0 --remove /dev/sdb1
    - partition the new disk appropriately and create a raid5 from the 2 disks now available
    mdadm --create /dev/md1 --level 5 --raid-devices 2 /dev/sdb1 /dev/sdc1
    - now copy all the data from one array to the other, if you have a filesystem on it (if you use LVM: add the new array to the volume group and use pvmove)
    cp -ax /mountpoint_of_md0/* /mountpoint_of_md1/
    - stop the raid1
    mdadm --stop /dev/md0
    - add the last disk to the raid5
    mdadm /dev/md1 --add /dev/sda1
    - and grow the raid5
    mdadm /dev/md1 --grow --raid-devices 3
    - finally you can resize the filesystem or the LVM physical volume…
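
    For example, with a plain ext filesystem directly on the array that last step is simply:
    resize2fs /dev/md1
    and if the raid5 is an LVM physical volume instead, grow the PV first and then the logical volumes:
    pvresize /dev/md1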

    The advantage of this solution is that it works even with the newer metadata. Of course, during the whole procedure the array(s) don’t offer redundancy, so if a disk fails during the procedure the data will be lost, but this also holds for the original approach…

  5. destructor says:

    btw: as a little additional remark:

    The method proposed here in this article is quite an elegant solution, but it is very fragile. The fact that a degraded 2-disk raid5 is identical to an operational raid1 is of course true, but it is just an implementation detail of the Linux kernel (or mdadm, respectively) that it copies the contents of one disk over to the second one upon creation of the raid5 array. It could just as easily zero out everything on both disks or fill both disks with (the same) random data. The behaviour upon array creation could even change in the future if the current behaviour isn’t documented as stable somewhere.

    So to summarize: while the method mentioned in this article is an elegant solution to the problem, it seems neither stable nor reliable enough to count on in the future, even if one uses the old 0.9 metadata format…

  6. Mike says:

    Thank you so much to those who took the time to comment. Incredibly useful and kind of them to take the time to save others from a disaster waiting to happen if they were to implement this guide under the wrong conditions.

    Of course a huge thanks to the original writer of the guide goes without saying.

    If I end up trying destructor’s method I will report back with my results.

  7. robosushi says:

    Hey, I can confirm that destructor’s method works. I just finished converting my 2-disk (each 2 TB) RAID 1 to a 3-disk RAID 5 (again, all 2 TB disks). Took a long time, but worked flawlessly. Thanks so much for the tip!

  8. Wen Zeleznik says:

    Hello! Would you mind if I share your blog with my twitter group? There’s a lot of people that I think would really appreciate your content. Please let me know. Thank you

  9. nael says:

    Hi, nice article and comments. Thanks!!

    BTW, I found this because I want to convert a RAID-0 to RAID-5…
    I find destructor’s solution safer, but it will not work in my case. I think the original approach should work (can someone confirm this?), but it would be risky.

    As I don’t have enough space to back up everything (only about half of it), I think the better solution would be to move the files from the current RAID0 onto the new disk destined for the RAID5 (plus some free space on the system disk), then build an empty RAID5 with two disks, copy the scattered files back onto it, and once the new disk is free, add it to the RAID…

  10. Bob/Paul says:

    nael: read this – http://neil.brown.name/blog/20090817000931

    Basically, RAID-0 is simply striping across all disks. RAID-4 is striping across the first n-1 disks with parity on the last disk. RAID-5 is normally considered striping with parity spread across all disks, such that the total parity adds up to 1 disk’s worth.

    Modern mdadm can implement RAID-4 via the RAID-5 driver if you set the layout properly. So take your RAID-0 and load it as a RAID-5 with parity on the last drive (RAID-4 layout) and that last disk marked failed. Then add your new disk and it will put parity on that disk. Then change the layout to a standard RAID-5 layout.
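
    Roughly, and untested here (check the post above and the mdadm man page before trusting any of this; /dev/sdc1 stands in for your new disk, and you need a reasonably recent mdadm), it would look something like:
    mdadm --grow /dev/md0 --level=5
    mdadm --add /dev/md0 /dev/sdc1
    mdadm --wait /dev/md0
    mdadm --grow /dev/md0 --layout=left-symmetric --backup-file=/root/md0-layout.backup
    The last step, normalising the layout, is optional; the array already works as a RAID-5 with all the parity on the new disk.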

    Since you can’t back up your data, you will want to test this in a VM to be sure you’re doing it correctly.

  11. Sean says:

    First off, thanks for the article. It helped me get more storage out of my old 2-disk NAS box after adding a 3rd disk via USB, and had I read Liam’s comment I could have done it without having to do a restore from a backup!

    Would you consider adding Liam’s comment into the text of your article? I read through the entire article, then performed all the steps exactly as described, only to find my data was gone. Only then did I look down into the comments and notice the 0.9 version issue. Sure enough, my version was set to 1.3, so I am sure this is what caused my filesystem to disappear. No big issue: restore from the backup, lesson learned!

  12. Tma says:

    Very elegant solution, and I am going to try it on a system RAID. I have /boot and / on RAID1, and I will move / to RAID5 following this guide. /boot has to remain on RAID1 as per the documentation. The only problem would be the e2fsck step, which isn’t possible on a mounted filesystem, so I plan to reboot into recovery mode for that step.

    I am a bit scared by destructor’s warning, but on the other hand his solution is more difficult to implement for a / partition, because I would have to enable booting from /dev/md1 at some point, which is not necessary with the original solution.

    I will report back whether this works. In the meantime, thanks guys; there are some very precious recommendations here.
