Software RAID mirror
From MEPIS Documentation Wiki
If you have two disks in your PC, you can create a mirror of your home folder. A mirror consists of two equally sized partitions on two different physical hard disks. Both partitions hold exactly the same data, and the OS sees them as a single storage medium. If one disk fails, the other keeps operating, so your data remains available.
In this article, I will explain how to convert an existing home partition to a mirror. You can of course also create a mirror before you install MEPIS, and you can even create a mirror for the root partition. There is plenty of info on that elsewhere. However, converting an existing file system to a software mirror has some specific challenges that you may overcome with the help of this article.
Before you start
1. Back up all your data. Although this procedure is designed to keep your data intact, things can go wrong! So make that backup.
2. Make sure you have a partition on a second disk that is exactly the same size as your home partition. Use Gparted to check the sizes and resize if necessary. Also make sure both partitions use the same file system format. For help on partitioning, see the section on Gparted.
Throughout this article, I will assume the following:
Your home partition is on /dev/sda2.
The mirror partition is on /dev/sdb2.
Change these partition names according to your situation.
Step 1: creating the mirror
We are going to create a mirror of your home partition. To do so, we need to unmount /home, which means you cannot log in with a normal user account. The safest approach is to boot MEPIS into runlevel 1. You can also log in to KDE as root. Whichever you prefer, you need to work from a console.
The first thing to do is make sure that /home is unmounted. Do that as follows:
umount -l /home
We are now going to use the mdadm tool. If you run man mdadm in a console, you get a very detailed manpage on what mdadm can do. Basically, it is a command line interface to create, manage and monitor disk arrays. Before we use it, we need to load the appropriate kernel module, so that the kernel can do something sensible with the RAID sets:

modprobe raid1
Once the module has been loaded, we use mdadm to create the mirror:
mdadm --create --auto=yes /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
The syntax is quite straightforward:
- create an array (--create)
- automatically create the requested device node if it is not present (--auto=yes)
- the raid node is /dev/md0
- the array type is RAID 1 or mirror (--level=1)
- there are 2 raid devices in the array (--raid-devices=2)
- we use /dev/sda2 as the first item in the array and /dev/sdb2 as the second
The mirror will now be created. This will take a while, depending on the system speed and partition size. You can monitor the progress as follows:
watch cat /proc/mdstat
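If you just want the resync percentage rather than the whole file, it can be pulled out of /proc/mdstat with a little awk. A sketch, run here against a captured sample of the file so you can see the format (the numbers are made up; on a live system you would read /proc/mdstat itself):

```shell
# Sample /proc/mdstat as it looks during a resync (contents hypothetical):
cat > mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdb2[1] sda2[0]
      3146704 blocks [2/2] [UU]
      [=>...................]  resync =  8.3% (262144/3146704) finish=12.0min speed=4032K/sec

unused devices: <none>
EOF

# Print just the percentage field from the resync line:
awk '/resync/ { for (i = 1; i <= NF; i++) if ($i ~ /%/) print $i }' mdstat.sample
```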
Step 2: resize the new file system
Once the mirror has been created, the data will already be accessible. However, if you run an fsck on the new file system, it will error out. The reason is that creating the mirror adds a RAID superblock at the end of the partition. The file system's own superblock still reports the original partition size, so fsck detects a mismatch between the reported size and the actual device size, and comes up with an error. The trick is to first run a forced check:

e2fsck -f /dev/md0
It will probably give you an error like the following:
The filesystem size (according to the superblock) is 3146724 blocks
The physical size of the device is 3146704 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort?
Do NOT abort: type 'n' and let it continue (you do have that backup, right?). It should not find any errors. Once it's done, run:

resize2fs /dev/md0
This resizes the file system to match the actual size of the mirror device. Note that we resize the file system on the mirror node /dev/md0, and NEVER on the individual partitions that make up the array. Once that has completed, you can run fsck on /dev/md0 again and check that it is happy now.
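The numbers in the example error message show what happened: the RAID superblock claimed a little space at the end of the partition, so /dev/md0 is slightly smaller than /dev/sda2 was. A quick sanity check, using the block counts from the example above:

```shell
# Block counts from the example fsck error message:
fs_blocks=3146724    # file system size according to its own superblock
dev_blocks=3146704   # physical size of the mirror device /dev/md0

# The difference is the space claimed at the end by the RAID superblock:
echo "$((fs_blocks - dev_blocks)) blocks reserved by the RAID superblock"
```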
Step 3: adjust file system references
We now have a new device /dev/md0 that should be mounted on /home. However, /etc/fstab will still hold a reference to /dev/sda2. So:
cd /etc
nano fstab
change /dev/sda2 to /dev/md0
Ctrl-O to save
Ctrl-X to end
If by chance there is a static reference to /dev/sdb2 (the second half of the mirror) in /etc/fstab, make sure to eliminate it, otherwise you will get a mount error.
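If you prefer a non-interactive edit over nano, sed can do the same substitution. A sketch, demonstrated on a throw-away copy so nothing real is at risk (the fstab line shown is a made-up example):

```shell
# Work on a copy first so a typo cannot break the real fstab:
cat > fstab.test <<'EOF'
/dev/sda2  /home  ext3  defaults  0  2
EOF

# Replace the partition reference with the mirror node:
sed -i 's|/dev/sda2|/dev/md0|' fstab.test
cat fstab.test
```

Once you are satisfied it does what you expect, run the same sed command against /etc/fstab.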
Step 4: reconfigure mdadm
The kernel can automagically find and mount RAID sets at boot, but the mdadm package needs to be reconfigured to do so. This is done with a simple

dpkg-reconfigure mdadm
The first question, about arrays needed for the root file system, can be left at 'none'. The following question, whether arrays should be started at boot, should be answered with 'yes'. Optionally, you can enable a monthly redundancy check of the array; that is up to you.
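For reference, after reconfiguration your mdadm configuration file (/etc/mdadm/mdadm.conf on Debian-based systems) should contain an ARRAY line describing the mirror, roughly like the following. The UUID here is a made-up placeholder; the real one is reported by mdadm --detail /dev/md0:

```
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=01234567:89abcdef:01234567:89abcdef
```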
If all has gone well, you can now reboot and log in to your system with all your data intact. I advise booting without splashy the first time, to see if any strange errors pop up.
I hope all works well for you.
A RAID 1 mirror does not mean you no longer have to make backups. Some things that can still go wrong:
- You manually erase a file
- Your disk controller goes berserk and corrupts the file system on the mirror
- You spill coffee on your machine
- You overwrite an old version of a file and need to recover the old one
- Your mail folder is corrupted and your mail program can no longer open it
Read up on the mdadm man page and search the web for more. In case of a disk failure, you should know how to handle it in order to keep running from the degraded array.
A virtual machine with two virtual disks would make for a perfect playground for a dry-run. It'll cost you some time to set it up, but you will be more confident when you do the live exercise.
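If setting up a VM feels like overkill, you can also rehearse just the mdadm commands with loop devices over sparse files on any running Linux system. A sketch; creating the files needs no privileges, while the later steps need root and are therefore only shown as comments (loop and md device names are examples):

```shell
# Two 100 MiB sparse files to stand in for the two disks:
truncate -s 100M disk0.img disk1.img
ls -l disk0.img disk1.img

# The following steps require root, so they are shown here but not run:
#   losetup /dev/loop0 disk0.img
#   losetup /dev/loop1 disk1.img
#   mdadm --create --auto=yes /dev/md9 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
#   ... experiment, then tear everything down again:
#   mdadm --stop /dev/md9
#   mdadm --zero-superblock /dev/loop0 /dev/loop1
#   losetup -d /dev/loop0
#   losetup -d /dev/loop1
```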
Warning: Always zero out unused RAID superblocks
If you ever experimented with a RAID array and then decided not to keep it, always zero out the unused RAID superblocks with mdadm --zero-superblock! Unused RAID superblocks left anywhere on a hard disk can be extremely dangerous. Linux currently does not include enough safeguards against the unintentional start of an old, unused RAID setup. Various 'automatic' scripts can start such an 'array' and immediately invoke background 'reconstruction' of it, which can completely rewrite several hard disks at once and cause catastrophic data loss.

This can happen, for example, when you do forensics after a security incident: you run from some Live CD to play it 'safe', invoke something like /usr/share/mdadm/startall to start a /dev/sd[abc]2 RAID5 array in good faith that your action treats the disks read-only, and unexpected superblocks left years ago after a RAID1 experiment in /dev/sd[abc] act as a land mine. (In this situation, it is highly advisable to run blockdev --setro /dev/sd[abc] before doing anything else.)

If you are re-using old hard disks and have the slightest suspicion that they ever contained RAID, verify both the whole disks and all the partitions with mdadm --examine.