Software RAID on FC6
This document details the steps I had to take to build a software RAID array using Fedora Core 6 Linux. The process was very tedious and I could not find any one document that gave me enough information to complete the project. I will include all parts of the process, from selecting hardware and cabling to building the array and setting it to mount automatically. I will assume the reader has a good knowledge of computer hardware and knows at least a little about SCSI and how to run Linux operating systems. For more information about RAID levels, see the Wikipedia article on standard RAID levels (http://en.wikipedia.org/wiki/Standard_RAID_levels). Feel free to e-mail me with specific questions.

Linux commands starting with "$" can be run by any non-root user, those beginning with "#" must be run as root.
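If you're not already root, you can switch to the root account before running the "#" commands (sudo works too, if your account is set up for it):

$su -

Type exit when you're done to drop back to your normal user.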

Creating the Array

Selecting hard drives, controllers, and cabling

You can use IDE drives and still follow these instructions perfectly. In fact, it would be much easier if you did use IDE, but SCSI is more reliable, faster, and just plain cooler. If you're interested in going IDE, I'll assume you know how to select, connect, and cable those.

Now, when selecting SCSI drives, go for whatever rotational speed and size spec you feel you need. If you're planning on going for RAID 5, you will need at least (3) drives. I'll discuss RAID types later. Once you select the size and speeds of your drives, it's time to select a connector type. Your three choices are 50-pin, 68-pin, or 80-pin (SCA). I would recommend the 68 or 80-pin configuration for both speed and simplicity. I am using (3) Seagate 181.8GB 7200RPM 80-pin devices.

To run your drives you will need a SCSI controller. Generally you can get a simple PCI controller, unless you have special requirements. I bought a very simple Startech.com PCI controller. It is useless to get a controller with onboard RAID, since we are going to do the RAID in software - so save your money and get the cheap one. Also, most motherboards that advertise "onboard RAID" do not actually implement RAID in hardware; the Windows driver handles most of the work, effectively making it software RAID anyway.

You will also need the proper cable to run your drives. I'd recommend you get a cable with a terminator to make controller configuration easier. Make sure there are at least enough connectors for all your devices. For 50-pin devices, buy a 50-pin cable. For 68 and 80-pin devices, buy a 68-pin cable. 80-pin devices are meant to be put into a hot-swap tray, so you will also need to get an SCA adapter for each device. The SCA adapter provides all the information that a hot-swap rack would, including the device ID. It also gives you the ability to hot swap drives after running a few commands. A great place to get cabling and SCA adapters is STSI (http://www.stsi.com).

Cabling Up and Powering On

If you're using IDE drives, this doesn't apply to you - just plug them in and go. If you're using a PCI IDE adapter, you might want to check the adapter configuration screen to make sure all the drives were recognized. This information is usually output to the screen during boot, so you might not even have to go into the config.

For the SCSI users, the fun begins. Plug in all your drives, using the SCA adapters if you have 80-pin devices. You will need to set the device ID on each drive. This is usually done with jumpers or switches on the drive unless you're using SCA drives; in that case, you would normally configure them in the hot-swap rack by their position, but if you're like me and too cheap to get one of those, you will set it with a jumper on each SCA adapter. Remember that the controller usually uses ID 7, so don't set any drive to that ID. Don't forget to connect the power to each drive!

Once everything is properly cabled, boot up the system. You should see the SCSI adapter card initialize in the boot. Press whatever shortcut key gets you into the device configuration. Poke around in there to make sure all the drives are properly recognized. If they're not, check the jumper settings.

Getting the Operating System to Recognize the New Drives

If you are doing a new install of FC (i.e. you're not adding the array to an existing installation), the installer should take care of the rest for you. (I say should, because it didn't for me on my most recent install.) If that's the case, simply follow the on-screen instructions and you'll be all set - stop reading now, bye bye, have fun.

If you already have FC installed, you're in for a treat. The first thing to do is get your SCSI controller driver running. In my case, the driver is "initio". You'll have to look around and figure out what yours is. Usually, you can guess it by looking at the manufacturer of the chip on the adapter, or you can ask for help on a forum or IRC channel. Once you have figured out your driver and installed or compiled it, start it by running:

#/sbin/modprobe initio

replacing "initio" with your own driver name. This should automatically load all dependant modules for SCSI operation. Now, check to see if your drives were initialized by running:

#ls /dev/sd*

Hopefully, this will return an item for each of your new drives. If not, there was a problem; double-check the driver name and your cabling and try again. Now it's time to add the driver to the kernel's initial ramdisk (initrd) so it loads at boot, by running:

#/sbin/new-kernel-pkg --mkinitrd --depmod --install `uname -r`

This command will update the initial ramdisk for the currently running kernel so it includes the new driver. If you use yum or rpm to upgrade your kernel, this change will apply to any newer kernel you install. If you don't believe me and want to see all the neat things that run when you install a new kernel, run:

#rpm -q --scripts kernel

You might want to restart your system at this point to make sure that the proper driver has been installed in the kernel and that your devices are listed without calling modprobe, though it shouldn't make a difference.
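After the reboot, a quick way to confirm that the driver loaded on its own and that the kernel sees the disks is something like the following (a sketch using my "initio" driver and three-drive setup; substitute your own module name):

$/sbin/lsmod | grep initio
$ls /dev/sd*
$cat /proc/scsi/scsi

The last command lists every SCSI device the kernel knows about, so each of your drives should show up there.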

Formatting the Drives Properly

Now this is where things get interesting, but not too complex. In order to make software RAID work, we need to make partitions on the devices before the array is built. So, use parted if you like command lines or gparted if you like GUIs to create a partition on each device. I believe you can use any filesystem you like; however, I have only used ext3. Make sure you format the devices with the filesystem that you want to use on the array. All devices in the array should use the same filesystem. See the man page for parted or the homepage for gparted for instructions on formatting drives.
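As a rough sketch of that step with parted run non-interactively (shown for /dev/sda; repeat for each drive, and note that the percentage syntax for the start and end of the disk may differ on very old parted versions):

#/sbin/parted -s /dev/sda mklabel msdos
#/sbin/parted -s /dev/sda mkpart primary ext3 0% 100%
#/sbin/parted /dev/sda print

The print command lets you confirm the partition is laid out the way you expected before moving on.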

For those of you who have dealt with hardware RAID, this is a very strange thing to do. Usually, you would have simply created an array of the blank drives, but software RAID with mdadm doesn't see drives, only partitions.

Building the Array

At this point, we are ready to build the array. This is fairly simple using mdadm, which is included in most distributions, including FC. The command to build the array looks like:

#/sbin/mdadm --create --level=5 --raid-devices=3 --spare-devices=0 --name=store /dev/md0 /dev/sd[abc]1

You will need to change the level option to match your desired RAID level. See the Wikipedia article linked earlier if you need a description of RAID levels. If you want spare devices to be automatically controlled by mdadm, note how many devices should be spares. The name option is a simple name for the array, which can be set to anything you like; it is optional. /dev/md0 is the array device; use it as-is unless /dev/md0 is already in use, in which case pick the next free number (e.g. /dev/md1). /dev/sd[abc]1 tells mdadm to use sda1, sdb1, and sdc1 to create the array. Once you run this command, the array will be created. Check the array status by running:

#/sbin/mdadm --detail /dev/md0

The array may be listed as degraded or recovering. This is normal: when a RAID 5 array is first created, mdadm builds it in degraded mode and then reconstructs the parity onto the last drive. Monitor the status of the array and wait until it is done building before moving to the next step.
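An easy way to watch the rebuild is /proc/mdstat, which shows a progress bar and an estimated time to completion; running it under watch refreshes the view every couple of seconds:

$cat /proc/mdstat
$watch cat /proc/mdstat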

For more information about mdadm, see the man page.

Set Array to be Available On Boot

In order to make the array available on boot, the "FD" (Linux RAID autodetect) partition type must be set on each partition. FC will detect the "FD" type and initialize the array automatically. To set it, simply run:

#/sbin/parted /dev/sda set 1 raid on

on each of your devices - substitute /dev/sda with your device.
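Since the flag has to go on every member partition, a small shell loop saves some typing (a sketch assuming three drives named sda, sdb, and sdc like mine; adjust the list to match your devices):

#for d in /dev/sda /dev/sdb /dev/sdc; do /sbin/parted $d set 1 raid on; done
#/sbin/parted /dev/sda print

The print output should show the raid flag on partition 1 of each drive.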

Create Filesystem on Array

#/sbin/mkfs -t ext3 /dev/md0

Should take care of that. Substitute whatever filesystem you want for "ext3". Make sure it matches what you formatted the drives with in the beginning, or this whole thing might not work.
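If you want to double-check the new filesystem (ext3 only), tune2fs will dump its parameters, including the size and UUID:

#/sbin/tune2fs -l /dev/md0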

Set up mdadm.conf

We now need to set up the mdadm.conf file. This is not really required, but makes troubleshooting easier down the road and allows us to have the system send alerts via e-mail. The easiest way to do this is to start with the detail output, like this:

#/sbin/mdadm --detail --scan >> /etc/mdadm.conf

Edit the conf file with your favorite editor (in this case I will use vim):

#vim /etc/mdadm.conf

Review the syntax on the man page and correct the file. You can use "#" for comments. I also recommend that you define a "MAILADDR" for mdadm to send mail to in the case of a failure. My mdadm.conf looks something like:

DEVICE /dev/sd[abc]1
 
# /dev/md0 is known by its UUID.
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
    
MAILADDR root@somedomain.com
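To confirm that the MAILADDR actually reaches you, mdadm can send a one-shot test message for each array listed in the config (on FC, the mdmonitor service normally handles ongoing monitoring):

#/sbin/mdadm --monitor --scan --oneshot --test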

Test mount
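If the mount point doesn't exist yet, create it first (I use /store throughout; any empty directory will do):

#mkdir /store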

Now the array is completely configured and will be set up on boot without issue. Time to mount it:

#mount -t ext3 /dev/md0 /store

Substitute whatever filesystem you chose for "ext3", whatever your mdadm device is for "/dev/md0" (if this is your only mdadm array, it will be "/dev/md0"), and wherever you want to mount to for "/store". Assuming that goes well, you can start using your array for storage.
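A quick df will confirm the mount and show the usable size (for a three-drive RAID 5, roughly the capacity of two of the drives):

$df -h /store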

Add a Line to fstab

Now we should make the array mount on boot so you don't have to run mount every time you want to use the array.

I recommend that you reboot and re-test the mount before adding an entry to fstab. I had a problem where the array didn't initialize, so when fstab was processed at boot I was forced into a filesystem recovery mode, which was unpleasant at best. Once you can prove that the array properly initializes on boot, it is safe to add an entry to fstab.

Once you take care of that, time to add to fstab:

#vim /etc/fstab

The line that needs to be added looks like:

/dev/md0                /store                   ext3    defaults        0 0

Substitute your mdadm device for "/dev/md0", where you want it mounted for "/store", and the file system you chose for "ext3". The rest should stay the same unless you know what you're doing.
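Before rebooting, you can check that the new fstab entry is sane by unmounting the array and mounting it again by mount point alone, which forces mount to look the entry up in fstab:

#umount /store
#mount /store
#mount | grep /dev/md0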

That's it! Should work.

Managing the array

Haven't written this part yet. It's coming...

This page is incomplete. More work needs to be done on this page, so if something is missing, don't be surprised. The Managing Array section is not even started.