
Let's assume we have a 3-disk software RAID 5. Everything is fine, but I power off the host and re-arrange the disks in such a way that sda becomes sdb and sdc becomes sda. If I now boot the host, will the RAID still be intact?

The reasons I ask this: 1) I assume that when the RAID is built, the devices are tagged by udev, so the RAID should still be intact; 2) it's my first question on LE :).

asked 21 Apr '10, 10:50

Prasad




I believe that the metadata in Linux RAID partitions tells mdadm how the partitions fit together in the array. If you, for example, re-cable your disks and plug them into different SATA ports, udev and mdadm should be smart enough to use the UUIDs stored in the partitions to reassemble the device. In fact, I believe udev tries to use the drive serial numbers (or similar information) to place devices back at their original /dev/sd? names. However, I believe that mdadm gives the /dev/sd? names far less weight than the UUIDs in the partitions.
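A quick way to see this for yourself (a sketch; the device names are just examples) is to look at the metadata directly:

    # Inspect the RAID superblock on one member; note the array UUID:
    mdadm --examine /dev/sda1

    # Print ARRAY lines suitable for /etc/mdadm.conf; assembly is then
    # keyed to the UUID, not to the /dev/sd? ordering:
    mdadm --detail --scan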

If your hard drives do re-arrange when re-cabled, or if they come up in a different order depending on sunspots on the left or right side of the moon and other boot-timing issues, you can force the device naming by configuring the driver load order in /etc/modprobe.d/.
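Another route (a sketch using udev rules rather than module options; the rules-file path, serial number, and symlink name below are invented for illustration) is to pin a stable alias to one physical disk:

    # /etc/udev/rules.d/60-raid-disks.rules (example path)
    # Give the disk with this hardware serial a stable /dev/raid/disk0 alias:
    KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="WD-WCAV12345678", SYMLINK+="raid/disk0"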

Of course, please don't neglect to back up your data. I've seen many RAID volumes fail ... easily two or three a year where I work.


answered 11 May '10, 07:04

memnoch_proxy

I got into that situation, although with RAID 1 sets, and not from deliberately re-arranging devices but from an actual failure; it proved that the RAID sets stayed correctly paired despite a change in device IDs.

I had two RAID 1 pairs. One disk in set 1 failed, so I had it pulled out because the continuous retries (on a media error) were slowing down the system. On power-up, with the faulty drive off the bus, the devices were renumbered, but set 2 was still correctly paired and working despite the new device IDs. The system ran fine on the degraded set 1, and users were happy with system response back to normal :)

When the replacement was put in, the devices were renumbered again; the new drive was hot-added to the degraded RAID 1 set, and set 2 still worked perfectly.
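For reference, that hot-add is a single mdadm command (the md device and partition names here are examples):

    # Add the replacement disk to the degraded array:
    mdadm /dev/md0 --add /dev/sdc1

    # Watch the resync progress:
    cat /proc/mdstat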

Would RAID 5 be any different? I don't think so, except that you have to install GRUB/LILO on each RAID member in case it becomes the boot drive after the device IDs change. After doing that, on boot, the system will just look for the correct RAID sets.
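With GRUB that might look like the following (disk names are examples; LILO users would adjust /etc/lilo.conf and rerun lilo instead):

    # Install the boot loader into the MBR of every RAID member:
    grub-install /dev/sda
    grub-install /dev/sdb
    grub-install /dev/sdc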


answered 11 May '10, 19:26

wim

This may answer your question clearly:

RAID Level 5

Common Name(s): RAID 5.

Technique(s) Used: Block-level striping with distributed parity.

Description: One of the most popular RAID levels, RAID 5 stripes both data and parity information across three or more drives. It is similar to RAID 4 except that it exchanges the dedicated parity drive for a distributed parity algorithm, writing data and parity blocks across all the drives in the array. This removes the "bottleneck" that the dedicated parity drive represents, improving write performance slightly and allowing somewhat better parallelism in a multiple-transaction environment, though the overhead necessary in dealing with the parity continues to bog down writes. Fault tolerance is maintained by ensuring that the parity information for any given block of data is placed on a drive separate from those used to store the data itself. The performance of a RAID 5 array can be "adjusted" by trying different stripe sizes until one is found that is well-matched to the application being used.

Information obtained from:

http://www.pcguide.com/ref/hdd/perf/raid/levels/singleLevel5-c.html
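With Linux software RAID, the stripe ("chunk") size mentioned above is picked when the array is created; a minimal sketch, assuming three spare partitions (device names and chunk size are examples):

    # Create a 3-disk RAID 5 with a 128 KiB chunk (stripe unit) size:
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=128 \
          /dev/sda1 /dev/sdb1 /dev/sdc1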


answered 09 May '10, 19:59

1jnike

This does not address anything about how Udev or device tagging works.

(11 May '10, 06:55) memnoch_proxy

RAID 5 is essentially striping plus parity. ("Parity" here means redundant information that lets you reconstruct a failed drive's contents. Plain striping with no redundancy is RAID 0; RAID 3 also uses parity, but keeps it all on one dedicated drive, while RAID 5 spreads the parity blocks across all the drives in the array.)

RAID 5 has some inherent write-performance issues, so I would suggest dual parity, which would be RAID 6 (RAID 6 is RAID 5 plus one more drive's worth of parity).

If you need pure speed with no redundancy, use RAID 0 (striping). If you need redundancy, RAID 5 (or better yet, 6); you can also get speed and redundancy with RAID 0+1 (four drives in two pairs, each pair striped and then one pair mirrored to the other), but that's not as flexible as other RAID setups. (Great for a nice desktop machine, though a bit of overkill for that, I suppose.)

Anyway, as long as you use RAID 0+1, 5, 6, 10, or 53, you should be fine with mdadm. If you are using hot-swappable SCSI drives, all the better; the RAID should be easy to rebuild with a few keystrokes.
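Those few keystrokes are essentially (device names are illustrative):

    # Mark the bad disk failed and remove it from the array:
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1
    # ...swap in the replacement drive, then add it back:
    mdadm /dev/md0 --add /dev/sdb1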

ALWAYS use software RAID and NEVER hardware RAID, because if the motherboard or controller dies, your drives and data become inaccessible under hardware RAID unless you replace the faulty hardware with the exact same make, model, firmware revision, drivers, etc.
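That portability is easy to see: after moving the disks to replacement hardware, mdadm can reassemble the arrays from the on-disk superblocks alone:

    # Scan all block devices for md superblocks and assemble every
    # array they describe, regardless of the new /dev/sd? ordering:
    mdadm --assemble --scan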


answered 11 May '10, 20:29

Ron ♦
