
RAID and WD EADS HDDs (green series)



x88x
06-13-2009, 08:33 PM
So I'm in the planning stages of rebuilding my fileserver, and since I'm gonna be buying completely new drives I want to get the best drive density possible, so I've decided to go with 1.5TB drives. I used to be a die-hard Seagate fan, but after the 7200.11 generation (and some similar problems I've been having with some 7200.10 generation drives I have) I've decided to go with WD's offerings. Now, I already have two of the 1.5TB EADS drives, and I'm quite happy with them as far as read/write speeds are concerned, but each of them is operating independently (one's the storage drive in my main rig and the other is currently serving as an external). Now, for my fileserver I want to do a RAID5 (or possibly 6, down the road) volume across all the drives, preferably hardware based, but it'll just be a NAS so throughput will never be over a couple gigabits, max.

What I'm wondering is if there's anyone around who has any experience or suggestions concerning RAIDing EADS drives. I've heard they play merry hell with S.M.A.R.T. sensors because they stop their heads so often, but I haven't been able to find an account of someone who has set up a non-0 RAID with EADS drives.

Also, does anyone have experience with/know if it's possible to do a single hardware-managed RAID volume across multiple controllers? Like, say I had an MBB with dual x16 PCIe slots, I could buy a cheaper controller with fewer channels now, then later when I max it out, get another controller but still continue to grow the volume across the second controller.

Like I said, it's just gonna be a NAS, accessed probably by a max of 2-3 computers at a time, but with quite large files. I'm definitely open to the idea of doing a software RAID, as I assume that would give me a lot more flexibility with spanning drives across multiple controllers, but I haven't had good luck growing a RAID5 software-raid volume in testing in VirtualBox...though it's entirely possible that the problem there is VirtualBox.

thoughts anyone?

si-skyline
06-15-2009, 12:37 AM
The first thing I would say is be careful, and know what you're doing before you do it. I can tell, of course, that you're not wanting to lose data, but some of the things you're asking about can make the whole thing go up in the air.


Now, for my fileserver I want to do a RAID5 (or possibly 6, down the road) volume across all the drives

I'd say know what you want to deploy before you do it, as changing RAID levels can be difficult and will leave you wide open to a whole host of problems, with a good possibility of losing the lot.


single hardware-managed RAID volume across multiple controllers

I don't know. I doubt it, since the processing is done on the card, but I could be wrong there.


get another controller but still continue to grow the volume across the second controller.

On that point I'd say that if your CPU is up to it, and it's only ever going to be a NAS, then software RAID would be better.

Also, large files = lots of RAM; have plenty in there to keep it running smooth. And if you're using XP, the Disk Management tool for software RAID is quite easy to use.

Knowing what size you're starting with and what you want to achieve would give a better picture. Currently my fileserver at home is running an onboard RAID controller with 4 x 500GB disks in RAID 1+0, giving me about 1TB of storage. I also have about 4 users who connect to it, and it's used for media and file serving.

I've been running this setup for about two years now and I haven't had a read error, disk rebuild, or total data loss. It does show that sometimes a lot of cover just isn't needed for personal use.

x88x
06-15-2009, 01:59 PM
You make a good point about switching RAID levels; I hadn't really thought about it, but yeah, transitioning from 5 to 6 would be pretty messy (actually, to be honest, I don't know how that would really work).

I am considering a software RAID, both because it would probably be cheaper, and because then if a controller card dies, I won't have to replace it with an identical card. I'm planning on running Debian, but that's open to suggestion. I want it to be a Linux box, and I'm most familiar with Debian and Ubuntu. Like I said, I tried playing around with software RAID in a VirtualBox VM and it failed very oddly, but I'm thinking that might be because of VirtualBox's virtual disks. I'll be trying it with an extra system I have lying around, and three 500GB drives before I send them off for RMA (just getting old, not actually dying yet) to see if it'll work with that.
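Roughly, the test I have in mind would look something like this with mdadm (drive names here are just placeholders, not what the box will actually use):

# build a three-disk RAID5 out of the old 500GB drives
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext3 /dev/md0
# later, grow the array onto a fourth disk
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4
# once the reshape finishes, grow the filesystem to match
resize2fs /dev/md0

If that works cleanly on real disks, the VirtualBox weirdness was probably just the virtual disks.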

The hardware I'm gonna be using will initially be an Asus K8N5X MBB, Athlon 64 FX-55 and 2GB of DDR-400 RAM. If I go with software RAID, I'll be using the onboard SATA controller for now. For HDDs, I'm planning on using 1.5TB WD EADS drives, starting with two in RAID1, then converting to RAID5 when I add a third. I'll be testing that conversion with the 500's, and if it doesn't work very well I might end up just starting off with 3. The main things I need to find out, though, before I decide on hardware are how well EADS drives do in RAID5, and a good way to grow encrypted RAID volumes.

The more I think about it, the more attractive software RAID is looking, mainly for the ability to use multiple controllers seamlessly.


EDIT: Yay! I found confirmation that it is possible to grow an encrypted software RAID array! I can't do it with TrueCrypt, which is what I originally wanted to use (mainly b/c I have used it before), but it is possible. http://ubuntuforums.org/showthread.php?t=1049197
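For reference, assuming LUKS sits directly on top of the md device (which is how I'm planning it), growing the encrypted volume after the array grows would go roughly like this; device and mapping names are made up:

# grow the underlying md array first (new disk already added)
mdadm --grow /dev/md0 --raid-devices=4
# once the reshape is done, tell the LUKS mapping to use the new space
# ('storage' is whatever name the volume was opened with, e.g. cryptsetup luksOpen /dev/md0 storage)
cryptsetup resize storage
# then grow the filesystem inside it
resize2fs /dev/mapper/storage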

Airbozo
06-15-2009, 06:48 PM
BTW, Yes, you can do RAID (insertanylevelhere) across multiple controllers. One of my customers does this for max throughput. He is using 2 Adaptec 5805's with the drives in raid6... If I remember right he has 48 drives, 24 on each controller using a daisy chain backplane (2 connectors handle 24 drives).

x88x
06-15-2009, 06:55 PM
Interesting, Airbozo. I thought I'd seen mention of it somewhere online. Do you know if it's a specific function of the cards? I would assume the two cards have to talk, so do you know if that's a feature of the card or what? I think I'll be doing software RAID anyways, because it'll let me use a lot cheaper cards (and for this application I don't think it would make a significant difference), but I'd be interested to know how that works.

si-skyline
06-15-2009, 08:18 PM
Hi everyone,

Yeah, after thinking about it, it did sound silly not to be able to span a volume over a couple of controllers.


transitioning from 5 to 6 would be pretty messy (actually, to be honest, I don't know how that would really work).

There are several ways, and it also depends on your controllers and what they can do. When rebuilding a RAID the system is open to a new threat. Take your example of RAID 5: you rebuild it to add a disk or whatever, and while the controller is doing that, the "security" of the RAID is lost. It's a scary time for an administrator, because if a read error happens or another disk drops during the rebuild, you face losing part or all of the information in the array. That's the reason for RAID 6.

As for converting, the only true way to be sure is to have a backup system in place. In your case that would be tape, or copying between hard drives, but that also has the problem of time consumption, and it can create read and write errors in transit.

That's why it's far better to deploy the RAID level you want first. In a perfect world you wouldn't deploy a RAID system with the intention of short-term disk expansion, because of the rebuild risks.
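For what it's worth, if the tools support a live level change at all (in mdadm it needs a new enough mdadm and kernel), a 5-to-6 migration would look roughly like this; note the backup file, which exists precisely because the array is exposed while it reshapes (device names are hypothetical):

# reshape a 4-disk RAID5 into a 5-disk RAID6
mdadm --add /dev/md0 /dev/sdf1
mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-reshape.backup

Even then, a proper backup beforehand is the only real safety net.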



I think I'll be doing software RAID anyways, because it'll let me use a lot cheaper cards

True. Software RAID is also very flexible when moving between systems.

Normally I would be against software RAID, because a RAID system isn't meant to change much in its lifetime and it puts a big overhead on the rest of the system, but on a lower budget, and if you need that flexibility, it's probably best achieved in software.

If you're using Linux, there was an interesting document I read online about using LVM, together with some program or service, to move from one array to another on the fly.
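Roughly, the idea is to put LVM on top of the md arrays, so data can be shuffled between arrays while everything stays mounted. A rough sketch, with made-up volume and device names:

pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate -L 500G -n data storage
mkfs.ext3 /dev/storage/data
# later: migrate everything off md0 onto md1 on the fly, then drop md0 from the group
pvmove /dev/md0 /dev/md1
vgreduce storage /dev/md0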

The problem with doing that kind of thing is that if you don't really know what you're doing and it crashes at the wrong point, or you make a mistake and screw it up, you can watch the whole thing go down the pan.

Here is an interesting document on doing RAID 6 in Red Hat:
http://sexysexypenguins.com/2007/05/22/raid-6-fedora-core-6/

x88x
06-23-2009, 01:46 PM
So, after more research, I've discovered that as long as I enable the TLER feature, EADS drives should do fine in a RAID5. I've also confirmed that it is possible to grow LUKS encrypted md RAID volumes, so with that, I think I'll be ordering some more HDDs soon :D
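For anyone else looking into this: WD's DOS-based TLER utility is the usual way to flip it, but newer smartmontools can also query and set the equivalent SCT Error Recovery Control timeouts, if the drive's firmware exposes them (no guarantee the Greens do). Something like:

# check whether the drive reports SCT Error Recovery Control support
smartctl -l scterc /dev/sda
# set read/write error recovery timeouts to 7 seconds (values are tenths of a second), if supported
smartctl -l scterc,70,70 /dev/sda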

Once I decide what I'll be doing with the case, I'll probably start a worklog for it.

haha49
06-24-2009, 02:11 AM

Been looking at hard drives today to put in my rig. The WD 1TB Black with 32MB cache is good if you don't RAID it (it has problems in RAID), but the WD RE2 or RE3 (one of them is 1TB) is way better for RAID, with about the same performance (costs more, though) and it's RAID-friendly. If you're not RAIDing drives, go WD 1TB Black; for RAID, go RE3 or RE2, they're way better that way. I've been looking at seek/read/write speeds all day and even looked at SSDs, but I wasn't happy with the size-to-price ratio, even though the speed was good. I want 3.5 inch drives, not 2.5, and I want capacity.

x88x
06-24-2009, 02:18 PM
Yeah, I know RE series drives are much better for use in RAID (thus the name), but I want higher data density than 1TB/drive, and I just can't justify spending ~$320 on the 2TB RE4's...so 1.5TB it is, and the only 1.5TB drives on the market atm are the WD Green drives and the Seagate 7200.12's...and after the 7200.11 generation, I've lost a bit of faith in Seagate.

OvRiDe
06-24-2009, 05:05 PM
BTW, Yes, you can do RAID (insertanylevelhere) across multiple controllers. One of my customers does this for max throughput. He is using 2 Adaptec 5805's with the drives in raid6... If I remember right he has 48 drives, 24 on each controller using a daisy chain backplane (2 connectors handle 24 drives).

The Adaptec 5805 is a $600 adapter, so keep in mind that you need to check out all the capabilities of a card if you choose a less expensive solution. This is just one of those things where you get what you pay for. I know several of the cheaper RAID adapters can only handle what's connected to them. If it's within your budget, the 5805 is a very good way to go!

x88x
06-24-2009, 06:29 PM
The Adaptec 5805 is a $600 adapter, so keep in mind that you need to check out all the capabilities of a card if you choose a less expensive solution. This is just one of those things where you get what you pay for. I know several of the cheaper RAID adapters can only handle what's connected to them. If it's within your budget, the 5805 is a very good way to go!

Hmm, good to know, and I'll definitely get something along those lines later on down the road, but for now I think I'm just gonna go with software RAID. Given my current budget, I think it's just the best option for now.

haha49
06-24-2009, 09:21 PM
The Adaptec 5805 is a $600 adapter, so keep in mind that you need to check out all the capabilities of a card if you choose a less expensive solution. This is just one of those things where you get what you pay for. I know several of the cheaper RAID adapters can only handle what's connected to them. If it's within your budget, the 5805 is a very good way to go!

It's cheaper to get a new motherboard. My dad recently got one with 8 SATA ports that can do 2 separate RAIDs, for $150; that's a better deal. If I was going to drop $600 on a RAID controller, I'd be building a new PC and saying screw it, time to upgrade to top of the line... that's usually how it goes for me: every 6 months the computer is too slow, upgrade. :facepalm:

x88x
09-29-2009, 05:27 PM
Hey all, in case anyone was wondering how this went, I ended up going with md software RAID in Linux, and the EADS drives are doing beautifully. I actually also have two Samsung F2's in there as well since they're the sweet spot now, and even mixing drives, everything seems to be working just fine.

Right now I'm running:
Celeron E1400 (Core 2 architecture, just slower clock and less L2; fine for what I'm doing)
2GB DDR2-667 PNY RAM (cheapest 2GB DDR2 set at MicroCenter, and I didn't feel like waiting a couple days for NewEgg to deliver)
16GB CF card hooked up through:
CF->laptop IDE
laptop IDE->full IDE
IDE->SATA
yay for combining way too many adapters :D
now the fun part:
mid-range 4-port PCI SIIG SATA card
Debian 5.0 running mdadm:
md RAID-5 across:
1) 1.5TB Samsung F2
2) 1.5TB WD Green
3) 1.5TB Samsung F2
4) 1.5TB WD Green

Well, actually the fourth drive isn't fully integrated yet...I just got the array built over the weekend, and after shuffling my data around (only had ~3TB :whistler: ), I got the second 1.5TB WD Green hooked up last night and set it to expanding the array. You know you're doing something either very right or very wrong when you start a process and it tells you it has 1800 minutes left. :D
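For anyone curious, that ETA comes straight from the kernel; the reshape can be watched, and within reason sped up, while it runs. Something like:

cat /proc/mdstat                       # shows reshape progress, speed and time remaining
mdadm --detail /dev/md0                # per-array status
# the kernel throttles rebuild/reshape speed; raise the floor if the box is otherwise idle (value is KB/s)
echo 50000 > /proc/sys/dev/raid/speed_limit_min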

So...that should be done by the time I wake up tomorrow. :P