FreeNAS questions



d_stilgar
02-04-2011, 08:56 PM
I'm wanting to build a NAS box. I have quite a bit of data on my computer and I'd like to get some backup as well as more space.

I'm looking at this controller card (http://www.newegg.com/Product/Product.aspx?Item=N82E16816117157&Tpk=N82E16816117157) but I don't really get how SAS works. I get that a cable hooks up devices and plugs into the card. The card says it can handle up to 244 devices, but the only SAS to SATA cables I'm finding only have 4 SATA plugs on them. I'm looking to get around 40 drives (hypothetically). What would I do then?

DrkSide
02-04-2011, 09:13 PM
I am not sure about the extra devices. As I understood it, 4 devices go on each port, so 2 ports means 8 devices with a SAS to 4x SATA fanout cable.

This card only supports RAID 0/1/1E/10E. I think you may be better off looking for a RAID 6 card (assuming 1-2TB drives) or a RAID 5 card with anything less.

x88x
02-04-2011, 09:57 PM
I am not sure about the extra devices. As I understood it, 4 devices go on each port, so 2 ports means 8 devices with a SAS to 4x SATA fanout cable.

This card only supports RAID 0/1/1E/10E. I think you may be better off looking for a RAID 6 card (assuming 1-2TB drives) or a RAID 5 card with anything less.

/\Both of these, actually. The product brief from Intel (PDF (http://www.intel.com/Assets/PDF/prodbrief/RAID_SASUC8I_prodbrief.pdf)) describes the card a bit better than the newegg page. SAS, or Serial-Attached-SCSI (gotta love these nested acronyms :P ), is electrically compatible with SATA drives, but is capable of other features with actual SAS drives. One of those other features is controlling a SAN (think big box of HDDs with slave controllers), which is where the 'up to 122 physical devices supported with SAS mode' comes from. Each of the SFF-8087 connectors could hook to at least one SAN, which would then sub-manage lots of individual drives. Without messing with that though ($$$$), you can have the controller manage drives directly. In that case, each SFF-8087 port can control up to 4 devices with an adapter cable.
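
To put rough numbers on the direct-attach case, here's a quick sketch in Python (the 2-ports-per-card and 4-drives-per-port figures are just the assumptions above, and this ignores expanders/SANs entirely):

# Rough sketch of how many of these cards you'd need for direct attach.
# Assumptions, not promises: 2 SFF-8087 ports per card, 4 SATA drives per
# port via a fanout cable, and no expanders/enclosures in the mix.
import math

DRIVES_PER_PORT = 4   # one SFF-8087 -> 4x SATA fanout cable
PORTS_PER_CARD = 2    # this card has two SFF-8087 connectors

def cards_needed(total_drives):
    drives_per_card = DRIVES_PER_PORT * PORTS_PER_CARD  # 8 drives per card
    return math.ceil(total_drives / drives_per_card)

print(cards_needed(40))  # -> 5 cards for ~40 directly attached drives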

Also, as DrkSide touched on, for an array that big you really want to get into RAID 5,6,10,01,50,etc. Personally, unless it's a really mission critical thing, RAID 5 or 6 is as far as I would go. It's pretty rare that you'll have more than one (RAID5) or two (RAID6) drives fail at a time.

What is the purpose of this array? I ask because if you want to control 40 drives with dedicated controllers, in a single array, that is going to cost BIG money. You'll either have to get cards that can talk to each other or get a SAN or some repeaters (note, repeaters aren't commonly used for reliability reasons, and as such they're not very readily available). If you go that route, and you buy them new, I would expect to pay at least $1,000 for the controllers and related hardware (i.e., not the HDDs or computer). If the array is just for network storage, might I offer a rather controversial (among certain circles) alternative? Software RAID. *cringes* ..you still there? Ok, hear me out.

Benefits of hardware RAID:
- MUCH higher possible speed. All the RAID management is handled by the controller, so none of the management data goes past the card. This limits the number of components that data has to go through and speeds up the process a lot.
- No load on the CPU.
- Faster, sometimes smarter, fail-over management.
- Boot drive can almost always be a RAID volume.

Problems with hardware RAID:
- MONEY. Good controllers get expensive fast, especially once you get above 24 devices and you need to either bridge cards or use a SAN or repeater in order to get everything in one RAID volume.
- Management usually has to be done in the RAID controller BIOS. These tend to vary from piss-poor to acceptable, in terms of usability.
- Drives have to be matched quite closely because the controller works at the lowest level. So even if you have two 1TB drives from different manufacturers, one might have a different number of blocks and you might run into problems.
- Depending on what controller(s) you use, if the controller dies, your array may be useless unless you get that exact same controller (worst case). At the very least it will probably require a similar controller, likely using the same chipset, and you might have to tell it the array specs (block size, etc.) in order for the new card to work with your array.

Benefits of software RAID:
- CHEAP! You can use any controllers you want, and as many controllers as you want.
- You don't have to match drives as closely. In fact, you can do the array on the partition level (ie, make an array of partitions instead of drives). This is what I do with my fileserver at home since I have drives from multiple manufacturers of slightly different capacities (even though they're all "1.5TB").
- Flexible. Using the Linux mdadm system as an example (since that's what I use at home): my system crashed a while ago, after I had made my array, and I ended up having to reinstall the OS. However, once I installed mdadm again, all I had to do was start it up and it automatically found the drives, determined that they were an mdadm array, worked out the details of that array and what order the drives needed to be addressed in, everything. Then it went ahead and made a logical device pointing to that array. (There's a rough sketch of the commands after this list.)
- More flexibility. If you go with a JBOD "array" instead of a true RAID array, you have even more flexibility in what drives you can use and how recoverable the data is. Personally I don't like JBOD though. I like having things neat and orderly. :D
- Configurable in the OS, often through pretty, GUI-fied tools. Personally I use the CLI for managing my mdadm array, but that's because I run a headless server and only ever log into it through SSH.
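
Since I mentioned the mdadm sketch: here's roughly what that workflow looks like, wrapped in Python so it can be scripted. The device names, RAID level, and drive count are placeholders, and you'd need root and mdadm installed.

# Minimal sketch of the mdadm workflow described above; device names, level,
# and drive count are placeholders -- adjust for your own hardware.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a RAID 6 array out of four whole drives (partitions like /dev/sdb1 work too).
run(["mdadm", "--create", "/dev/md0", "--level=6", "--raid-devices=4",
     "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"])

# After an OS reinstall, this is the "it just found everything" step:
# scan for md superblocks and reassemble the array automatically.
run(["mdadm", "--assemble", "--scan"])

# Check array health and details.
run(["mdadm", "--detail", "/dev/md0"])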

Problems with software RAID:
- Slower. The CPU handles all the array management data, so all that data has to go through the controller, over the bus, through memory, into the CPU, and back out all the way. That being said, if it's just network storage, once you get a few drives in the array the gigabit NIC quickly becomes your bottleneck, even with slow drives (gigabit tops out around 125 MB/s in theory, which two or three modern drives can saturate between them).
- Usually, the OS cannot be installed on a RAID volume. Personally I like running my OS off a small, separate drive anyway (4GB CF card atm :P ), though, and I would recommend it for you as well, with that large an array.

..I can't really think of any more cons for software RAID at the moment..not to say there aren't any others, but for a home fileserver application I can't think of any.

Like I said earlier, I would not recommend software RAID for anything where speed is an absolute necessity, and I would not recommend it for a mission-critical application. However, for a home fileserver that is only ever accessed over a network, it is awesome.

DrkSide
02-04-2011, 10:23 PM
Stilgar
Have you thought about something like Windows Home Server? It is what I run for my backup and storage duties. I would get version 1, not "Vail" (just released as RTM), because Vail does not include Drive Extender.

Drive Extender is a built-in software RAID, kind of. It basically lets you take any drive and add it to a pool. The OS takes care of duplication across multiple drives and ensures that your files are on at least two hard drives should one fail. The best part is if you lose a drive you can then hook up the working one to another computer and all of your data is accessible.

Another bonus, and the reason why I chose Home Server, is the backup functionality built in. It can back up as many as 10 clients (Windows XP, Vista, Windows 7) automatically. This does a bare-metal backup, which means if the hard drive goes out in my media center I simply put a new one in and pop in the restore CD. It then pulls the information over the network and restores the drive to the last backup (or another that you specify).

It also creates a 20GB partition that contains the OS for reinstall without messing with any data. It has many, many plugins available as well for different functions and includes built-in remote connectivity, so I can pull files off it from any internet-connected device.

I'm kind of a fan, but I think it might be worth looking into.

x88x
02-04-2011, 11:10 PM
Drive Extender is a built-in software RAID, kind of. It basically lets you take any drive and add it to a pool. The OS takes care of duplication across multiple drives and ensures that your files are on at least two hard drives should one fail.


To elaborate, Drive Extender is an implementation of JBOD (or, Just a Bunch Of Disks ..seriously :D). The way JBOD works is sort of a step up in abstraction from software RAID. It formats each individual drive normally, so if you pull any one out and connect it to another computer, it functions just like a normal drive. Then, a basic JBOD just starts at the beginning of the first drive, fills it up, then moves to the next one. Windows Home Server has a function to let you choose certain parts of the drive to mirror across other drives, providing you with some security in the case of a single drive failure (idk if you can make it mirror selections across more than two drives, though that would be a cool feature..probably wouldn't get used that much though..), so in that case the JBOD software handles shuffling data around to make sure the data you want mirrored is actually on more than one drive. Personally I prefer RAID because I have more control over what happens, and the array is more neat and tidy, but if you have, well, just a bunch of disks :P of varying capacities, JBOD is a great way to get a single, user-friendly logical volume out of them.
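
If it helps to picture the "fill the first drive, then move on" part, here's a toy Python sketch of linear concatenation. This is just the general idea, not how WHS actually implements Drive Extender:

# Toy illustration of linear JBOD/concatenation: a logical offset is mapped to
# (drive index, offset within that drive) by walking the drives in order.
def locate(logical_offset, drive_sizes):
    """drive_sizes: list of drive capacities in bytes, in concatenation order."""
    remaining = logical_offset
    for index, size in enumerate(drive_sizes):
        if remaining < size:
            return index, remaining   # the data lives on this drive
        remaining -= size             # this drive is full, move to the next
    raise ValueError("offset is past the end of the pooled volume")

# Three mismatched drives pooled into one ~4.5TB logical volume.
drives = [2_000_000_000_000, 1_500_000_000_000, 1_000_000_000_000]
print(locate(2_100_000_000_000, drives))  # -> (1, 100000000000): on the second drive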

EDIT:
I've heard a couple different things about Drive Extender in the new version of WHS; one that they took it out, one that they left it in but made it so that the drives are formatted in a proprietary format...which kinda defeats half the purpose of JBOD... I forget which one they ended up doing.

d_stilgar
02-04-2011, 11:56 PM
Well, I'm probably not going to do this for a while since I don't have tons of money, but the idea would be to (as inexpensively as possible) get 38 2TB drives controlled by one computer. I would want some sort of RAID so that they appear as one to six drives on the computer, but other than that I don't really care if it's hardware or software RAID. It would be nice to be able to hot swap drives slowly over time so I can replace 2TB drives with 3TB drives, but I'm less worried about that.

The problem is finding inexpensive controller cards with lots of ports. I get that the more expensive cards are a lot nicer, but I really just want the storage access. As long as the speed isn't horribly slow I'm fine.

I guess I should give away why I want 38 drives. This should do it.
http://images.wikia.com/2001/images/2/2f/2001halshutdown.jpg

For now it's just a pipe dream, but it would be nice to be able to do some CAD drawings and at least have an idea of how I would execute a build like that if I had the money. I don't have $2500 for 38 2TB drives, and I don't have the money to get everything else I would need either.

Thanks for the help though. I will, eventually, someday, probably in a few years, actually do this, so any pipe dream help you guys can give is appreciated.

x88x
02-05-2011, 12:56 AM
Ah, ok...I was wondering why exactly 38. :P

In that case, I would strongly recommend software RAID. It will give you the most flexibility in terms of controllers and upgrading (you know, when you need more than 70TiB of storage :P ). As far as upgrading one drive at a time, I'm not sure you could really do that with a proper RAID, so you might want to go with a JBOD array if that's a high priority.

If I were building it right now, 38 drives on a single machine, buying new equipment, I would do this:

MSI 890FXA-GD70 : $199.99 (http://www.newegg.com/Product/Product.aspx?Item=N82E16813130274)
4x SuperMicro AOC-SASLP-MV8 : 4*$109.99 == $879.92 (http://www.newegg.com/Product/Product.aspx?Item=N82E16816101358)
The 6 onboard SATA, plus the four 8-port boards would let you control 38 drives. Then, run the OS drive off the 1 remaining onboard port.

Really though, if I were doing it on a budget, I would look around for an old server system with a crapton of PCI slots, then pick up a bunch of used 4-port PCI SATA controllers (which new will start around $30, but you can probably pick up a lot cheaper). For example, this lot of 25x 8-port PCI-X SATA-150 cards for $250 (http://cgi.ebay.com/Adaptec-SATA-2610SA-Raid-Controller-Cards-Lot-25-/300522118176?pt=LH_DefaultDomain_0&hash=item45f8839c20). You would need at least 5 PCI-X slots on the motherboard, but that's not uncommon with old server boards. Granted, you would be hammering the crap out of that poor 64-bit PCI bus, but hey. :P

..of course, if you're already blowing $3,000 on the drives.... *shrugs* Either way, if this ever gets built I will love to see it happen. :twisted: Heck, if it's not for a few years, we might have 5 or even 10TB drives (depending on how well certain techs work out)! That would really make this thing a beast. :twisted:

d_stilgar
02-05-2011, 04:12 AM
Heck, if it's not for a few years, we might have 5 or even 10TB drives (depending on how well certain techs work out)! That would really make this thing a beast. :twisted:

That's what I keep thinking. The longer I wait, the better the drives will be. I always use this site (http://forre.st/storage) to find which drives are getting me the most GB per dollar. No matter when I do this I'm going to make sure I get the most per dollar. I'm not going for performance, so it makes sense, and if I'm spending lots of money anyway I'd like to get the most per dollar, even if that means a much higher budget.
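
The $/GB math those sites do is easy to sanity-check yourself; here's a quick Python sketch with made-up prices and models:

# Quick $/GB comparison, the same math the price-tracking sites do.
# The drive models and prices below are made up purely for illustration.
drives = [
    ("2TB drive A", 2000, 80.00),    # (label, capacity in GB, price in USD)
    ("3TB drive B", 3000, 150.00),
    ("1.5TB drive C", 1500, 55.00),
]

# Print cheapest-per-GB first.
for label, capacity_gb, price in sorted(drives, key=lambda d: d[2] / d[1]):
    print(f"{label}: ${price / capacity_gb:.3f}/GB")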

That said, I could see myself building a case to hold everything without buying all the drives. Maybe if it got enough attention I could get a sponsor. I would even take the finished mod around to a bunch of events or loan it out to travel. It would be worth the $2500 in free hard drives for sure.

mDust
02-05-2011, 09:32 AM
Keep an eye on http://diskcompare.com/ too. I've seen quite a few deals on there.