View Full Version : SSD mounting?
diluzio91
09-02-2010, 10:38 PM
I'm just wondering: when you buy an SSD, how do you mount it in a tool-less 3.5" bay? Do they come with adapters?
nevermind1534
09-02-2010, 10:59 PM
Most of them come with 3.5" adapters.
diluzio91
09-03-2010, 01:15 AM
Thanks! I'm trying to get my primary drive's usage as low as I can so it'll be SSD-friendly... I just haven't had time to migrate Steam to my data drive.
interceptor1985
09-03-2010, 05:32 AM
Actually, I've noticed that the drives with adapters tend to be a bit more expensive than the ones without. Personally, I'd skip the mounting hardware, use double-sided tape or Velcro, and just stick it someplace up and out of the way in the case.
Konrad
09-03-2010, 07:41 AM
I'm a fossil, still haven't really enjoyed the future and gone SSD (except for my laptop).
How hot do these suckers get? Do they benefit from some extra cooling?
I used to have mine mounted with velcro, and that worked fine even if it was a bit ugly. I much prefer the aluminum mounts I have now, but you won't find those in a store. ;)
How hot do these suckers get? Do they benefit from some extra cooling?
They don't, on both counts. They also consume tiny amounts of power (0.2W was the last figure I saw actually listed, IIRC, and I think that was an Intel X25-M).
interceptor1985
09-03-2010, 11:52 PM
I'm sure if you run your computer for long periods every day, the power savings might eventually pay off the drive... depending on what sort of deal you get on it and how you use it.
Konrad
09-04-2010, 01:54 AM
The main reason I've been holding out so far is that my better computers aren't performance-bottlenecked by vastly less expensive mechanical HDDs. (Their other hardware isn't that great.)
The other reason is that I'm not quite convinced that SSDs are truly reliable in the long term (yet). Flash memory has a limited number of write/rewrite cycles before failure (especially at higher densities), as opposed to the usual mechanical/platter failures in HDDs. Yeah, I've done the basic research; what I've read shows that the MTBF on SSDs is significantly lower than on roughly equivalent HDDs (it's often difficult to make direct comparisons, since SSD manufacturers usually don't advertise an MTBF spec, probably precisely because it looks so bad in comparison). There are other considerations, one of them being the superior performance of SSD parts ... so far it just hasn't seemed worth the price difference to me.
Having said that, I've seen surprising speed and battery life improvements on my laptop after an SSD upgrade. Perhaps not that surprising, really, since I've seen the same thing on other portable devices after upgrading their microdrives to flash storage. SSDs are still dramatically improving and becoming cheaper every year while HDDs have already skimmed deeply into the plateau ... next year I'll be able to get a new SSD that's four times as good, or buy this year's one at a quarter of the price.
No doubt SSD system/cache will be standard equipment in my next real computer.
Part of the reason SSD manufacturers don't advertise an MTBF figure is that they know chips will fail on occasion, and they build safeguards into the drive to compensate. I think I read somewhere that the first-gen Intel X25-M had something like 15% more storage chips than were actually exposed as capacity, precisely so there would be spares to fail over to when chips died.

That said, while they do have a write limit, for modern MLC chips (the less expensive ones most manufacturers use now) it's somewhere around 150,000-300,000 cycles, depending on the manufacturer. Keeping in mind that there are failover chips for when the more frequently used ones die, even if you rewrite your entire drive four times a day it would take over 100 years for the whole drive to wear out. Granted, you'll probably hit some problems a while before that, but still... most of the consumer HDDs I've used drop off significantly in performance and need to be replaced after a few years anyway.
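In case anyone wants to check that math, here's a rough back-of-the-envelope version in Python. The drive size is just an assumed example, the cycle count is the low end of the range above, and it assumes perfectly even wear-levelling, so treat it as a sketch rather than a real endurance spec:

# Rough endurance estimate, assuming ideal wear-levelling spreads writes
# evenly across all blocks. Figures are illustrative, not from a datasheet.
capacity_gb = 160                 # assumed drive size for the example
write_cycles_per_cell = 150_000   # low end of the MLC range quoted above
full_rewrites_per_day = 4

total_writable_gb = capacity_gb * write_cycles_per_cell
gb_written_per_day = capacity_gb * full_rewrites_per_day
years = (total_writable_gb / gb_written_per_day) / 365

print(f"~{years:.0f} years before every cell hits its write limit")
# -> ~103 years with these numbers, which matches the "over 100 years" claim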
Konrad
09-04-2010, 03:26 AM
I assume their integrated controllers employ wear-levelling methods of varying sophistication, like all of the better removable flash devices do.
I've seen some product claims of over 2 million write cycles, which I find difficult to accept. I've hardly used SSDs, but I do have a fair understanding of the memory technologies themselves. Higher-density (http://en.wikipedia.org/wiki/Integrated_circuit#ULSI.2C_WSI.2C_SOC_and_3D-IC) flash would probably be somewhere on the order of a few hundred thousand cycles; lower-density flash would be a little slower and a lot bulkier. I imagine SSDs use some quantity of buffered RAM to minimize incidental write cycles; maybe that's where the vendors are getting their "real world" figures from.
A handful of my most ancient flash devices have already failed over the years.
A few became irrevocably unusable; most began to lose blocks (like bad sectors on an HDD). Failed blocks either reduce the total storage capacity or leave the device stuck as "read-only" media. I don't know whether these symptoms/behaviours accurately predict how today's SSDs fail, because flash technology has changed dramatically over the years. I also expect SSDs use higher-quality (higher-performance, longer-life) controller parts than the ones in cheap USB sticks and xSDHC cards (where the cheapass controller dies before the flash does).
I still view SSDs as a good kick in performance, necessary to make the most of high-spec computers, but I wouldn't use them on mission-critical machinery.
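To illustrate what I mean by wear-levelling "of varying sophistication", here's the simplest form I can imagine, sketched in Python. This is purely illustrative and not any particular controller's algorithm: track an erase count per block and steer each new write to the least-worn free block.

# Minimal wear-levelling sketch: pick the least-worn free block for each write.
# Illustrative only; real controllers also do static wear-levelling,
# error correction, and bad-block management.
class WearLeveller:
    def __init__(self, num_blocks, max_cycles=150_000):
        self.erase_counts = [0] * num_blocks
        self.free_blocks = set(range(num_blocks))
        self.max_cycles = max_cycles

    def allocate(self):
        # Choose the free block with the fewest erases so wear stays even.
        block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(block)
        return block

    def erase(self, block):
        self.erase_counts[block] += 1
        if self.erase_counts[block] < self.max_cycles:
            self.free_blocks.add(block)   # still usable
        # else: block is retired and capacity shrinks (or a spare takes over)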
I've seen some product claims of over 2 million write cycles
Those would be drives using SLC chips. They're lower density but higher speed and have much longer lives; good ones do in fact have write cycles in the millions (usually around 1-2 million, from what I've seen). That's why whenever you see a company marketing their SSDs to enterprise customers, they're almost always using SLC chips.
Konrad
09-04-2010, 03:13 PM
Interesting data.
The failover chips are a fine idea, insurance against diminished storage capacity. I wonder if they're actual discrete chips or just inactive blocks in the normal chips which are reserved for this purpose. Either way, activating new chips to replace failed ones can't bring back lost data, eh?
But I suspect your calculations aren't quite right, sorry. The example of "rewriting the entire drive four times a day" seems too simplified; it's more like "rewriting the blocks where the file system is stored many hundreds or thousands of times a day." That's not quite as severe if wear-levelling moves those blocks around for each write operation, though it's still pretty bad, since changes to the file structure usually also involve writing the files themselves elsewhere on the drive at the same time (exceptions can occur when files are moved, and always occur when files are just deleted or renamed). Then again, maybe this overcomplication basically leads to the same result as your logic. RAM buffers in the SSD controller can cache a large number of cumulative changes before committing them all in a single write operation, which no doubt adds a lot of longevity. (Kind of strange, when you consider that RAM buffers in HDD controllers are used to minimize excessive read operations instead.)
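Here's a rough sketch of the write-coalescing I'm imagining, again purely illustrative and not how any particular controller actually works: repeated updates to the same logical block just overwrite each other in RAM, and only the final version gets committed to flash.

# Sketch of write coalescing in a RAM buffer: repeated updates to the same
# logical block replace each other in memory, so only one flash write
# happens per block per flush. Illustrative only.
class WriteBuffer:
    def __init__(self, flush_limit=64):
        self.pending = {}            # logical block -> latest data
        self.flush_limit = flush_limit
        self.flash_writes = 0

    def write(self, block, data):
        self.pending[block] = data   # later writes replace earlier ones
        if len(self.pending) >= self.flush_limit:
            self.flush()

    def flush(self):
        for block, data in self.pending.items():
            self.flash_writes += 1   # one physical write per dirty block
            # ...program `data` into the flash block here...
        self.pending.clear()

buf = WriteBuffer()
for _ in range(1000):                # e.g. a file-table block updated 1000 times
    buf.write(block=7, data=b"updated metadata")
buf.flush()
print(buf.flash_writes)              # 1 physical write instead of 1000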
Are SSDs (or at least some SSD products, like the SLC versions you mention) already being considered reliable alternatives to HDD storage within the server/enterprise world for mission-critical machines?
The failover chips are a fine idea, insurance against diminished storage capacity. I wonder if they're actual discrete chips or just inactive blocks in the normal chips which are reserved for this purpose.
It depends on the drive. Most manufacturers do inactive blocks, but I think several do whole separate chips for their higher end drives.
Either way, activating new chips to replace failed ones can't bring back lost data, eh?
That's another point where the controller comes into play. With most modern SSDs, no chip or block will ever actually completely fail under normal operations. Instead, when the controller sees that a block is reaching its maximum write operations, it moves the data in that block to an unused block that is not nearing its max write operations.
RAM buffers in the SSD controller can cache a large number of cumulative changes before committing them all in a single write operation, which no doubt adds a lot of longevity.
I believe this is the case, yes. Between the RAM buffers in newer and higher-end SSDs, wear-leveling, and optimizations in modern OSs to minimize write operations in order to be more SSD-friendly, the wear on any one chip is increasingly minimized.
Are SSDs (or at least some SSD products, like the SLC versions you mention) already being considered reliable alternatives to HDD storage within the server/enterprise world for mission-critical machines?
I can't speak for everyone, but I know where I work there are talks of starting to implement SSDs in production. Then again, we're also in the process of replacing all our tape backups with HDDs, so maybe we're not exactly industry-standard. I know there are at least several SSD manufacturers who cater exclusively to enterprise and military customers. pureSilicon (http://www.puresi.com/) is one, and OCZ's 'e' (http://www.ocztechnology.com/products/solid-state-drives/sata-ii/2-5--sata-ii/maximum-performance-enterprise-solid-state-drives/ocz-vertex-2-ex-series-sata-ii-2-5--ssd-.html) designation (http://www.ocztechnology.com/products/solid-state-drives/pci-express/z-drive-r2/slc-enterprise-series/ocz-z-drive-r2-e84-pci-express-ssd.html) drives (http://www.ocztechnology.com/products/solid-state-drives/pci-express/z-drive-r2/slc-enterprise-series/ocz-z-drive-r2-e88-pci-express-ssd.html) are specifically marketed toward enterprise customers (they even have an MLC SAS drive marketed to enterprise customers (http://www.ocztechnology.com/products/solid-state-drives/sas/ocz-vertex-2-pro-series-sas-6-0-2-5--ssd-.html)).

The pureSilicon drives have an advertised MTBF of "only" 2 million hours, while the enterprise OCZ drives advertise an MTBF of 10 million hours. Obviously neither of those can be empirically proven (even 'just' 2 million hours is over 228 years), but if they're willing to put their reputation on the line for that, and the drives last even 1/100th of that time (still over a decade for the OCZ drives), I'd be happy.
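For reference, the hours-to-years conversion behind those figures works out like this (trivial arithmetic, but it puts the claims in perspective):

# Advertised MTBF figures converted to years of continuous operation.
hours_per_year = 24 * 365            # 8,760 hours
for mtbf_hours in (2_000_000, 10_000_000):
    print(f"{mtbf_hours:,} hours is about {mtbf_hours / hours_per_year:,.0f} years")
# 2,000,000 hours  -> about 228 years
# 10,000,000 hours -> about 1,142 years (1/100th of that is still ~11 years)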
Konrad
09-04-2010, 04:20 PM
I'll do a little reading, but I think you've sold me. The next drive I purchase shall be SSD. That might be a little while since I'm floating around 4-6TB of unused HDD and a couple stacks of writable DVDs.