Im jw, when you buy an SSD, how do you mount it in a tool less 3.5" bay? do they come with adapters?
Most of them come with 3.5" adapters.
Thanks! I'm trying to get my primary drive as small as I can so it'll be SSD-friendly... I just haven't had time to migrate Steam to my data drive...
Actually, I've noticed that drives with adapters tend to be a bit more expensive than drives without. Personally, I'd skip the mounting hardware and use double-sided tape or Velcro to stick the drive someplace up and out of the way in the case.
I'm a fossil, still haven't really enjoyed the future and gone SSD (except for my laptop).
How hot do these suckers get? Do they benefit from some extra cooling?
I used to have mine mounted with velcro, and that worked fine even if it was a bit ugly. I much prefer the aluminum mounts I have now, but you won't find those in a store. ;)
They don't, on both counts. They also consume tiny amounts of power (0.2 W, IIRC, was the last figure I saw actually listed; I think that was an Intel X25-M).
I'm sure if you run your computer for long periods every day, the power savings might eventually pay off the drive... depending on what sort of deal you get on it and how you use it.
The main reason I've been holding out so far is that my better computers aren't performance-bottlenecked by vastly less expensive mechanical HDDs. (Their other hardware isn't that great.)
The other reason is that I'm not quite convinced SSDs are truly reliable in the long term (yet). Flash memory has a limited number of write/erase cycles before failure (especially at higher densities), as opposed to the usual mechanical/platter failures in HDDs. I've done the basic research, and what I've read suggests that the MTBF on SSDs is significantly lower than on roughly equivalent HDDs (direct comparisons are often difficult since SSD manufacturers usually don't advertise an MTBF spec, probably precisely because it compares so poorly). There are other considerations, one of them being the superior performance of SSDs... so far it just hasn't seemed worth the price difference to me.
Having said that, I've seen surprising speed and battery-life improvements on my laptop after an SSD upgrade. Perhaps not that surprising, really, since I've seen the same thing on other portable devices after upgrading their microdrives to flash storage. SSDs are still improving dramatically and getting cheaper every year, while HDDs have already settled deep into a plateau... next year I'll be able to get a new SSD that's four times as good, or buy this year's model at a quarter of the price.
No doubt SSD system/cache will be standard equipment in my next real computer.
Part of the reason SSD manufacturers don't advertise an MTBF figure is that they know chips will fail on occasion, and they build safeguards into the drive to compensate. I think I read somewhere that the first-gen Intel X25-M had something like 15% more storage chips than were actually accessible, held in reserve as spares for when chips die. That said, though they do have a write limit, for modern MLC chips (the less expensive ones most manufacturers use now) it's somewhere around 150,000-300,000 cycles, depending on the manufacturer. Keeping in mind that they have failover chips for when the more frequently used ones die, even if you rewrote your entire drive four times a day, it would take over 100 years for the whole drive to wear out. Granted, you'll probably hit some problems well before that, but still... most of the consumer HDDs I've used drop off significantly in performance and need to be replaced after a few years anyway.
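For what it's worth, the 100-year figure checks out as back-of-the-envelope math. A quick sketch, assuming the low end of the cycle range quoted above and perfect wear leveling (both assumptions, not manufacturer specs):

```python
# Back-of-the-envelope SSD endurance estimate.
# Assumptions: 150,000 write/erase cycles per cell (low end of the
# range quoted above), perfect wear leveling so every cell wears
# evenly, and the entire drive rewritten 4 times per day.
pe_cycles = 150_000
rewrites_per_day = 4

years_to_wear_out = pe_cycles / (rewrites_per_day * 365)
print(f"~{years_to_wear_out:.0f} years")
```

Real workloads rewrite only a fraction of the drive per day, and real wear leveling isn't perfect, so treat this as an order-of-magnitude estimate, not a warranty.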
I assume their integrated controllers employ wear-levelling methods of varying sophistication, like all of the better removable flash devices do.
I've seen product claims of over 2 million write cycles, which I find difficult to accept. I've hardly used SSDs, but I do have a fair understanding of the underlying memory technologies. Higher-density flash would probably be somewhere on the order of several hundred thousand cycles; lower-density flash would be a little slower and a lot bulkier. I imagine SSDs use some amount of buffered RAM to minimize incidental write cycles; maybe that's where the vendors get their "real world" figures from.
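For anyone curious what wear leveling actually buys you, here's a toy sketch (purely illustrative; real SSD controllers are far more sophisticated, and the class name and structure here are made up). The idea is just that the controller remaps logical blocks to whichever physical block has been erased the fewest times, so hammering one logical address spreads wear across the whole device:

```python
# Toy dynamic wear-leveling sketch (illustrative only; not how any
# real controller is implemented).
class ToyWearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks  # erases per physical block
        self.mapping = {}                     # logical block -> physical block

    def write(self, logical_block):
        # Release the previously mapped physical block, if any,
        # then direct the write to the least-worn free block.
        self.mapping.pop(logical_block, None)
        used = set(self.mapping.values())
        free = [b for b in range(len(self.erase_counts)) if b not in used]
        target = min(free, key=lambda b: self.erase_counts[b])
        self.erase_counts[target] += 1
        self.mapping[logical_block] = target

wl = ToyWearLeveler(num_blocks=8)
for _ in range(1000):
    wl.write(0)  # hammer a single logical block
print(wl.erase_counts)  # wear ends up spread evenly across all 8 blocks
```

Without remapping, block 0 would absorb all 1000 erases; with it, each physical block takes roughly 1000/8.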
A handful of my most ancient flash devices have already failed over the years.
A few became irrevocably unusable. Most began to lose blocks (like bad sectors on an HDD). Failed blocks either reduce total storage capacity or leave the device stuck as read-only media. I don't know whether these symptoms are an accurate predictor of today's SSD failures, because flash technology has changed dramatically over the years. I'd also expect SSDs to use higher-quality (higher-performance, longer-life) controller parts than the ones in cheap USB sticks and xSDHC cards (where the cheap controller dies before the flash does).
I still view SSDs as a good kick in performance, and necessary to get the most out of high-spec computers, but I wouldn't use them in mission-critical machinery.