
View Full Version : SSD controllers vs RAID 0 performance



Konrad
02-13-2015, 03:23 PM
I've been looking at maximizing Read performance for SSD RAID data drives. Write performance matters too, but Read is where it's at for me. Impressive Sequential Read would serve me well; compressible/incompressible data may or may not be a factor (dunno yet), and small-cluster stuff (4K Reads, etc.) won't have much impact on my application since the OS and apps will be running off a RAM drive and a separate SSD. Features like TRIM and being bootable are always nice to have, even if I won't actually make much use of them, but not worth paying huge premiums or sacrificing everything else.

I've been checking out products like the G.SKILL Phoenix Blade PCIe SSD (http://gskill.com/en/product/fm-pcx8g2r4-480g)*, Plextor M6e PCIe SSD (http://www.plextoramericas.com/index.php?option=com_content&view=category&layout=blog&id=76&Itemid=204), Asus/ROG RAIDR Express PCIe SSD (http://www.asus.com/ca-en/Storage_Optical_Drives/RAIDR_Express_PCIe_SSD/)**, OCZ RevoDrive 350 PCIe SSD (http://ocz.com/consumer/revodrive-350-pcie-ssd). Somewhat expensive, but not so bad when you consider that each is basically 2-4 SSDs embedded onto a PCIe card along with a dedicated RAID 0 controller (their serious enterprise-/server-grade big brothers are where things get really expensive). A pair or trio or quad of decent SSDs in RAID 0 can nearly approach PCIe SSD performances at perhaps a bit less cost, eating a little more power and some PCH/SATA ports instead of a PCIe card slot. I am mildly dismayed that one cannot (yet) buy cards which are basically just a PCIe-bus RAID controller you clip a few discrete 2.5" SSDs into, seems like a great idea to me, better than having banks of solder-mounted flash wear down over time until the entire device fails.

* I think this is the one I like best. Incidentally, I think it is actually a CoreRise "Comay" BladeDrive (http://www.corerise.com/en/product_show.php?id=95), equipped with G.SKILL-modified CoreRise firmware, beautiful SKHynix flash, and a festively decorative heatsink with a pretty bird. If only it had a black PCB, lol.
** I like this one, too. Slower performance ("only" two embedded SSDs) but packed with all the usual robustly overengineered (and overpriced) Asus/ROG niceties. I expect reliability and longevity should be about as good as anything in this price range can get.
(I do choose products based primarily on specs and performances, not looks. But I really don't mind when they look good. Besides, there must be a way to dye PCBs black, yes?)

I like to emphasize reliability, and I'm not sure how these things measure up. Few, if any, seem to implement much in the way of fault tolerance or any sort of emergency power failure/recovery mechanism (http://www.storagesearch.com/ssd-power-going-down.html) (yeah, I guess that's part of what's included in the price tag on the enterprise-level equivalents). And, since these are RAID 0 arrays, the statistical chance of cumulative drive failure/corruption and complete data loss (as low as it might be) does run significantly higher.

It seems that LSI/SandForce controllers are infamous for achieving their performance through on-the-fly data compression, meaning that throughput on incompressible data is substantially slower. They are also known to handle TRIM quite poorly, although apparently this has largely been addressed in recent firmware. I eagerly await upcoming products (yay! (http://hothardware.com/news/Kingston-HyperX-Predator-PCI-Express-SSD-Unveiled--With-LSI-Sandforce-SF3700-Flash-Controller)) which will use the latest & greatest SF3700 controllers.
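If I end up checking this myself, the crude way to see whether a controller leans on compression is to compare write throughput for all-zero data against random data. A minimal sketch (the file paths, sizes, and block size are just placeholder assumptions, and the fsync is there so the OS write cache doesn't flatter the numbers):

```python
# Rough sketch: compare write throughput for compressible vs incompressible data.
# Paths and sizes are placeholders; run it against the drive under test.
import os, time

BLOCK = 1024 * 1024          # 1 MiB per write
TOTAL = 256 * BLOCK          # 256 MiB per test file, small enough for a quick check

def write_test(path, block):
    start = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        for _ in range(TOTAL // len(block)):
            os.write(fd, block)
        os.fsync(fd)          # push data past the OS cache before stopping the clock
    finally:
        os.close(fd)
    return (TOTAL / (1024 * 1024)) / (time.perf_counter() - start)   # MB/s

compressible   = b"\x00" * BLOCK      # trivially compressible (all zeros)
incompressible = os.urandom(BLOCK)    # effectively incompressible (random bytes)

print("compressible  :", round(write_test("/mnt/testdrive/zeros.bin", compressible), 1), "MB/s")
print("incompressible:", round(write_test("/mnt/testdrive/random.bin", incompressible), 1), "MB/s")
```

A big gap between the two numbers would point at compression doing the heavy lifting.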

Controllers by Marvell, Samsung, and Intel seem largely interchangeable, at least when compared against their counterparts within each performance tier. The latest-generation Intel controller appears to promise the greatest long-term reliability, although it is invariably paired with Micron (IMFT) flash, which appears less robust than flash made by Hynix, Toshiba, or even Samsung. To me, the product warranty period is an excellent measure of expected long-term reliability.

And, of course, the performance of the flash/NVRAM components themselves, wedded with the fundamental limits of their controllers, complex design/engineering choices (http://www.storagesearch.com/ssd-symmetries.html), astrological conjunctions, and other almost-incalculable variables all contribute to each product's overall performance and reliability characteristics. Performance benchmarks tell all, and are easily obtained.

Longevity benchmarks for SSDs (or even just SSD controllers) are elusive. Very little data exists for newer products.

I expect that SSD-type devices must include some small amount of volatile RAM for caching purposes. It seems to me that having "lots" of cache could greatly reduce latencies and increase longevity (assuming the presence of a good power-fail protection circuit). RAM caches are simply ignored in SSD product specs, never talked about by SSD users ... am I overestimating their value?

And now another confusing option: NV flash "drives" built into DIMMs (http://www.storagesearch.com/hybrid-dimms.html). Damn, wtf, seriously?

(And, yeah, I realize my linkies all point to the same site. I have indeed read many others, this one just happens to present topic-related info in a cogent fashion which doesn't involve reading through disconnected forums and wikis.)

But, uh ... SSD RAID 0 ... ahem lol:

Got any advice? Recommendations? Good info I could look up?

Konrad
02-13-2015, 04:01 PM
Um, a little more ...

I understand RAID 0 "stripes" data across two (or more) drives to gain performance. I understand that a lot of this performance gain comes from the fact that once a drive receives data packets it takes a little bit of time to actually Read/Write the packet block (latencies from interfacing, addressing, reading/writing, error correction, running firmwares, even from the flash controller logic itself), so each drive in the striping array alternates being available for more orders while other drives in the array are still "busy" carrying out the ones they have. More drives in the array just add more multiples to overall performance gains (to a point, since increased logic/timing overheads create diminishing returns).
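To make the striping idea concrete, here's a minimal model of how a logical byte offset maps onto a member drive under plain round-robin striping. The stripe size and drive count are just illustrative assumptions; real controllers have their own layouts and smarts.

```python
# Minimal model of RAID 0 striping: logical bytes are split into fixed-size
# stripe units handed out round-robin across the member drives.
STRIPE_SIZE = 128 * 1024   # 128 KiB stripe unit (an assumed, common default)
NUM_DRIVES  = 2

def locate(logical_offset):
    """Return (drive_index, offset_on_that_drive) for a logical byte offset."""
    stripe_index = logical_offset // STRIPE_SIZE     # which stripe unit overall
    drive        = stripe_index % NUM_DRIVES         # round-robin drive choice
    local_stripe = stripe_index // NUM_DRIVES        # index of that unit on its drive
    return drive, local_stripe * STRIPE_SIZE + (logical_offset % STRIPE_SIZE)

# A large sequential read touches every drive in turn, which is where the speedup comes from:
for off in range(0, 512 * 1024, STRIPE_SIZE):
    print(f"logical {off // 1024:4d} KiB -> drive {locate(off)[0]}")
```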

I assume the OS/software just hands data off to the PCH/SATA controller firmware, which in turn hands it over to the RAID controller, where the which-data-lives-on-which-drive logic - the "physical" data location within the logical drive array - is actually worked out. The drives themselves each contain controllers and firmware and wear-leveling logic and other complexities which make it impractical (if not impossible) to have any control over just which data gets put exactly where. (Each SSD is actually composed of multiple memory chips and segments and controllers and already implements numerous parallel memory operations in a RAID-like manner anyhow. But this is embedded logic, hardwired into the architecture and essentially impossible to reconfigure.)

But it seems to me that situations may often exist where the data load becomes imbalanced through random-access asymmetric usage over time, where much more of some particular data ends up being served from one drive than from any other. Accessing such data cannot take full advantage of the RAID striping strategy; one drive may happen to work overtime while the other(s) sit idle, and total performance gains are then suboptimal. The Best Case Scenario (lulz) would be a perfect division of data load across every drive in the array, worst case would be the entire load landing on a single drive, and in practice it will likely fall somewhere in between (though perhaps closer to the best case extreme, given the many layers of "smart" logic involved).
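A toy simulation along these lines (using the same round-robin layout as the sketch above; every number is made up) suggests the imbalance mostly bites when the hot data is smaller than a single stripe unit:

```python
# Toy simulation: count per-drive hits for a skewed (hot-spot) read workload.
# Purely illustrative; real access patterns and layouts are far messier.
import random
from collections import Counter

STRIPE_SIZE = 128 * 1024
NUM_DRIVES  = 2

def drive_for(offset):
    return (offset // STRIPE_SIZE) % NUM_DRIVES

def simulate(hot_region_bytes, reads=100_000):
    hits = Counter()
    for _ in range(reads):
        # 90% of reads fall inside the hot region, 10% anywhere in a 100 GiB span
        if random.random() < 0.9:
            off = random.randrange(hot_region_bytes)
        else:
            off = random.randrange(100 * 1024**3)
        hits[drive_for(off)] += 1
    return hits

print("hot region = 64 KiB :", simulate(64 * 1024))        # fits in one stripe unit -> lopsided
print("hot region = 64 MiB :", simulate(64 * 1024**2))     # spans many stripes -> near 50/50
```

Once the hot region spans many stripe units, the round-robin layout spreads the hits almost evenly on its own, which is probably why nobody bothers exposing manual placement controls.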

But is there any way to gain finer control of this through software? Would even a simple defrag sort of "clean things up" a bit whenever a substantial (performance-impacting) imbalance exists?

I can see that an extra trip up and down all the hardware-firmware-software layers would probably reduce and even reverse total performance gains. But it may be worth the effort to Write "reference" data (data which is never/rarely rewritten) onto the array in such a way that one can then Read it from the drives (many, many times) at maximum possible speeds. I'm thinking webservers which just play and replay the same large blocks of data to endlessly demanding crowds.

Airbozo
02-13-2015, 08:59 PM
Look at the Intel 730 SSDs. These are the enterprise-grade Intel S3500s with overclocked controller and NAND, re-branded as the 730 series. I have two of the 240GB models striped and they still cost less than one 480GB model. I saw the performance numbers last year at the Intel Solution Summit, but they were strictly confidential. I might be allowed to post them now, so I will check. They are fast, and since they are based on the enterprise-class drives, supposedly reliable too. I have them in my new gaming rig and I can run benchmarks on them. Let me know what tests you want to see and I'll post them up. I also have an 800GB S3500 model I can compare.
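If you want something beyond the canned benchmark suites, even a crude sequential-read timing loop gives a ballpark figure. A rough sketch (path and block size are placeholders; the test file needs to be much larger than RAM, or caches dropped first, or the page cache will inflate the numbers):

```python
# Rough sequential-read throughput check, a crude stand-in for IOMeter-style tools.
import os, time

def seq_read_mb_s(path, block=1024 * 1024):
    size = os.path.getsize(path)          # not used for timing, just a sanity reference
    fd = os.open(path, os.O_RDONLY)
    start = time.perf_counter()
    read = 0
    try:
        while True:
            chunk = os.read(fd, block)    # stream through the whole file in 1 MiB chunks
            if not chunk:
                break
            read += len(chunk)
    finally:
        os.close(fd)
    return (read / (1024 * 1024)) / (time.perf_counter() - start)

print(round(seq_read_mb_s("/mnt/raid0/big_test_file.bin"), 1), "MB/s")
```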

My company is actually working with someone out of San Diego for us to provide a 48-bay 2U server with 48 800GB S3500 SSDs running across 3 dual-port RAID controllers. I am not allowed to publish numbers, but that drive sub-system is FAST! It's connected into the network using 2 teamed 40Gb NICs as well.

Looks like they published some of the numbers here:

https://www-ssl.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-730-series.html

x88x
02-14-2015, 02:05 PM
@Konrad:

So, outside of a few very small niche applications, I'm going to venture that you're not going to see any actual performance difference between those PCIe SSD cards and a plain SATA SSD RAID 0 setup.

Regarding the "multiple drives on one card", there are some cards (http://www.addonics.com/category/msata_adapter.php) that let you do that with mSATA drives. The future of that space really lies with M.2 drives, though, but they're still in the formative years.

As for reliability, I would separate that into three categories: 1) failures outside the host (ex, power/etc), 2) failures inside the host, and 3) actual drive failures.

For non-enterprise-class use-cases, I would say that category 1 can safely be addressed by a quality UPS and control software that shuts down the host cleanly in the case of power failure. If it's a more serious concern than that, the only thing you can really do is introduce battery-backed controllers or SSDs that actually have built-in protection for such things. The only SSD models that I know of that can actually survive a mid-write power failure without corrupting data are a few enterprise-class Intel models.

Category 2 is a bit fuzzier. Because you could conceivably have, say, a PSU failure inside the host, power loss remains a concern in critical applications even with a UPS out front.

Regarding category 3, the company I work for has probably the largest install base of SSDs in the world. Now, obviously I can't release any details, and the drives I would be buying for my personal use are not the same ones we use there, but...let's just say that knowing what I do, as long as you're looking at modern, quality, SSDs, I really don't worry about any of my personal SSDs failing anymore. Depending on your use-case, it may become something that you have to at least safeguard against, but suffice to say, SSDs across the industry have proven to be phenomenally more reliable than even enterprise-class HDDs.

Long story short, if this is for your personal use, I wouldn't worry about it. If this is for an enterprise-class use, it is probably something you should think about, but I would err on the side of addressing the issue on a higher level (application architecture) rather than on the hardware. Any time you are entirely reliant on the reliability of a single host for the integrity of your data in an enterprise-class environment, you are bound to have data loss at some point.

Konrad
02-15-2015, 05:06 PM
A pair (or more?) of Intel 730 (http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-730-series.html) or Samsung 850EVO (http://www.samsung.com/global/business/semiconductor/minisite/SSD/global/html/ssd850evo/overview.html) SSDs are indeed the other option I'm considering. They appear to be the best balance of performance and reliability available within the sub-kilobucks price range. Not so much into the silly skulls, but what can you do, eh? The 730s are consistently recommended by you enterprise-IT folks, even though critical specs for their new Intel 3G SSD controller are downright vague and proprietary. Not that it really matters, and I guess results speak for themselves, but I'm usually that difficult nerd who wants hard specs, who'd rather buy possibly-inferior-but-fully-documented (and perhaps fixable-moddable-hackable) tech than possibly-superior-but-secretive-and-tamperproof tech.

Agreed, longterm data reliability of SSDs now appears approximately equivalent to reliability of HDDs. Nothing lasts forever, "you are bound to have data loss at some point", and that's why smart people regularly run backups. I'll be happy to just get drives which offer killa performance and will last long enough (say, 3-5 years?) that I won't need to waste time and money on incremental replacements before it's time to install complete upgrades.

8 extra direct-to-CPU PCIe3.0 lanes after plugging in 16/16 SLI, and what else to do with them? A third GPU (in 16/16/8 SLI) would add very little to accelerated computing and not a lot to already-ridiculous fps, a dedicated PhysX card would cost the same and do even less (for what I do, anyhow). I also have 8 unused PCIe2.0 lanes, but alas, these link through the PCH which then links to the processor via DMI2.0 - bandwidth already heavily saturated by several other SATA3 (and perhaps a few USB3.0) devices. I am always looking at ways to kill bottlenecks in my particular applications ... which is, uh, actually quite a bit of everything lol. A PCIe-based high-performance storage solution appears worthy of consideration. Cooling complications (and workarounds or upgrades) may occur if I block airflow with too many stuffed mobo slots. My own damned fault for trying to consolidate much of what I now do across many computers onto just a single overpowered multi-purpose platform. But gotta make more room in my den/office/workshop/library/lab/hideout/mancave.
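For what it's worth, the back-of-envelope math behind my DMI worry, using the usual published per-lane figures (these are interface budgets, not measurements from my board, and the device count is just an example):

```python
# Back-of-envelope bandwidth math: CPU-attached PCIe 3.0 lanes vs the PCH's DMI 2.0 uplink.
pcie2_lane = 0.5      # GB/s per PCIe 2.0 lane (5 GT/s after 8b/10b encoding)
pcie3_lane = 0.985    # GB/s per PCIe 3.0 lane (8 GT/s after 128b/130b encoding)
sata3      = 0.6      # GB/s per SATA3 port (6 Gb/s after 8b/10b encoding)

dmi2     = 4 * pcie2_lane    # DMI 2.0 is electrically a PCIe 2.0 x4 link -> ~2 GB/s
pcie3_x8 = 8 * pcie3_lane    # the spare CPU-attached lanes -> ~7.9 GB/s

ssds_on_pch = 3              # e.g. three SATA3 SSDs hanging off the PCH (illustrative)
print(f"DMI 2.0 budget      : {dmi2:.1f} GB/s")
print(f"3x SATA3 worst case : {ssds_on_pch * sata3:.1f} GB/s  (plus USB3.0 traffic etc.)")
print(f"PCIe 3.0 x8 budget  : {pcie3_x8:.1f} GB/s")
```

Which is why a fast storage device on the CPU-attached slots looks so much more appealing than piling more onto the PCH.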

Those PCIe/mSATA controller cards look very much like a workable idea! They're even kinda cheap. Although the mSATA SSDs themselves look a little bit costly, at least for the Intel 730 or Samsung 850EVO. Overall cost and performance expectations are comparable to the PCIe SSD cards I've been looking at - with advantages of modularity and replaceable (or upgradeable) discrete drive components. Hell, even the controller card itself could be upgraded at low cost when something way better turns up. Now I just gotta figure out mSATA form-factors.

x88x
02-16-2015, 02:07 AM
Now I just gotta figure out mSATA form-factors.
There is one: mSATA (http://en.wikipedia.org/wiki/Serial_ATA#Mini-SATA_.28mSATA.29). Not much to figure out, really. It's SATA over a mPCIe physical connector (not electrically compatible), and I have only ever been able to find them in the one card size.

Where it gets complicated/exciting is if you step over to M.2/NGFF (http://en.wikipedia.org/wiki/M.2). There you could be looking at any of a dozen or so physical envelopes, any of three different connectors, and data communications over USB, SATA, PCIe x2, PCIe x4, or several other possible interfaces not applicable to storage devices. Annoying/confusing, no? It gets exciting, though, because as the platform is maturing, it is enabling very interesting things...like the Samsung SM951 (http://www.anandtech.com/show/8865/samsung-launches-sm951-m2-pcie-30-x4-ssd-for-oemssis), which talks PCIe 3.0 x4 and has claimed sequential read/write speeds of 2,150MBps and 1,550MBps respectively and claimed random read/write IOPS of 130K and 85K respectively. Not as significant a difference in random IOPS, but the sequential ops blow any SATA devices out of the water.

For more fun looking at the future of M.2, check out NVMe (http://en.wikipedia.org/wiki/NVM_Express).

Konrad
02-16-2015, 10:34 PM
I admittedly know little about mSATA and SATA Express and Thunderbolt, aside from what I've read on wikis, some half-enlightening reviews, and the promise of very exciting data transfer speeds. But otherwise I haven't yet paid it much attention.

My newest mobo happens to support each of these non-standard standards anyhow, although in ways which force some limitations on the rest of the PCIe/SATA/USB configuration. An obvious downside is that I would rely on a single high-speed storage device which might offer lower performance/reliability than an SSD RAID setup for higher cost. Not off the board entirely, though, new products with new capabilities are always being churned out.

These are likely the next generation of high-speed data devices, sure. But for now they're a bunch of evolving (and competing) de-facto standards being driven more by market response to popular products than by the careful foresight of clever engineering groups. This is one arena I haven't jumped into as an early adopter, mostly since traditional alternatives already meet all my needs without worry of being caught buying into a dead-end technology path.

And in truth, I have seen very few consumer devices able to actually saturate SATA3 or PCIe3.0 or USB3.0 bandwidths, most are barely able to even approach the maximum performance thresholds of the previous generation interfaces. Outside of niche and benchmark-optimized usages, anyhow.

NVMe has already caught my eye. But so far it's been a bit out of reach and a bit too unproven to grab my attention, the enthusiast desktop consumer platform is not really where the fun toys are at.

Konrad
02-17-2015, 01:54 AM
Let's carry things one step further into the realm of ridiculous niches.

Hardware RAM drive devices for cache/paging/swap/autosave/temp junk. lol, specifically as a hardware workaround for bad software's insistence on bloating out to occupy maximum possible cache space. I do realize that a good OS will run as much as it can first in physical RAM, only spilling into virtual RAM when physical RAM resources are insufficient. Windows likes to gobble as much drive space as it can, and do inconsequential little things in it with alarming frequency, and it gets decidedly annoying with throttled features and popup complaints when manually overridden away from Microsoft's automated "smart" defaults. I do more and more of my computing in Mint/Xfce, but there are still some things (like games) I just gotta do in Windows, and many of those things (like games) place high performance demands on system resources.

I make great use of RAM drive software; I let the stupid Windows OS keep 16GB (even 8GB should be more than enough, I would think) and allocate the other 16GB as, say, my "game folder". It's pointless trying to page virtual memory into a software RAM drive; I've learned that Windows gets cranky with the deception anyhow, and the entire logic of the idea is head-shakingly counterproductive. But Windows still greedily demands virtual RAM, all of it; it claims as many open hooks and handles as it can sink on the gamble that it'll stay available until graceful shutdown. I can let it cache onto an HDD with a slightly-noisy performance bottleneck, let it cache onto the boot SSD with suboptimal performance, or just let it cache onto an alternate SSD (as expected, I guess) while I swear at Microsoft's insistence on needlessly impacting performance and wear-thrashing my flash media's lifespan. You'd think they would learn from customers (mostly gamers, probably) complaining about this particular issue for over a decade, across several core OS releases. I feel assured that, outside of this artificial OS limitation, my system hardware is more than sufficient for the task.

It's stupid to have to buy hardware specifically to workaround an arbitrary limit built into software. But the benchmarks and reviews about hardware RAM drives have some surprisingly positive results, all based around the fact that volatile RAM has vastly lower access/read/write latencies, most especially when dealing with random non-sequential small-packet stuff (such as, say, multiqueue <4KB operations which NVRAM tends to handle quite poorly). I do understand what the hardware RAM drive can do, and how, and have none of the common misconceptions about it expanding raw memory capacities or being able to run more software more faster more everything. It's just a way to optimize certain operations, primarily caching/paging activities, which otherwise introduce small-but-varying degrees of performance loss.
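A crude way to put numbers on that latency gap is to time random 4KB reads from an in-memory buffer against the same reads from a file sitting on the drive. A rough sketch (the path and sizes are placeholders, and the test file has to exceed RAM, or caches be dropped, or the page cache will hide the drive's real latency):

```python
# Crude 4 KiB random-read latency comparison: RAM buffer vs a file on the drive under test.
import os, random, time

BLOCK, READS = 4096, 20_000

def avg_us(read_one):
    start = time.perf_counter()
    for _ in range(READS):
        read_one()
    return (time.perf_counter() - start) / READS * 1e6   # average microseconds per read

# RAM case: random 4 KiB slices out of a 256 MiB buffer
buf = bytearray(256 * 1024**2)
def read_ram():
    off = random.randrange(0, len(buf) - BLOCK)
    return bytes(buf[off:off + BLOCK])

# Drive case: pread() at random offsets within a large existing file
fd = os.open("/mnt/testdrive/big_test_file.bin", os.O_RDONLY)
size = os.fstat(fd).st_size - BLOCK
def read_disk():
    return os.pread(fd, BLOCK, random.randrange(0, size))

print(f"RAM   : ~{avg_us(read_ram):.1f} us per 4 KiB read")
print(f"Drive : ~{avg_us(read_disk):.1f} us per 4 KiB read")
os.close(fd)
```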

It's a niche market, and a poorly understood one outside of professional computer nerds. So there aren't all that many products of this kind out there - like the Gigabyte i-RAM (http://www.gigabyte.com/products/product-page.aspx?pid=2180#ov), ACARD ANS-9010 (http://www.acard.com/english/fb0101.jsp?&ino=28), and ALLONE Cloud RAMDisk Turbo Storage Drive 101 (http://www.all1.com.tw/download/Cloud%20Disk%20Drive%20101%20RAMDisk%20.pdf). I note that products of this type are all either obsolete (use old, slow interfaces) or are Big Ticket Items for Big Ticket Customers (and still actually use old, slow interfaces). How hard can it be to find, say, a PCIe3.0 card or SATA3 drive-bay box which basically mounts a few banks of DDR3 DIMMs and a discrete "drive" controller for less than $$$$-$$$$$? Not the best price-per-GB solution, most people would just advise buying more GB of physical RAM, but surely not impossible?

Note that the Big Server Crowd, which can afford and can make good use of such devices, is usually obsessed with issues like potential data loss within unpowered volatile storage. So these devices typically provide some sort of emergency backup/restore method or even real-time mirroring of RAM contents onto nonvolatile media. Me, I couldn't care less about having my cache/page/swap/temp junk flushed away; it would even save me the chore of cleaning/tuning it myself. Hell, I could even argue it enhances data security, were I a tinfoil-hat-wearing type of guy (which I'm not, mostly).

x88x
02-17-2015, 11:34 AM
My advice for dealing with virtual memory on your desktop? Just turn it off. No need to mess around with special niche hardware and whatnot, just turn the silly thing off.
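One sanity check worth doing first: confirm physical RAM actually covers your peak usage and the page file isn't really being touched. A rough sketch using the third-party psutil package (pip install psutil); the thresholds are arbitrary assumptions, and the actual toggle lives in Windows' virtual memory settings.

```python
# Quick sanity check before disabling the page file.
import psutil

vm = psutil.virtual_memory()
sm = psutil.swap_memory()   # on Windows this reports the page file

print(f"Physical RAM : {vm.total / 2**30:.1f} GiB total, {vm.percent}% in use")
print(f"Page file    : {sm.total / 2**30:.1f} GiB total, {sm.used / 2**30:.1f} GiB in use")

if vm.percent < 60 and sm.used < 1 * 2**30:
    print("Plenty of headroom - turning the page file off is unlikely to hurt.")
else:
    print("Page file is actually in use - disabling it may cause out-of-memory errors.")
```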

Airbozo
02-17-2015, 12:35 PM
NVMe is being pushed by Intel and several other major players (Hitachi, HGST, etc.) and will be THE performance king as far as I can tell. I attended an Intel conference in January where we got to compare some NVMe drives with the Intel S3500s. The NVMe drives were right about 10 times faster according to IOMeter. Real-world numbers might be different (depending on how the IOMeter tests were set up). They compete with the PCIe-based flash cards on performance and reliability. Expect more info and numbers soon. We've actually asked for one of the NVMe drives to test side by side with an HGST (Virident) flash card and other SSDs.

We will also try to run some comparisons on the IBM Flash SAN (or whatever the name is). My co-worker is actually pretty knowledgeable on the IBM Flash system since he sold and installed a ton of them before IBM bought TMS. Then he got hired away by IBM since he knew so much about the product. He came back last year because he couldn't stand working for IBM... I am slowly coming up to speed on the whole flash market as we sell and integrate them into customer centers and products.

Konrad
02-18-2015, 07:16 PM
lol, nobody I've met has professed enjoyment from working for IBM. The reasons given all seem to boil down to a Byzantine megacorporate hierarchy bloated with deadweight/busybody apparatchiks obsessed with preserving and following the mighty paper trails the institution started way back in the 19th century. I don't know all the details myself, and hardly even want to; I have no direct experience with IBM outside of some formative computer-science reading classics and a workstation/laptop consumer's recurring disappointment with their bureaucratically misbegotten OEM product lines.

IBM was once a world-leading technological powerhouse. Actually, more than once, it has roared and rampaged several times with unstoppable pioneering momentum. But now IBM seems to be a huge corporate dinosaur which (surprisingly) has defied being dragged to extinction by a world filled with smaller, faster, aggressive, far more efficient and adaptable packs of competitors. I suppose IBM simply survives by wallowing deep within lucrative glacial patent pools and lush jungles of legal-contractual exclusivity.

Airbozo
02-18-2015, 08:06 PM
Hehe, Wordy today?

To be honest, IBM has a knack for making things work - software and technology from different companies and industries. They offed their PC business and pissed off the feds and banks. Then they offed their low- to mid-range server divisions and pissed off the feds and banks. Then there was talk of offing their low-end storage division, and the feds and banks asked for contact information for their US-based competitors. They suddenly realized that they needed to keep hold of the high-end server and storage divisions or lose all their hardware customers and become a software-and-services-only company (which, to be honest, is where the real money is). When they bought Texas Memory Systems they made one of the best moves possible in the flash memory world. Only WD (or HGST, once China signs off on the deal) made a comparable move by buying Virident. The rest of the players in that space are having too many issues with inferior products (Violin Memory? or Fusion-io, which was bought by SanDisk) to make any real headway in the market.

IBM has been around for a very long time and will most likely be around for some time to come because they know how to hire/buy the right talent at the right time to get the work done. Maybe it's the size of the dinosaur that prevents its complete demise.

I've done work with them in the past and will do more in the future. My company has been targeted to be the western region top tier seller of the flash systems due to the co-worker mentioned above and they support us any way they can to close the deal even if we compete directly with them (which has happened twice already and we won).

I would however, never work for them. I like to be mostly in charge of my own destiny and that rarely happens at IBM.

Konrad
02-25-2015, 03:39 AM
Here's an interesting SSD feature (http://www.angelbird.com/en/news/uninterruptible-power-supply-in-solid-state-drives-49/) I found. A little google followup shows that many enterprise SSDs incorporate similar looking capacitors and there's a few patents (held by Samsung, mostly) with an eye towards making external capacitor/UPS cages for internal SSDs. Very cool.

My question about this - does the SSD have enough onboard intelligence to realize there's main power failure and all suspended/cached data needs to be written immediately, before backup power also fails? It would seem senseless to stuff a fat capacitor onboard only to delay inevitable corruption of data/firmware for a few seconds or minutes.

In fact, is there any external signal (aside from total power loss, of course) which can instruct an SSD to immediately halt, dump data, and shutdown? If so, I could wire up an inline SATA power/data module, not unlike a portable USB charge pack. Easy enough to construct something similar to these cute little Mini-Box micro-UPS (http://www.mini-box.com/OpenUPS?sc=8&category=1264) and Pico-PSU (http://www.mini-box.com/picoPSU-80) products (albeit, something simpler and more useful for this application). But again, this is all utterly useless unless some external way exists to send a shutdown signal to the drive logic.

Btw - I went with twin 512GB Samsung 850 Pro SSDs. Close call vs Intel 730 counterparts, but they were cheaper (on sale) and just can't beat the confidence behind a 10-year warranty. Also like the RAPID Magician DRAM caching/acceleration software, although now I constantly worry about sudden power loss (permanent damage to drive flash/firmware, not so much the time wasted restoring temporary data loss). And I never much liked that gamey skull emblem anyhow. (Gotta admit the brushed aluminum Angelfire drives look really stunning, too bad they perform worse and cost more than these Samsungs, I wouldna mind a pair if they were dirt cheap.)

x88x
02-25-2015, 09:51 PM
My question about this - does the SSD have enough onboard intelligence to realize there's main power failure and all suspended/cached data needs to be written immediately, before backup power also fails? It would seem senseless to stuff a fat capacitor onboard only to delay inevitable corruption of data/firmware for a few seconds or minutes.

I do not claim to be an expert, but I think it operates on similar principles to battery-backed HDD controllers, just extended to the entire drive rather than only the controller. That is, it keeps the lights on long enough to flush the write cache (which was going to happen anyway), and then it doesn't matter. Think of it like water flowing into a bucket, from which it is pumped into a holding tank. When power shuts off, the water stops coming into the bucket, but the pump keeps running; once the bucket is empty there's nothing to worry about. Keep in mind, we're talking about milliseconds here...
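Rough numbers to back up the "milliseconds" claim (every figure here is an illustrative assumption, not a spec for any particular drive):

```python
# Rough hold-up time estimate: how long must the capacitors keep the drive alive
# to flush whatever is dirty in its DRAM cache?
cache_mb        = 512     # assumed onboard DRAM cache size
dirty_fraction  = 0.10    # assume only a slice of the cache is unwritten data at any instant
flush_rate_mb_s = 400     # assumed sustained write rate to the NAND

dirty_mb   = cache_mb * dirty_fraction
hold_up_ms = dirty_mb / flush_rate_mb_s * 1000
print(f"~{dirty_mb:.0f} MB dirty -> ~{hold_up_ms:.0f} ms of capacitor time needed")
# Even flushing the entire 512 MB cache would only take ~1.3 s at this rate.
```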

Konrad
02-26-2015, 12:24 PM
Ah, my (uninformed) understanding is that SSDs tend to generally keep writes pending until there's enough data to complete flash blocks. Constantly writing partial blocks (mostly puny incidental temp garbage, at that) would substantially increase wear when multiplied over time. Meaning that at any given moment, there will be a whole bunch of unwritten data (suspended in multiple write caches) at risk of being lost or corrupted.
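Something like this toy model is what I have in mind (the page size and behaviour are purely illustrative guesses, not how any particular controller actually works):

```python
# Toy model of write coalescing: small writes accumulate in a RAM buffer and only
# get programmed to flash in full page-sized chunks. Whatever is still in the
# buffer when power dies is exactly what a power-loss capacitor exists to save.
PAGE_SIZE = 16 * 1024          # assumed NAND page size

class CoalescingCache:
    def __init__(self):
        self.buffer = bytearray()   # pending (dirty) data not yet on flash
        self.pages_programmed = 0   # count of actual flash program operations

    def write(self, data: bytes):
        self.buffer += data
        while len(self.buffer) >= PAGE_SIZE:          # flush only whole pages
            self.buffer = self.buffer[PAGE_SIZE:]
            self.pages_programmed += 1

    def at_risk_bytes(self):
        return len(self.buffer)    # what would be lost on a sudden power cut

cache = CoalescingCache()
for _ in range(1000):
    cache.write(b"x" * 512)        # lots of tiny 512 B writes (temp-file style junk)

print("pages programmed :", cache.pages_programmed)   # ~31 instead of 1000 small programs
print("bytes at risk    :", cache.at_risk_bytes())
```

A thousand tiny writes collapse into a few dozen page programs, which is the wear saving, but the buffered tail is always exposed until it's flushed.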

I suppose the capacitor idea is workable enough for preventing permanent damage to the drive itself, even if data is at risk. (And again, I'm not too concerned by data loss, restoring/reinstalling from a backup image is fast anyhows on SSDs, not much of a hassle unless it needs to be done with great frequency.) I don't really know SSDs, per se, but I do know about how nonvolatile memory technologies work - it's imperfect, it can (and eventually will) spontaneously fail along some complex statistical curve, but permanently wrecked blocks are most often caused by sudden power failure events which interrupt write/rewrite/erase operations - and this particular problem would be easily addressed by a capacitor/battery discharging a few precious milliseconds of emergency uptime.

I could find no mention of any such power failsafe in consumer level PCIe SSDs, sadly. Not even in the overengineered ROG RAIDR or the OEM-repurposed server-grade CoreRise/G.Skill Phoenix cards. I assume higher-grade (and higher-cost) enterprise gear addresses this problem, I would also assume such data-critical stuff always runs in the presence of emergency UPS batteries or other power redundancies.

x88x
02-26-2015, 02:17 PM
Ah, my (uninformed) understanding is that SSDs tend to generally keep writes pending until there's enough data to complete flash blocks.

Hmm, that may be, idk. Sounds like something that would be useful, at least.