225

For anyone who's serious about storage performance, SSDs are always the fastest solution. However, WD still makes their 10,000 RPM VelociRaptor hard drives, and a few enthusiasts even use enterprise-grade 15,000 RPM SAS hard drives.

Aside from cost, is there still a reason to choose a 10,000 RPM (or faster) hard drive over an SSD?

Answers should reflect specific expertise, not mere opinion, and I'm not asking for a hardware recommendation.

bwDraco
  • 45,747
  • 43
  • 165
  • 205
  • 4
    Even cheapo desktop motherboards support multi-tier storage, using an SSD to cache one or more spinning disks. Random reads should be better on a 10K HDD than on an SSD-cached 7200 RPM HDD, since random reads will generally miss the cache a lot. Besides that, I can't think of any other reasons. – Mark K Cowan Nov 02 '14 at 18:47
  • 8
    Not all workloads are random; think about a CCTV setup where the 20 streams are written such that C1 is on B1, B21, B41, etc., hence no random access in normal usage. – Ian Ringrose Nov 03 '14 at 19:47
  • 2
    @IanRingrose has a point. You can build a very large RAID array (ton of up-to-6TB 3.5" drives) with lots of streaming I/O capacity out of HDDs, like a http://aws.amazon.com/ec2/instance-types/#HS1 -- some applications like analytics databases (think Amazon Redshift) or genomic sequencing do a ton of I/O and need a ton of space but it's all streaming, and a big spinning-disk array is perfect. (With enough drives, 10K is still unnecessary, though: 100MB/s/"regular" drive * lots of drives will still max out the I/O interface, or you'll hit other bottlenecks.) – twotwotwo Nov 05 '14 at 05:28
  • 2
    Another way of spinning (ha) this: for your desktop, the price of a 256GB SSD is a fraction of the whole system's cost and the performance difference is huge; for a 48TB RAID array for an analytics database, the cost difference is bigger and there's less performance difference because it's mostly sequential access. Again, though, I'm really talking about whether regular HDDs (7.2K RPM) still have a niche in high-performance applications at all, not whether 10K RPM VelociRaptors are a good deal. For your desktop, I'd say def. not. – twotwotwo Nov 05 '14 at 17:16
  • 1
    Can't add this as an answer, so I'll just say that there's an article on The Register - "Why solid-state disks are winning the argument" (http://www.theregister.co.uk/2014/11/07/storage_ssds/) that covers the issues and (ignoring costs) finishes by saying "so long as you follow the instructions on the tin when selecting the right SSD for the job, there is absolutely no reason not to buy one." Of course, there's quite a discussion in the comments about some of the issues that may not have been addressed, but I felt it worth mentioning here. – Gwyn Evans Nov 08 '14 at 22:43
  • @DragonLord : What about a 30KRPM drive *(yes they do exist)*??? – user2284570 Feb 12 '15 at 12:41

9 Answers

180

[Image: a WD VelociRaptor - a 2.5-inch drive mounted inside a 3.5-inch heatsink frame]

This is a VelociRaptor. As you may notice, it's a 1 TB, 2.5-inch drive inside a massive heatsink meant to cool it down. In essence, it's an 'overclocked' 2.5-inch drive, and you end up with the worst of all worlds: in many cases it's not as fast at random reads/writes as an SSD, and it doesn't match the storage density of a 3.5-inch drive (which goes up to 3-4 TB on consumer drives, with 6 TB and larger enterprise drives available).

An SSD would run cooler, have better random access speeds, and probably have better performance overall, especially since the equivalent SSD, while costlier, is likely to be a higher-end one, and SSDs generally get faster as they get bigger.

A normal HDD would also run cooler, have better storage density (the same 1 TB fits easily into a 2.5-inch slot), and cost less per gigabyte. You also have the option of running several of them as a RAID array to make up for the performance deficiencies.

The comments also indicate that these hard drives are loud in general - SSDs have no moving parts (so they are silent in normal operation), and my 7200 RPM drives seem quiet enough. It's something worth considering when building a system for personal use.

Taking all this into account - with a sensible planned upgrade path, and with endurance tests demolishing the myth that SSDs die early - I wouldn't think so. The thinking enthusiast would use an SSD for boot, OS and software, and a regular spinning hard drive for bulk storage, rather than picking something that tries to do everything but doesn't do any of it quite as well, or as cheaply.

As an aside, in many cases, 10K RPM enterprise drives are getting replaced by SSDs, especially for things like databases.

Journeyman Geek
  • 127,463
  • 52
  • 260
  • 430
  • 6
    Thank you for posting the endurance test link. I am so tired of everyone being afraid to use a SSD for fear it will wear out. Now I can point them to that. – Keltari Nov 01 '14 at 05:15
  • 8
    That's a pretty big reason people sometimes go for a SSD over a HDD. Then again, all storage dies eventually, and if it matters to you, you ought to back it up. To me the big deciding factors *ought* to be price/GB and storage density, and these drives kinda suck on either count. – Journeyman Geek Nov 01 '14 at 05:19
    @Keltari all storage has a tendency to die unexpectedly. The *only* way SSD wear-out is relevant is when you do cost-of-ownership calculations, i.e., when you plan to replace HDDs every x months at x'$/year and SSDs every y months at y'/year. – Peteris Nov 01 '14 at 10:07
  • 4
    Well, I disagree. I have a 600 GB VelociRaptor and never regretted buying it. It’s not really loud and it’s not really that hot. The heatsink is only there to ensure proper operation in builds that lack ventilation. There’s nothing “overclocked” to it, most 10K HDDs are 2.5″. It’s also available without the heatsink, by the way. – Daniel B Nov 01 '14 at 18:10
  • 62
    @PeterHorvath the answer specifically states `cost per mb/gb would be lower` with a hard disk, and an SSD `while costlier`... the answer clearly addresses the fact that hard drives are cheaper per megabyte than SSDs. I don't think anyone in the IT sector at the time this question was asked would debate that. The final nail in the coffin is the question itself: `Aside from cost, is there still a reason...` –  Nov 02 '14 at 04:48
  • 1
    Another link you may want to add alongside the SSD endurance test one: [Are SSD drives as reliable as mechanical drives (2013)? on ServerFault](http://serverfault.com/q/507521/58408). – user Nov 02 '14 at 09:16
  • 2
    This answer is ignorant because it totally ignores the fact that this Velociraptor is available in a 2.5" form factor. Yes, you will need to look up the part number at the WD site and find an enterprise reseller, but that is not an excuse for this ignorance. SSDs are better - but I have a lot of real 2.5" Raptors, even with a 5 year warranty. – TomTom Nov 02 '14 at 18:08
  • Even in that case the *best* you get is the same storage density as a regular 2.5 inch disk with the need for better cooling than the average home desktop. Besides, I wrote this primarily based off enthusiast reviews - otherwise, I'd need to take into account *proper* 15K rpm and/or nearline drives as well. Unfortunately I don't have much experience with these. – Journeyman Geek Nov 03 '14 at 02:35
  • Part of my problem is physical capacity. Quantity of storage per rack and controller is a factor. (OK, so partly that's cost too, but ...) – Sobrique Nov 03 '14 at 15:05
  • 1
    The cost ratio of ssd to hdd with 10k rpm appears to be about 2.5:1. That really isn't terribly high, considering the benefits afforded to us by using ssd. – corsiKa Nov 03 '14 at 16:03
  • Don't forget about hybrids. A hybrid drive, e.g. [this Seagate one](http://www.amazon.com/Seagate-Solid-Hybrid-2-5-Inch-ST1000LM014/dp/B00B99JUBQ) could be a good choice on a budget (it can outperform the 10K RPM drives in many general-use situations, and provides 1TB for $85). – Jason C Nov 03 '14 at 23:00
  • I used to think these things were awesome, they break down pretty bad, performance goes out the window, wear and tear etc. Also, they oxidize really easily. My house wasn't even damp, very clean/dry and yet somehow the very little amount of moisture in the air managed to break down the casing pretty badly. Just FYI –  Nov 05 '14 at 05:03
  • TechReport also has a SSD endurance test running. Here is the latest update: http://techreport.com/review/27062/the-ssd-endurance-experiment-only-two-remain-after-1-5pb – Brad Patton Nov 05 '14 at 20:03
  • 1
    OS for boot? Most useless use for a SSD. I do boot from one, however, SSDs shine when running a huge Outlook account, a MySQL database or compiling hundreds of thousands of files of code. SSDs don't add much or anything for badly optimized game level loading, OS boots and many other scenarios (20 seconds faster PER DAY, useless). – oxygen Nov 05 '14 at 20:14
  • The Raptor disks are also really really noisy. Especially when it spins up after a rest. – papirtiger Nov 06 '14 at 02:34
  • 5
    I'm confused by the structure of this answer. "This is a velociraptor" does not answer the question directly, and neither do the next three paragraphs. It needs a TL;DR at the top. – Eldritch Conundrum Nov 06 '14 at 12:17
  • I'm not sure what's the issue with that. The *image* of the velociraptor provides context for when I talk about storage density and heat. – Journeyman Geek Nov 07 '14 at 03:38
  • 2
    Except that the heatsink is useless. It is done to make the Raptor look cool and fit in a 3.5" slot. The 2.5" Raptor has no heatsink and does not get hotter ;) You make a point about the "massive heatsink" which is utter marketing crap. – TomTom Jan 14 '15 at 16:34
    As an aside, whilst SSDs are faster, I have never had cause to replace a VelociRaptor - and some of these have been installed for 7 years; they have effectively lasted the life of their hosts, and yes, they chatter. (My kids have them in their PCs too, as I needed to do something with the "spares".) My biggest problem with SSDs is short production life, so there is little chance of a like-for-like replacement. (Just try and buy a PATA example sometime.) – mckenzm Jul 22 '15 at 16:27
  • SSDs are a lot cheaper now, you can pick up a 1TB SSD for $120-150 USD, compared to the $250 USD 1TB Velociraptor. – rgajrawala Feb 26 '19 at 19:41
74

I'm not sure these justify picking a hard drive over a NAND-Flash SSD, but they are certainly areas where a 10,000 RPM hard drive offers benefits over one.

  1. Write amplification. Hard drives can directly overwrite a sector, but NAND-Flash SSDs cannot overwrite a page: the entire block must be erased before the page can be re-used, and any data in the block's other pages must first be moved to a different block.

    A common block size is 512 KiB and a common page size is 4 KiB, so if you write 4 KiB of data and that write lands in a used block, at least 508 KiB of extra writes have to occur first; that's an inflation factor of 127x. An SSD might write 2x or 3x as fast as a 10,000 RPM hard drive, but in the worst case it can also end up writing 127x more data than you asked for (there is a small sketch of this arithmetic after the list). If you are using your drive for small files, write amplification will hurt you in the long run.

    Due to the nature of flash memory's operation, data cannot be directly overwritten as it can in a hard disk drive.

    (Source: http://en.wikipedia.org/wiki/Write_amplification)

    Typical block sizes include:

    • 32 pages of 512+16 bytes each for a block size of 16 KiB
    • 64 pages of 2,048+64 bytes each for a block size of 128 KiB
    • 64 pages of 4,096+128 bytes each for a block size of 256 KiB
    • 128 pages of 4,096+128 bytes each for a block size of 512 KiB

    (Source: http://en.wikipedia.org/wiki/Flash_memory)

  2. Long-Term Storage. Magnetic storage media often retain data longer when unpowered, so hard drives are better for long-term archiving than NAND-Flash SSDs.

    When stored offline (un-powered in shelf) in long term, the magnetic medium of HDD retains data significantly longer than flash memory used in SSDs.

    (Source: http://en.wikipedia.org/wiki/Solid-state_drive)

  3. Limited lifespan. A hard drive can be re-written until it breaks from mechanical wear and tear, but a NAND-Flash SSD can only reuse each of its pages a certain number of times. The number varies, but let's say it's 5,000 cycles: if you rewrite a page once per day, it will take over 13 years to wear it out. That is on par with a hard drive's lifespan, but only if you ignore write amplification - once that budget is being halved or quartered, it suddenly doesn't seem so big (see the sketch after the list).

    MLC NAND flash is typically rated at about 5–10 k cycles for medium-capacity applications (Samsung K9G8G08U0M) and 1–3 k cycles for high-capacity applications

    (Source: http://en.wikipedia.org/wiki/Flash_memory)

  4. Power Failure. NAND-Flash drives don't do well with power-failures.

    Bit corruption hit three devices; three had shorn writes; eight had serializability errors; one device lost one third of its data; and one SSD bricked.

    (Source: http://www.zdnet.com/how-ssd-power-faults-scramble-your-data-7000011979/)

  5. Read Limits. You can only read data from a cell a certain number of times between erases before other cells in that block have their data damaged. To avoid this, the drive will automatically move data if the read threshold is reached. However, this contributes to write amplification. This likely won't be a problem for most home users because the read limit is very high, but for hosting websites that get high traffic it could have an impact.

    If reading continually from one cell, that cell will not fail but rather one of the surrounding cells on a subsequent read. To avoid the read disturb problem the flash controller will typically count the total number of reads to a block since the last erase

    (Source: http://en.wikipedia.org/wiki/Flash_memory)
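
To make points 1 and 3 concrete, here is a minimal back-of-the-envelope sketch (hypothetical Python; the block/page sizes, cycle count and worst-case assumption are the figures quoted above - real controllers with wear levelling and over-provisioning do much better):

    # Worst-case write amplification and page wear, using the figures quoted above.
    # Illustrative assumptions only - not measurements of any particular drive.
    BLOCK_KIB = 512                 # erase-block size
    PAGE_KIB = 4                    # page size

    def worst_case_amplification(write_kib=PAGE_KIB):
        # Writing one page into a full block forces the rest of the block
        # to be relocated before the erase (508 KiB of extra writes here).
        extra_kib = BLOCK_KIB - write_kib
        return (write_kib + extra_kib) / write_kib      # ~128x total writes

    def years_until_worn(cycles=5000, rewrites_per_day=1, amplification=1.0):
        # Point 3: how long a page lasts at a given rewrite rate.
        return cycles / (rewrites_per_day * amplification) / 365

    print(worst_case_amplification())               # 128.0
    print(years_until_worn())                       # ~13.7 years at 1 rewrite/day
    print(years_until_worn(amplification=4))        # ~3.4 years if wear is quartered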

Oliver Salzburg
  • 86,445
  • 63
  • 260
  • 306
Robin Hood
  • 3,411
  • 2
  • 19
  • 37
  • 4) Power-failure: should we assume a small UPS? – smci Nov 01 '14 at 23:14
  • @smci or back up your data. – Robin Hood Nov 02 '14 at 06:11
  • 1
    Unfortunately, a UPS for any decent gaming desktop PC would need to be a line-interactive or double-conversion unit with pure sine-wave output. These run anywhere from $300 to $750 or more; exceptionally high-powered systems may require a 20-amp socket. – bwDraco Nov 02 '14 at 06:19
  • 9
    @DragonLord A "decent gaming desktop PC" can easily cost $1500 or more by the time you add up all the hardware within the computer itself. Probably more if you add the external peripherals. Even a cheap UPS is likely to prolong the life of that equipment (because of mains filtering) *and* it'll save you when the inevitable power problem hits. It doesn't need to be able to keep the fully-powered system running for long; 3-4 minutes is plenty long enough in most cases to automatically execute a safe, orderly system shutdown if the power goes out. Seems an appropriate tradeoff either way to me. – user Nov 02 '14 at 09:42
  • 3
    @DragonLord Why would a gaming desktop, powered by a switch-mode power supply, require a "sine-wave" input? – AndrejaKo Nov 02 '14 at 12:20
  • 1
    @AndrejaKo - Some active PFC systems apparently don't play nice with modified sine. For example, [some Seasonic supplies](http://www.xbitlabs.com/articles/cases/display/seasonic-g360-g550-gold_3.html) won't successfully switch to battery on a modified sine UPS when they're under high load. And I believe modified sine is generally inadvisable in countries that use 240V. – Compro01 Nov 02 '14 at 14:45
  • It seems to me that a device that is sensitive to power failures ought to have a built in "UPS" (think laptop's battery). You could then design it so that an external power failure immediately disables all non-essential circuitry (graphics etc) and use the capacity of the battery to execute appropriate safe shutdown. Such a solution should cost much less than $300... should at least be an option for a high end system. – Floris Nov 03 '14 at 14:58
  • @Floris Batteries degrade with time. It is far from trivial to determine what is "essential circuitry" in a general case. Some systems require a prolonged shutdown procedure. Batteries can develop problems (think Dreamliner for an extreme example; the same thing occasionally happens e.g. with cellphone batteries). And so on. – user Nov 03 '14 at 15:09
  • @MichaelKjörling - maybe it wasn't clear that I made my comment in response to the discussion on using UPS. It is precisely because it is far from trivial to determine what is "essential circuitry" that this would best be done by the manufacturer of the computer. And the issue of battery life is equally valid for a UPS (which is, after all, a battery powered device no?). Your point about vulnerability of LiPo batteries in particular (engineered for lightness rather than longevity) is well taken. But for a desktop system an extra 100 grams will not be a deal killer. – Floris Nov 03 '14 at 15:12
  • 1
    @DragonLord I heartily disagree with your cost estimate for a gaming PC UPS. Having built gaming PCs for many years, I find a $100 UPS is more than sufficient to buy me time to shut down. Additionally, the few minutes of capacity I have with it is enough to obviate any issues whatsoever from shorter outages/brownouts/spikes. – Doktor J Nov 06 '14 at 18:18
  • 3
    @AndrejaKo, I guess Seasonic makes bad power supplies and one should avoid that brand. I've never seen any trouble from a modified sine wave line interactive ups. – psusi Nov 10 '14 at 16:48
    I think write amplification is overstated a lot in this answer; there are algorithms that do a lot better for real-world workloads than this worst-case scenario. You can easily guarantee that, regardless of workload, the write overhead is the inverse of the space overhead by simply going round robin; with an SSD that has a factor of 10% = 0.1 more space than it advertises to the OS, that means you get at most, regardless of workload, a write amplification factor of 1/0.1 = 10. – G. Bach Mar 03 '17 at 12:53
  • I think Long-Term Storage is one point - in fact the major point (perhaps the only one) - for choosing a 10k HDD over an SSD now. – Fabiano Tarlao May 06 '19 at 13:51
23

Tons of bad answers here from people who obviously only know low-end SSDs.

There is one reason - price - and mostly only if you do not need the performance. Once you need the IOPS budget an SSD (even in a RAID 5) gives you, nothing else matters.

10K SAS/SATA drive: around 350 IOPS. SSD: the ones I use - last year's model, enterprise grade - around 35,000 IOPS.

Go figure - either I need the speed or I do not. If I do not, large disks beat everything: cheap, good. If I need the speed, SSDs rule (and yes, SAS has advantages, but seriously, you can get enterprise SATA disks as easily as "look up the part number and call a distributor").
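
To put those two figures side by side, a minimal sketch (hypothetical Python; the IOPS numbers are the rough ones quoted above, not benchmarks):

    # Time to service a burst of small random reads at the quoted IOPS figures.
    RANDOM_READS = 100_000        # e.g. a busy database or a VM boot storm
    HDD_10K_IOPS = 350            # rough figure for a 10K SAS/SATA drive
    SSD_IOPS = 35_000             # rough figure for an enterprise SSD

    print(f"10K HDD: ~{RANDOM_READS / HDD_10K_IOPS:.0f} s")   # ~286 s
    print(f"SSD:     ~{RANDOM_READS / SSD_IOPS:.0f} s")       # ~3 s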

Now, endurance. The SSDs I use are "mid quality": 960 GB Samsung 843Ts reconfigured to 750 GB. The Samsung warranty covers 5 full drive writes per day over 5 years - that is 3,750 GB written every day before the warranty runs out. Higher-end models are good for 15-25 complete writes per day.

We moved our in-house virtualization platform from VelociRaptors (yes, you can get them in a real 2.5" configuration if you are smart enough to look up a part number and call a distributor) to a RAID 50 of SSDs, and while the cost was "significantly higher", the performance went from 60 MB/sec to 650 MB/sec. I see zero latency increase under normal load, even during backups. Endurance? Again, my warranty is quite clear on that ;)

TomTom
  • 1,257
  • 7
  • 9
  • 1
    *reconfigured toi* Is there a typo? – A.L Nov 15 '14 at 20:31
  • I like your answer `either I need the speed, or I do not.` But I don't understand how the writes per day relates to the write amplification referenced by Robin Hood. Taking the 127x write magnification and applying it to the "writes per day" spec, drops the 3500GB per day down to about 30GB writes per day, doesn't it? Even the high-end drives (25 writes per day) gives you about 150GB per day. Obviously, that is plenty for many uses, but my impression is that SSD enthusiasts are not comparing apples to apples. Or perhaps I am misunderstanding and someone can explain how those relate to me. – GlennFromIowa Nov 21 '14 at 20:37
  • 1
    No. See, in my particular case I have a 1 GB write cache on the RAID controller AND... this particular SSD has a 1 GB internal write cache again. Both caches are protected by capacitors - so a power failure results in a clean write all the way down. No write amplification. On top of that, the particular use case makes bulky writes, so no write amplification at all. That is mostly something for regular desktops with non-caching SSDs, and those are normally end-user SSDs. Anything enterprise has used capacitor-backed caches for quite some time now. – TomTom Nov 22 '14 at 08:15
  • 1
    Could you add references where one can read up on the capacitor protection for buffers and caches? – G. Bach Mar 03 '17 at 13:00
20

Aside from cost, is there still a reason to choose a 10K RPM (or faster) hard drive over an SSD?

Isn't it obvious? Capacity. SSDs simply can't compete on capacity. If you care that much more about performance than capacity and want a single-disk solution, an SSD is for you. If you prefer more capacity, you can go with a RAID array of HDDs to get plenty of capacity and make up a good portion of the performance gap.
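
A rough sketch of that trade-off (hypothetical Python; the per-drive figures are ballpark assumptions, not measurements):

    # Ballpark: a striped array of 7200 RPM HDDs vs a single SATA SSD.
    HDD_SEQ_MBPS, HDD_IOPS, HDD_TB = 150, 100, 1.0
    SSD_SEQ_MBPS, SSD_IOPS, SSD_TB = 500, 50_000, 0.256

    def striped_array(n_drives):
        # Ideal striping: capacity and throughput scale with drive count.
        return n_drives * HDD_TB, n_drives * HDD_SEQ_MBPS, n_drives * HDD_IOPS

    tb, mbps, iops = striped_array(4)
    print(f"4x HDD array: {tb} TB, ~{mbps} MB/s sequential, ~{iops} IOPS")
    print(f"Single SSD:   {SSD_TB} TB, ~{SSD_SEQ_MBPS} MB/s sequential, ~{SSD_IOPS} IOPS")
    # The sequential gap closes quickly with drive count; the random IOPS gap does not.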

psusi
  • 7,837
  • 21
  • 25
  • Though in all honesty, by the time you are making up the performance gap between SSDs and HDDs by using HDDs, you are pretty close to closing the price gap between them per gigabyte of available storage. And the ugly truth is that while mirroring (RAID 1) can be great for improving the performance of read-intensive workloads, you *still* only get a single drive's worth of performance out of them for *write*-intensive workloads. – user Nov 02 '14 at 09:47
  • 3
    @MichaelKjörling, I don't know.. last xmas I picked up 3 1 TB WD blue ( 7200 rpm ) drives for $50 each and put them in mix of raid10 for the OS ( better random read ) and raid5 for media ( better capacity and sequential write ). About the same price as an SSD only 10+ times more capacity, and at least sequential throughput is in the same range as an SSD at 560 MB/s... and of course, it's redundant so if a drive fails I'm ok. An SSD is still going to have better totally random performance, but in practice, you never do 100% random IO so under real world loads it is pretty close. – psusi Nov 02 '14 at 14:10
  • Depends on what your "real world loads" are about. IOPS *is* a factor (and a very important one) especially the minute you start thinking about multi-user access. For a single-user system, agreed, not as much, but can still make a noticable difference in certain workloads. A 7200 rpm drive can handle on the order of 100 IOPS. A slow SSD might give you 1,000-10,000 IOPS, a fast one upwards of 100,000. It isn't hard to get high sequential throughput with HDDs, but very few workloads are purely sequential in nature; most are more like randomly distributed, small-size sequential I/O. – user Nov 02 '14 at 14:18
  • @psusi The only real world uses where a Raid 5 array is anywhere close to a SSD is purely sequential reads/writes. Which for normal users is pretty much only streaming media and similar things. Sure for those things nobody would use SSDs, but if you want to compare how reactive an OS is, how it handles concurrent accesses, gaming, Photoshop, starting programs,.. 3 1 TB WD blues are not even in the same league as a single cheap SSD. – Voo Nov 02 '14 at 21:13
  • 3
    @MichaelKjörling, since this is superuser and not serverfault, it is assumed we're talking desktops here. IOPS is purely a database server thing where it is assumed that you have a large data set being queried that will generate a lot of small random IO. Desktop workloads don't ever get *that* random or small. – psusi Nov 03 '14 at 00:02
  • @Voo, gaming tends not to do much IO at all and photoshop is going to read a large photo sequentially. Starting applications doesn't generate fully sequential IO, but it isn't fully small random IO either. Also I said the raid10 was for the OS since it handles the random reads better than raid5. – psusi Nov 03 '14 at 01:03
19

Speaking as a Storage Engineer, we've been deploying flash across the environment. The reasons we aren't doing so faster are:

  • cost. It remains eye-wateringly expensive (especially for 'enterprise grade') - it may not look like much on a 'per server' basis, but it adds up to shockingly large numbers when you're talking multiple petabytes.

  • density. It's related to cost - data centre space costs money and you need additional RAID controllers and supporting infrastructure. SSDs are only just starting to catch up with the larger size spinning platters. (And there's a price differential there too).

If you could ignore cost entirely, then we'd be all SSD. (Or 'EFD' as some vendors prefer to rebadge them, to differentiate 'enterprise' from 'consumer').

One of the biggest problems most 'enterprises' have is that, pretty fundamentally, terabytes are cheap but IOPS are expensive. SSDs give a good price-per-IOPS, which makes them attractive - provided your storage provisioning model includes some thought as to IO requirements.

Sobrique
  • 446
  • 2
  • 8
6

Enterprise SAS disks have their place in the enterprise. You buy them for reliability and speed. Some SAS drives also support the SATA interface while others are SAS-only. The main difference is the rate of UREs, or Unrecoverable Read Errors. Normal consumer drives are usually rated at 1 URE in 10^14 bits read; enterprise SATA and SAS+SATA drives at 1 in 10^15; and pure SAS drives, the real enterprise drives, at 1 in 10^16. So there certainly is a place for enterprise disks in the world. They are just really expensive.

SSDs are vulnerable to the same kind of unrecoverable errors, but it's not as easy to know when or how they will happen, since the makers don't publish the rate of occurrence for many devices - though some SSD controller makers, like SandForce, do claim stellar numbers [1]. There are also enterprise SAS-based SSDs which are rated at 1 URE in 10^17 or 10^18 bits.
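
Those ratings matter most when you have to read an entire large drive in one pass (a RAID rebuild, for example). A minimal sketch of the arithmetic (hypothetical Python, assuming independent errors, which is a simplification):

    import math

    # Probability of hitting at least one unrecoverable read error (URE)
    # while reading a whole drive once, for the rates quoted above.
    def p_any_ure(capacity_tb, ure_rate_bits):
        bits_read = capacity_tb * 1e12 * 8
        return 1 - math.exp(-bits_read / ure_rate_bits)   # Poisson approximation

    print(f"3 TB consumer drive,  1 in 10^14: {p_any_ure(3, 1e14):.0%}")   # ~21%
    print(f"3 TB enterprise SATA, 1 in 10^15: {p_any_ure(3, 1e15):.1%}")   # ~2.4%
    print(f"3 TB enterprise SAS,  1 in 10^16: {p_any_ure(3, 1e16):.2%}")   # ~0.24%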

Right now, for the money, I don't think there's any reason to go for a Raptor drive. The main selling point of the product was the lower cost for larger storage space combined with higher seek speed, but as 1 TB SSDs get cheaper and cheaper, these products will likely not be around much longer. I can only find it under the workstation section of the Western Digital site. 1 TB of storage for $240 is still much cheaper than a 1 TB SSD - there's your answer.

[1] http://www.zdnet.com/blog/storage/how-ssds-can-hose-your-data/1423

Biff
  • 195
  • 4
  • I'm increasingly frowning at people who suggest SATA for enterprise usage. 3TB SATA drives may LOOK like a good option - especially when you RAID-6 for resilience - but they have a truly awful IOP-per-TB ratio. We've ended up with absurd overcapacity in some scenarios (or short stroked disks, which is the same thing really) because the amount of IO needed for a serious system is WAY more than the 25 IOPs/TB you get out of a 3TB SATA drive. – Sobrique Nov 07 '14 at 16:30
  • Lots of enterprise usage is byte-heavy but not IOPS-heavy. For example, compliance logs. – Dan Pritts Jan 14 '15 at 15:57
  • I'd dispute that 'lots'. Yes, there are specific scenarios where this holds true, and you genuinely don't care that the performance of your storage system is abysmal. Of course, you may find a tape archive system is more appropriate at that point. But in my experience - _most_ customers have expectations based on their home system - and enterprise RAID-6 SATA isn't even that quick. – Sobrique Jan 17 '15 at 12:13
4

I see no reason not to use SAS SSDs over SAS HDDs. However, if presented with the choice between a SAS HDD and a SATA SSD, my enterprise choice might well be the SAS drive.

Reason: SAS has better error recovery. A non-RAID edition SATA HDD might hang the whole bus (and with that possibly deny usage of the whole server) when it dies. A SAS-based system would just lose one disk. If that is a disk in a RAID array then there is nothing stopping the server from being used until end of business, followed by a drive replacement.

Note that this point is moot if you use SAS SSDs.


[Edit] tried to put this in a comment but I have no markup there.

I never said that the SAS controller will connect to another drive. But it will handle failure more gracefully and the other drives on the same backplane will remain reachable.

Example with SAS:

SAS HBA ----- [Backplane]
              |  |  |  |
              D1 D2 D3 D4

If one drive fails, it will get dropped by the HBA or the RAID card.

The other 3 drives are fine.
Assuming the drives are in a RAID array, the data will still be there and will remain accessible.


Now with SATA:

SATA  ----- [port multiplier]
              |  |  |  |
              D1 D2 D3 D4

One drive fails.
The communication between the SATA port on the motherboard and the other three drives will likely lock up. This can happen because either the SATA controller hangs or the port multiplier has no way to recover.

Although we still have 3 working drives, we have no communication with them. No communication means no access to the data.

Powering down and pulling a broken drive is not hard, but I prefer to do that outside business hours. SAS makes it more likely that I can do that.

Hennes
  • 64,768
  • 7
  • 111
  • 168
  • 2
    Isn't this why there are NAS-optimized SATA hard drives with TLER? (VelociRaptors have this feature as well.) – bwDraco Nov 01 '14 at 21:16
  • 1
    No, though it is part of it. TLER just means that the drive will give up on reading a failed sector within 7 to 12 seconds, after which the host (read: the computer with HW or SW RAID) can drop the drive and fall back to another drive to get the requested data. The SAS protocol means it will be able to connect to another drive rather than face a hung controller/channel/bus/port multiplier/$whatever_your_setup_is. – Hennes Nov 01 '14 at 21:54
  • @Hennes this makes zero sense. Even in SAS the controller will not magically connect to another drive - which would be a totally useless feature, as this other drive would not magically have the same data... SAS is not a replacement for RAID, and in a RAID there is no "magically connect to another drive". – TomTom Nov 02 '14 at 18:17
  • I never said that the SAS controller will connect to another drive. But it will handle failure more gracefully and the other drives on the same backplane will remain reachable. E.g. `SAS HBA ----- Backplane -- 6 SAS-drives`. If one drive fails it will get dropped. The other 5 will keep working. Assuming the drives form a RAID array, the data will still be there and accessible. `SATA ------ Port multiplier/backplane - 6 SATA drives`: one drive fails, the port multiplier probably gets locked, and we still have 5 working drives but no communication with them. – Hennes Nov 02 '14 at 21:58
  • 3
    You make a good case against SATA port multipliers, but not against SATA disks. Using a 4-port SATA card, or hooking up SATA disks to a SAS controller, will nullify this example. – Dan Pritts Nov 03 '14 at 00:29
  • True. And the same holds as long as you can connect them directly to the SAS HBA. But once you need more drives, you will either need the SAS topology (SAS expander, unique IDs per device) or, for SATA, a port multiplier. However, I should have made that a lot more explicit. My original post was way too short and cut a lot of corners. Bookmarked to improve it once I get home. – Hennes Nov 03 '14 at 11:03
0

I'm missing some relevant criteria in the question:

(Leaving out archival storage (usually tapes), which doesn't need to be 'online' - where 'online' doesn't necessarily mean available via the internet.)

  • Archival storage which must be available (without manual intervention to load a physical medium)
  • Storage intended to be available at maximum possible speed (running your OS, Database, webserver-front-end-cache, Audio-recording/processing 'buffer', etc).

Consider the scenario of a web server (as an example):
The best speed for commonly requested data would be all in memory (like a cache). But going toward several hundred GB, that becomes costly (and physically large) to do in memory banks.

Between the spinning HDD and the memory banks sits an interesting option: the SSD. It should be considered a consumable (not really long-term reliable storage, mainly because of the high drop-out rates, and a warranty will give you a new consumable, not your data back), especially since it's going to be hit with a lot of reads and writes (say in a DAW, etc.).

Now, every so often you are going to back up your consumable to your storage (which is not facing the front-end workload). And on every reboot (or failed consumable) you pump the archived data back onto your front-end consumable.

Now, how fast (disk-wise) does your storage need to be before you hit the first other bottleneck (for example, network throughput) when communicating with your cache?
If the answer to that question is 'not very', select low-RPM enterprise-class disks. If, on the other hand, the answer is 'fast', select high-RPM enterprise-class disks.
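
A minimal sketch of that decision (hypothetical Python; the NIC speed and per-drive figures are made-up assumptions for illustration):

    # Which hits its ceiling first when serving the cache: the disks or the network?
    NIC_MBPS = 10 * 1000 / 8            # assumed 10 Gbit front-end link, ~1250 MB/s

    def array_mbps(n_drives, per_drive_mbps):
        return n_drives * per_drive_mbps    # ideal streaming aggregate

    low_rpm = array_mbps(12, 150)           # e.g. 12x 7200 RPM nearline drives
    high_rpm = array_mbps(12, 220)          # e.g. 12x 10K/15K RPM drives

    for name, mbps in [("low-RPM array", low_rpm), ("high-RPM array", high_rpm)]:
        bottleneck = "network" if mbps >= NIC_MBPS else "disks"
        print(f"{name}: ~{mbps} MB/s, first bottleneck: {bottleneck}")
    # If the low-RPM array already saturates the link, faster spindles buy you nothing.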

In other words: if you are really just trying to store something (hoping you'll never need the backup tape), use common HDDs. If you want to serve data (stored elsewhere), accept data, or interact with large data sets (like a DB), then an SSD is a good option.

GitaarLAB
  • 125
  • 8
-1

Not mentioned in other answers, but the cost of a desktop SSD vs. an enterprise HDD today is approximately the same. Long gone are the times when SSDs were considerably more expensive. Consider a 300 GB enterprise HDD (2.5-inch) at C$ 125.17, which works out to C$ 125.17 / 300 GB ≈ C$ 0.42/GB.

Now consider a 256 GB SSD at C$ 115.98 (300 GB is not a standard SSD capacity), which is C$ 115.98 / 256 GB ≈ C$ 0.45/GB.

As you can see, the difference is not significant enough to favour a mechanical hard drive unless you are really doing a lot of writes. Modern SSDs are capable of handling ~70 GB of writes per day, and the standard warranty is 3 years. This is usually enough for most applications.
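
For reference, the arithmetic behind those numbers (hypothetical Python; the prices and the ~70 GB/day rating are the ones quoted above):

    # Cost per GB and the total writes covered by a typical 3-year warranty.
    hdd_price, hdd_gb = 125.17, 300     # the 300 GB enterprise HDD above
    ssd_price, ssd_gb = 115.98, 256     # the 256 GB SSD above

    print(f"HDD: C${hdd_price / hdd_gb:.2f}/GB")    # ~C$0.42/GB
    print(f"SSD: C${ssd_price / ssd_gb:.2f}/GB")    # ~C$0.45/GB

    gb_per_day, warranty_years = 70, 3
    total_tb = gb_per_day * 365 * warranty_years / 1000
    print(f"Writes covered over the warranty: ~{total_tb:.0f} TB")   # ~77 TB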

If you worry about the reliability of SSDs in general, you can compare MTBF figures (to see that it's actually the same as or better than mechanical hard drives - 1.6M hours and 1.5M hours for the above examples). Or just build a RAID, if you don't trust any numbers.

Victor Zakharov
  • 636
  • 3
  • 11
  • 19
  • 7
    That may be true, but a comparison of consumer-class SSD with enterprise-class HDD is meaningless. If you don't need enterprise-class hardware, then you could have chosen a consumer-class HDD which would be *much* cheaper than the consumer-class SSD. No one with a lick of sense is going to swap out their enterprise-class HDD with consumer-class SSD because it costs about the same. – Chris Pratt Nov 07 '14 at 15:29
  • @ChrisPratt: You are missing the point that consumer-grade HDDs are much, much worse than consumer-grade SSDs. I.e. even a small shop cannot afford to have server racks fitted with consumer HDDs; they are just not meant to handle 24/7 loads. SSDs, on the other hand, are fine with that; they don't produce as much heat, and most operations are reads, so it does not wear them out at all. This is especially true for databases. HDD wear is mechanical wear, so that's the difference. – Victor Zakharov Nov 07 '14 at 16:24
  • 1
    So, essentially your contention is that consumer-grade SSDs will always have a longer lifespan that consumer-grade HDDs? Got data to back that up? – Chris Pratt Nov 07 '14 at 16:26
  • @ChrisPratt: Unless a company provides data conversion services, i.e. need to convert/write 100GB of data per hour, backup services or similar, I don't see why SSDs won't work. – Victor Zakharov Nov 07 '14 at 16:26
  • @ChrisPratt: Correct. You can check MTBF, for example - most SSDs have 2M hours, most consumer HDDs had 700K last time I checked. Also a quick google search found this - [SSD Annual Failure Rates Around 1.5%, HDDs About 5%](http://hardware.slashdot.org/story/13/09/12/2228217/ssd-annual-failure-rates-around-15-hdds-about-5). Also note that not all SSDs are created the same - I don't want to advertise, but some are 10x more reliable by return statistics. There is no significant lifespan difference for HDDs between brands, from what I know. So that's a 30-times reliability difference, SSDs vs HDDs. – Victor Zakharov Nov 07 '14 at 16:30
  • @ChrisPratt: Sorry, my maths was off, that makes it 15 times. This also coincides with another statistics I've read previously, 3% failure rate for some SSDs, 0.3% for others. – Victor Zakharov Nov 07 '14 at 16:36