Does anyone have any data or experience with sparse file performance on SSDs under Windows, with the NTFS filesystem?

It seems like performance should be excellent, since seek time is effectively zero, but if the filesystem cluster size does not match the SSD's page size exactly, it could cause write amplification. There are so many factors involved that I was hoping someone would have tried it already, rather than speculating about it.
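For context, here is a minimal sketch of how a sparse file behaves: seek past the end of the file, write one byte, and the filesystem allocates storage only for the regions that were actually written. The file name is hypothetical, and the allocated-size query via `st_blocks` is POSIX-specific; on NTFS the file must additionally be flagged sparse (`fsutil sparse setflag <file>` or the `FSCTL_SET_SPARSE` ioctl), but the principle is the same.

```python
import os
import tempfile

# Create a sparse file: seek 100 MiB past the start, write a single byte.
# On POSIX filesystems the hole is created implicitly; on NTFS the file
# must first be flagged sparse (fsutil sparse setflag / FSCTL_SET_SPARSE).
path = os.path.join(tempfile.mkdtemp(), "sparse.bin")  # hypothetical name
with open(path, "wb") as f:
    f.seek(100 * 1024 * 1024)  # move well past EOF without writing
    f.write(b"\x01")           # one real byte at the very end

st = os.stat(path)
logical = st.st_size           # apparent size: 100 MiB + 1 byte
physical = st.st_blocks * 512  # bytes actually allocated (POSIX st_blocks)
print(logical, physical)       # physical is far smaller than logical
```

The interesting question for SSDs is what happens to the `physical` side of that ratio under heavy random writes into the holes.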

  • I don't understand the question: write amplification was solved for SSDs at the firmware level, starting with Intel in 2008, so it doesn't exist as a problem any more. As the SSD cluster size is usually much larger than the OS block size, this problem absolutely had to be solved by the manufacturers. Write amplification values on modern SSDs are on the order of 1.1, where 1.0 is the theoretical lowest limit. Only SSDs employing compression can go lower than that. – harrymc Oct 16 '14 at 06:07
  • Okay, fair enough. What I'm asking for is concrete information on how well sparse files work with SSDs, not simply speculation. Has anyone tested them extensively, for example, and then checked how much write amplification there was (manufacturers offer tools that let you do that) or how much performance degradation there was (it should be near zero)? –  Oct 16 '14 at 09:16
  • The performance will depend on where within each OS sector the non-zero bytes are situated, meaning how many zero bytes the OS has to write because their sector contained some non-zero ones. This only applies to randomly written files. Write amplification *for the application* will then be the size of the sector data written by the OS divided by the size of the non-zero data. Do you mean "application write amplification" here? Because *for the OS* there will be no difference. – harrymc Oct 16 '14 at 15:56
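The ratio described in the last comment can be illustrated with a toy calculation. The cluster size and the helper function below are assumptions for illustration, not anything from the thread: the OS writes whole clusters, so a handful of non-zero bytes scattered across many clusters costs full clusters of writes.

```python
CLUSTER = 4096  # assumed NTFS cluster size in bytes

def app_write_amplification(nonzero_bytes_per_cluster, clusters_touched):
    """Hypothetical 'application write amplification' per the comment:
    bytes the OS writes (whole clusters) divided by the application's
    non-zero payload bytes."""
    data_written = nonzero_bytes_per_cluster * clusters_touched
    os_written = CLUSTER * clusters_touched
    return os_written / data_written

# 64 non-zero bytes in each of 10 clusters: the OS still writes 10 full
# clusters, so the application-level amplification is 4096 / 64.
print(app_write_amplification(64, 10))  # 64.0
```

As the comment notes, this amplification is visible to the application but not to the OS or the drive, which simply see full-cluster writes.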

0 Answers