
I have a 2TB WD (My Passport Ultra) external HDD that has developed bad sectors, and I want to continue using it for non-essential/non-critical storage (I fortunately recovered all data that was on the drive, so data recovery is not the issue). The SMART info tells me that there are more bad sectors than spare ones.

My question is how to proceed with "ring-fencing" these bad sectors.

I understand from other posts here that simply re-formatting the HDD and letting the OS mark the sectors as bad (so they are not re-used) is not a good solution.

It was suggested elsewhere that I could partition the HDD so that the bad sectors fall outside the partition. Is this really feasible, and if so, how do I "tell" the partitioning software which part to exclude?

I have of course researched this topic a bit before posting, but, as is often the case, there is quite a bit of contradictory or vague information out there:

  • Re: bad sectors: some claim that once bad sectors appear they will "propagate" (e.g. because a bad sector is often related to physical damage on the platter surface, so every time the head passes over that area it "jumps" a bit and creates additional bad sectors); others claim that bad sectors are simply "normal" for HDDs, so either the drive itself marks them and substitutes a spare sector, or you do this manually (e.g. with chkdsk), but in either case you can still reliably use the drive;
  • Re: continuing to use the disk: some say the disk could die within seconds or minutes; others say you can still use it for years once the bad sectors are no longer touched;
  • Re: bad sectors and spares: the information seems inconsistent and inconclusive about when an HDD is actually no longer fit for use. As long as the drive itself (the firmware/controller?) manages the "replacement" of bad sectors with spares, all is apparently fine: the SMART data shows what fraction of the spares has been used, and if it is below the threshold everything is "green". But once the user is confronted with a bad sector (presumably because the sector went bad after data was written to it, resulting in CRC errors?), the consensus seems to be to go into semi-panic mode, clone/recover whatever can be recovered, and throw the drive in the fire. I am just making up the numbers, but if there are a few hundred (thousand?) spare sectors, and the first few hundred bad sectors are considered "normal", why are the next few hundred suddenly such a drama? There are millions of sectors on an HDD, so is this rational behavior, statistically speaking?

So there is lots of information, but nowhere a clear answer on (i) whether it is actually possible to create a partition that omits all bad sectors (so that the head never touches that bad area again), (ii) whether this will mitigate the risk of more bad sectors appearing, and (iii) how to accomplish this with good-quality partitioning software (e.g. how much buffer to leave around the bad sectors). There are some specific tools out there that create partitions omitting bad sectors, but they do not seem very reliable to me; then again, maybe I am just ignorant.
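For (iii), the general recipe is: locate the first failing LBA with a SMART self-test, convert it to a byte/MiB offset, and create partitions that leave a generous buffer around it. The sketch below assumes a Linux live system with smartctl and parted available; the LBA, sector size, and device name are hypothetical examples, not values from my drive (read the real LBA from the "LBA_of_first_error" column of `smartctl -l selftest /dev/sdX`):

```shell
# Hypothetical values -- substitute the real ones from your drive:
LBA_BAD=976754646   # first failing LBA reported by the SMART self-test
SECTOR_SIZE=512     # logical sector size (see `smartctl -i /dev/sdX`)
BUFFER_MIB=1024     # leave ~1 GiB of slack on each side of the bad area

# Convert the LBA to a MiB offset and compute the exclusion zone:
BAD_MIB=$(( LBA_BAD * SECTOR_SIZE / 1024 / 1024 ))
PART1_END=$(( BAD_MIB - BUFFER_MIB ))
PART2_START=$(( BAD_MIB + BUFFER_MIB ))
echo "partition 1: 1MiB to ${PART1_END}MiB"
echo "partition 2: ${PART2_START}MiB to end of disk"

# With those numbers, parted could then carve the two partitions --
# run manually, after triple-checking the device name with `lsblk`:
#   parted /dev/sdX mklabel gpt
#   parted /dev/sdX mkpart good1 1MiB ${PART1_END}MiB
#   parted /dev/sdX mkpart good2 ${PART2_START}MiB 100%
```

If the self-test reports several distinct failure areas, repeat the calculation for each and leave every in-between gap unpartitioned; on a 2TB drive a wide buffer costs relatively little capacity.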

EDIT (Nov 2018)
Other SU discussion about a similar topic: How can I partition out damaged sectors on a HDD?

Peter K.
    Throw it away. If it's failing it will never get better, the rest will fail too, it's only a matter of time. Don't waste the effort. – Tetsujin Jan 23 '18 at 18:01
  • @Tetsujin Thanks, this seems indeed the majority opinion, but I would like to better understand the "why". It is a rarely used (maybe that's why there are bad sectors?) good-brand/premium-feel (for whatever that's worth) 2TB disk, so even with 100GB of bad sectors, I would still be happy to have a functioning 1.9TB HDD if I could isolate/discard these bad sectors. – Peter K. Jan 23 '18 at 18:11
  • Thing is, you don't know why they're bad, nor ever will without a clean room & microscope... It doesn't make any kind of sense to nurse it. You would never trust it to put anything important on, so it becomes an advanced floppy disk sneaker-net substitute & nothing more. – Tetsujin Jan 23 '18 at 18:18
  • @Tetsujin I agree about the non-important part. I will anyway have to buy a new 2TB HDD for the important data, but I want to keep on using the old one as a "nice to have" to exchange lots of data, or just as a third back-up for music/photos or so. – Peter K. Jan 23 '18 at 18:41
  • If there's a tiny speck of dust in the enclosure, or the head can't "float" on a thin film of air, then more damage occurs as the platter spins. BTW, there are usually some useful rare-earth magnets in an HDD, so it won't be a total waste if you salvage parts. – DrMoishe Pippik Jan 23 '18 at 18:54
  • What are the actual SMART numbers for reallocated, pending, and unrecoverable sectors? As for your "why" question: because bad sectors tend to multiply exponentially, so with one or two you may be OK for some time, but by the time you get a few hundred, the drive is well on its way to bricked. – psusi Jan 23 '18 at 19:09
  • @psusi I ran `chkdsk /f /r /x` on the drive, which resulted in approx. 85 bad cluster replacements, but then, after completion of Stage 4 (used file data verification), the process stopped with the error `An unspecified error occurred (766f6c756d652e63 470)`. I then checked the SMART numbers with CrystalDiskInfo (attribute: current / worst / (threshold)): reallocated: 200 / 200 / (140); pending: 200 / 200 / (0); uncorrectable: 100 / 253 / (0). – Peter K. Jan 23 '18 at 23:39
  • Assuming the number in parentheses is the RAW number, 140 reallocated sectors is a pretty bad sign. I'd suggest using a Linux live system to `dd` zeros over the whole drive, and `smartctl` to check the SMART status. If after that it is still 140 reallocated sectors and otherwise everything is OK, you might keep using the drive for unimportant data, with regular runs of the SMART long self-test to check for new errors. – psusi Jan 24 '18 at 03:10
  • @psusi Sorry, those were the SMART standard (normalized) values, and the 140 is the (WD-defined, I suppose) threshold value. The raw numbers are: reallocated = 0; uncorrectable = 0; pending = 260 (104 hex). Not sure whether 260 out of millions of sectors is a lot; I understand that it is the evolution that matters? And since an HDD controller only remaps bad sectors during a "write" operation, I assume these sectors became "bad" after the data had been written? – Peter K. Jan 24 '18 at 12:20
  • @psusi So, per your suggestion, can I then "force" the HDD controller to take care of remapping these bad sectors (if enough spares are available; otherwise the sectors are just marked "bad"?) by (not using a Linux live system, but just the MS Sysinternals command line) e.g. (1) deleting the files with problems (I have them identified); (2) writing zeros to all empty space with `sdelete -z d:` (assuming the HDD will then intercept these write commands and remap the bad sectors)? And then keep a regular eye on the SMART data to see whether there is an evolution... – Peter K. Jan 24 '18 at 12:41
  • Yes; writing to a bad sector will force it to be reallocated, if needed. Sometimes the sector isn't bad at all and the write succeeds without remapping. This happens sometimes after a sudden power loss in the middle of a write. Since you have already run chkdsk though, Windows has already flagged those sectors as bad and will refuse to use them. – psusi Jan 24 '18 at 18:22
  • Personally I agree that non-critical files can be stored on such drives, provided that a [filesystem with checksumming and error correction capability](https://en.wikipedia.org/wiki/List_of_file_systems#File_systems_with_built-in_fault-tolerance) like ReFS, BtrFS or ZFS is used – phuclv Nov 12 '18 at 15:23
  • @phuclv Thanks for chiming in. Do you have any suggestions on how to use any of these file systems on W10? MS actually dropped ReFS support (or rather the capability to create volumes) on most W10 editions (although maybe there is a separate system tool that can be downloaded and used?), and WinZFS and WinBtrFS (as Linux ports to Windows) do not seem really "production ready". – Peter K. Nov 12 '18 at 17:50
  • In that case, probably using Linux (either in a VM or directly) would be better – phuclv Nov 13 '18 at 13:47
  • @phuclv Thanks, but I do not want to start learning about Linux ;) I am looking a bit further at ReFS to see whether any stand-alone system tools exist to create volumes with my W10 version. – Peter K. Nov 13 '18 at 14:57
  • In my experience, a drive can remain stable with up to a few dozen bad sectors; beyond about 100 it can no longer be considered safe to use. I have a 3TB Western Digital HDD which has about 25 bad sectors: a few appeared when I copied a ~4GB video file to it, then the number increased whenever that file was accessed in any way (even selecting it in Explorer, which triggers the display of a thumbnail). So I moved that file through the command line into a dedicated folder, as a reminder not to touch it. The drive has been stable since (though not used often). – GabrielB May 22 '23 at 04:28
  • Based on what I wrote above, it would seem that having the bad sectors located somewhere in a 4GB file that is never accessed again provides a good enough safety margin, so that the corresponding head doesn't hover too close to the weak area. But if the goal is to intentionally "ring-fence" a currently free area known to contain bad sectors (it's already tricky to pinpoint where they are located; I've provided some methods in q/1266135 & q/1157898), I wouldn't know how to proceed, as you can't predict or request where a file is going to be written. – GabrielB May 22 '23 at 04:52
  • One possible method would be to entirely fill the partition's free space with empty 4GB files created with the command `fsutil file createnew [name] 4294967296` (name changed manually or in a batch script with an automatically incremented number). Since fsutil creates files instantaneously, it doesn't actually write null bytes (so won't needlessly stress the drive), but still marks the corresponding clusters as allocated. Then upon analyzing which one(s) contain(s) bad sectors, move it/them into a dedicated folder and delete the others. **And never ever run defragmentation on that drive.** – GabrielB May 22 '23 at 05:03
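psusi's zero-fill suggestion in the comments above can be sketched as follows, assuming a Linux live system with smartctl installed; `/dev/sdX` is a placeholder, and the dd step destroys everything on the drive, which is why the destructive commands below are left commented out:

```shell
DEV=/dev/sdX   # placeholder -- verify the device name with `lsblk` first!

# 1. Writing zeros over every sector forces the firmware to remap any
#    pending sector that is genuinely bad (this DESTROYS all data):
#      dd if=/dev/zero of="$DEV" bs=4M status=progress

# 2. Re-read the raw SMART attributes; Current_Pending_Sector should
#    drop back to 0, with truly bad sectors moving to Reallocated:
#      smartctl -A "$DEV"

# 3. Run a long self-test, then check the log once it finishes:
#      smartctl -t long "$DEV"
#      smartctl -l selftest "$DEV"

echo "after zero-fill: recheck Reallocated_Sector_Ct and Current_Pending_Sector"
```

If the reallocated count keeps climbing across repeated long self-tests, that is the "evolution" signal discussed above and the drive should be retired even for non-critical data.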
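GabrielB's fill-and-quarantine method from the last comments can also be scripted. On Windows that would be a loop around `fsutil file createnew`; the sketch below is a POSIX shell analogue, under the assumption that the drive is mounted at a hypothetical `/mnt/baddrive` (`fallocate` reserves clusters without actually writing data, much like `fsutil file createnew`, so it doesn't needlessly stress the drive):

```shell
#!/bin/sh
MOUNT=${1:-/mnt/baddrive}            # hypothetical mount point of the drive
CHUNK=$((4 * 1024 * 1024 * 1024))    # 4 GiB per placeholder file

# Fill the free space with placeholder files until allocation fails:
i=0
while fallocate -l "$CHUNK" "$MOUNT/filler-$i" 2>/dev/null; do
    i=$((i + 1))
done
echo "created $i placeholder files"

# Next, read each file back; any file that produces I/O errors covers
# a bad area. Move those into a quarantine folder, delete the rest:
#   for f in "$MOUNT"/filler-*; do
#       dd if="$f" of=/dev/null bs=1M 2>/dev/null || echo "BAD: $f"
#   done
#   mkdir -p "$MOUNT/quarantine"     # then mv the bad files in here
#   rm "$MOUNT"/filler-*             # remove the healthy placeholders
# And, as GabrielB notes, never run defragmentation on that drive.
```

The quarantine files pin the bad clusters as "allocated" in the filesystem, so no new data can ever be written there; this only holds as long as the files are never moved or defragmented.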

0 Answers