
I run a RAID10 machine with 4x Samsung 480 GB SSDs. The drives are of identical age.

3 of the drives have a Wear_Leveling_Count of 090, but one is now as low as 003. I don't understand why one of the drives is so much worse than the other 3. Their load and usage are identical. All drives were new when they were first installed 2 years ago.

The RAW value of the failing drive is 3058.

That said, the performance is still fine. Should this drive be replaced ASAP or can I ignore it? Replacing the drive means a lot of downtime for my sites.
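For comparison, the key wear attributes of all four members can be pulled with something like this (the device names simply match my layout; adjust as needed):

for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
  echo "== $d =="
  smartctl -A "$d" | grep -E 'Power_On_Hours|Wear_Leveling_Count|Total_LBAs_Written'
done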

Raw output of smartctl can be found here: https://pastebin.com/ktdEGFqS

smartctl -a /dev/sdd
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-1127.19.1.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
 
=== START OF INFORMATION SECTION ===
Model Family:     Samsung based SSDs
Device Model:     SAMSUNG MZ7GE480HMHP-00003
Serial Number:    S1M8NYAF700916
LU WWN Device Id: 5 002538 800184545
Firmware Version: EXT0303Q
User Capacity:    480,103,981,056 bytes [480 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4c
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Aug  5 10:04:25 2023 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
 
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
See vendor-specific Attribute list for failed Attributes.
 
General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (    0) seconds.
Offline data collection
capabilities:                    (0x53) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 110) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.
 
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   085   085   000    Old_age   Always       -       74986
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       16
177 Wear_Leveling_Count     0x0013   003   003   005    Pre-fail  Always   FAILING_NOW 3058
179 Used_Rsvd_Blk_Cnt_Tot   0x0013   100   100   010    Pre-fail  Always       -       0
180 Unused_Rsvd_Blk_Cnt_Tot 0x0013   100   100   010    Pre-fail  Always       -       8796
181 Program_Fail_Cnt_Total  0x0032   100   100   000    Old_age   Always       -       0
182 Erase_Fail_Count_Total  0x0032   100   100   000    Old_age   Always       -       0
183 Runtime_Bad_Block       0x0013   100   100   010    Pre-fail  Always       -       0
184 End-to-End_Error        0x0033   100   100   097    Pre-fail  Always       -       0
187 Uncorrectable_Error_Cnt 0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0032   067   043   000    Old_age   Always       -       33
195 ECC_Error_Rate          0x001a   200   200   000    Old_age   Always       -       0
199 CRC_Error_Count         0x003e   100   100   000    Old_age   Always       -       0
202 Exception_Mode_Status   0x0033   100   100   010    Pre-fail  Always       -       0
235 POR_Recovery_Count      0x0012   099   099   000    Old_age   Always       -       12
241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       2711366975590
 
SMART Error Log Version: 1
No Errors Logged
 
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%         1         -
# 2  Short offline       Completed without error       00%         0         -
# 3  Extended offline    Completed without error       00%         5         -
# 4  Short offline       Completed without error       00%         0         -
# 5  Short offline       Completed without error       00%       432         -
# 6  Extended offline    Completed without error       00%       412         -
# 7  Short offline       Completed without error       00%       408         -
# 8  Short offline       Completed without error       00%       384         -
# 9  Short offline       Completed without error       00%       360         -
#10  Short offline       Completed without error       00%       336         -
#11  Short offline       Completed without error       00%       312         -
#12  Short offline       Completed without error       00%       288         -
#13  Short offline       Completed without error       00%       264         -
#14  Extended offline    Completed without error       00%       245         -
#15  Short offline       Completed without error       00%       240         -
#16  Short offline       Completed without error       00%       216         -
#17  Short offline       Completed without error       00%       192         -
#18  Short offline       Completed without error       00%       168         -
#19  Short offline       Completed without error       00%       144         -
 
SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
  • exact same firmware too? – Joep van Steen Aug 05 '23 at 11:54
  • can we see smart for one of the other drives for comparison? – Joep van Steen Aug 05 '23 at 12:02
  • Am I missing something or is the drive saying it's written over 1000 TB of data? Which seems to be the root of the problem – James P Aug 07 '23 at 13:05
  • No, you're not missing anything ;) .. But what puzzles me is why one drive of an array gets that load while the others don't apparently .. – Joep van Steen Aug 07 '23 at 14:31
  • Since you did not provide us with additional info (SMART details from the other drives in the array) I vote to close for lack of detail. Yes, by all accounts replace the drive, but the main question remains *why* this drive's wear is so much worse than the others. And therefore I feel we need more data: the SMART details for the other drives. – Joep van Steen Aug 20 '23 at 19:28

1 Answer


Your smartctl output says:

ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
177 Wear_Leveling_Count 0x0013 003 003 005 Pre-fail Always FAILING_NOW 3058

I truly dislike the FAILING_NOW flag.
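What triggers FAILING_NOW is the normalized value (003) having dropped below the threshold (005). On Samsung drives the raw value of this attribute is generally the average program/erase count of the NAND blocks, so roughly 3058 erase cycles per block on average. Total_LBAs_Written gives an idea of the scale (my arithmetic, assuming the reported 512-byte sectors):

# rough arithmetic, 512-byte LBAs assumed
awk 'BEGIN { printf "%.0f TB written\n", 2711366975590 * 512 / 1e12 }'
# => 1388 TB written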

Note also this horrifying warning in the output:

SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.

Although this disk is the same age as the other disks, it might have been weaker to start with when you bought them. Disk quality and endurance is just a matter of luck.

I think you should listen to the warning and replace this disk.
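Regarding the downtime concern: if this is Linux software RAID (mdadm), the swap can usually be done with the array online, with only a rebuild window at reduced redundancy. A rough sketch, assuming the array is /dev/md0 and the worn member is /dev/sdd1 (your device and partition names will differ):

# mark the worn member as failed and remove it from the array
mdadm --manage /dev/md0 --fail /dev/sdd1
mdadm --manage /dev/md0 --remove /dev/sdd1
# physically swap the disk, then copy the partition layout from a healthy member
sfdisk -d /dev/sda | sfdisk /dev/sdd
# add the new member back and watch the rebuild
mdadm --manage /dev/md0 --add /dev/sdd1
watch cat /proc/mdstat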

  • Thank you. Yes my hoster also recommends a replacement, so I will do that. – Mr.Boon Aug 05 '23 at 10:07
  • You stated: "Disk quality and endurance is just a matter of luck". No, it is not. There are manufacturer-dependent quality differences with HDDs resulting in differing failure rates. Different cell technologies in SSDs determine the maximum number of read and write cycles. Furthermore, SSDs break differently: some fail completely, others stay readable, as tested by c't. – r2d3 Aug 06 '23 at 07:04
  • @r2d3: You're playing with words, as usual: even among the same manufacturer and same disk model, not all disks are equal. – harrymc Aug 06 '23 at 08:41
  • Random failures within a certain disk model of a manufacturer do not justify saying that disk quality and endurance are JUST a matter of luck. Although there is a random component in failure, there are systematic differences. As usual you are ignoring the scientific sources that I already cited when criticizing your postings. What you downplay by pretending I am "playing with words" is ignorance on your side. – r2d3 Aug 06 '23 at 16:19
  • @r2d3: I haven't been insulted by you for some weeks now and I haven't been missing it. – harrymc Aug 06 '23 at 16:24
  • harrymc, I don't mind you obsessively collecting points in any area on Superuser. If you post such answers in the storage area, though, you might read my reaction. Maybe you should rather read the sources instead of classifying my posts as insults for pointing you to them. Thank you! – r2d3 Aug 06 '23 at 16:31
  • How is it 'weaker', what does this mean? This is predicted failure based on amount of p/e cycles. Rather than immediately jumping to an answer it would be good to see SMART for the other drives too. – Joep van Steen Aug 06 '23 at 21:22
  • @JoepvanSteen: How can I know why it's "weaker" when the firmware can't even return the SMART attributes? And yes, the poster has compared the returned partial SMART data with the other disks and that's why he's surprised by one disk failing among 4 identical and relatively new disks. To note also that the disks are monitored by the hosting company, which was the one to signal the bad disk. There is no prediction here, it's a case of a disk totally failing prematurely and unexpectedly. It's rare but it happens and can only be vaguely "explained" as "manufacturing fault". – harrymc Aug 07 '23 at 07:17
  • I do wish that people would, before downvoting, read entirely the post and comments and not get influenced by the ranting of one user with ulterior motives. Responsible downvoting should be a subject on our site. – harrymc Aug 07 '23 at 07:19
  • I want to *see* SMART for one of the other drives, including all attributes, firmware revision, etc. The drive wearing quicker must have a cause; in this array all drives should have the same amount of data written by the host. Yes, sure, I would swap out the drive too, but that could be written in a comment. Your answer is basically a bunch of opinions. – Joep van Steen Aug 07 '23 at 11:23
  • @JoepvanSteen: My answer is basically pointing out to the poster the relevant parts in the smartctl report, explaining that he should really listen to his hosting company. Is smartctl also a bunch of opinions? The poster didn't ask for opinion about the other disks, only about what he should do with this one. – harrymc Aug 07 '23 at 11:34
  • Alarming wear leveling count he already knew, it was in OP. He clearly expresses he does not understand why this affects only this one drive and without SMART for one of the other drives we can not even try to answer without defaulting to vague assertions such as a drive being 'weaker'. I'm done discussing this. – Joep van Steen Aug 07 '23 at 12:06
  • @JoepvanSteen: I'm done too. Just that you're wrong about the question. The OP clearly asked: "Should this drive be replaced ASAP or can I ignore it?". I answered what was asked, and for that I'm being downvoted. Asking anything else is trying to second-guess the hosting company that has available much more information than just the SMART data of the 4 disks. – harrymc Aug 07 '23 at 12:16
  • Then you should answer, 'yes replace', without all the vague reasons and opinions about the phrasing of the tool. – Joep van Steen Aug 07 '23 at 14:29