Galvanizing a corpse: how to revive a broken HDD for storing something you don't really need
I recently got hold of a broken external hard drive. How did I come by it? Simple: I bought it on the cheap myself.
The drive itself was unremarkable: a metal box containing a USB-to-SATA bridge and a 1 TB Samsung laptop disk. According to the seller, the USB controller was flaky: at first the drive reads and writes fine, then it gradually slows down and eventually drops off the bus altogether. That is a fairly common story for external drives without extra power, so naturally I believed him. And hey, it was cheap.
So I cheerfully took the enclosure apart, pulled out the drive and plugged it into an adapter proven by time and adversity. The disk powered on, spun up, got detected and even mounted under Linux. On it I found an NTFS file system and a dozen films. No, not about erotic adventures, quite the opposite: things like "Leviathan". Success, you would think! But no, the fun was only beginning.
A look at SMART painted a bleak picture: the Raw Read Error Rate attribute had dropped to 1 (against a threshold of 51), which can only mean one thing: the disk has serious trouble reading from its platters. The remaining attributes were within reason, but that was little comfort.
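For reference, the normalized value and threshold quoted above are what smartctl from smartmontools prints in its attribute table (the device name matches the /dev/sdc the drive shows up as later in the article):
# smartctl -A /dev/sdc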
An attempt to format the disk produced the expected result: a write error. I could, of course, have built a list of bad sectors with the standard badblocks utility and then fed that list to the file system at creation time. But I rejected that idea as impractical: waiting for the result would take far too long. And, as it turned out later, such a list would have been useless anyway: within the damaged areas the sectors are unstable, so what reads fine one moment may return a read error the next.
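For completeness, the rejected approach would have looked roughly like this (a sketch, not something I actually ran to completion; note that the block size given to badblocks must match the block size of the future file system):
# badblocks -sv -b 4096 -o /tmp/bad.list /dev/sdc
# mkfs.ext2 -b 4096 -l /tmp/bad.list /dev/sdc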
After playing around with all sorts of utilities, I established the following:
- There are a lot of bad sectors, but they are not scattered randomly across the disk: they sit in dense groups, and between those groups lie fairly large areas where reads and writes work without any problems.
- Trying to fix a bad sector by overwriting it (so that the drive's controller remaps it to a spare) does not work. Sometimes the sector reads fine afterwards, sometimes it does not. Worse, a write to a bad sector occasionally makes the disk "fall off" the bus for a few seconds (apparently the drive's own controller resets itself). Reads do not cause resets, but a single attempt to read a bad sector takes half a second or more (a sketch of such an overwrite is shown right after this list).
- The "broken areas" are fairly stable. The very first one begins around the 45 GB mark and stretches quite far (exactly how far, I could not tell offhand). By trial and error I also managed to locate the start of a second such area roughly in the middle of the disk.
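An overwrite attempt like the one described above can be done, for example, with dd (a sketch: the sector number is a placeholder, and oflag=direct makes sure the write actually reaches the disk rather than the page cache):
# dd if=/dev/zero of=/dev/sdc bs=512 count=1 seek=123456789 oflag=direct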
An idea immediately suggested itself: what if I split the disk into two or three partitions so that the "broken fields" end up in the gaps between them? The disk could then hold something not particularly valuable ("watch once" films, for example). Naturally, I first had to find the boundaries between the "good" and "broken" areas.
No sooner said than done. I knocked together a utility that reads the disk until it hits a bad sector. Having hit one, it marks a whole area of a given length as bad (in its own table, of course), skips that area (no point checking what has already been written off) and keeps reading beyond it. After a couple of experiments I settled on writing off 10-megabyte areas: large enough for the utility to run reasonably fast, yet small enough that the loss of disk space does not get out of hand.
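The utility itself is linked at the end of the article. As a rough (and much slower) illustration of the same loop, here is a shell equivalent under the same assumptions: read one sector at a time and, on the first error, record the offset and jump ahead by 10 MB:

TOTAL=$(blockdev --getsz /dev/sdc)   # disk size in 512-byte sectors
SKIP=20480                           # 10 MB expressed in 512-byte sectors
SECTOR=0
while [ "$SECTOR" -lt "$TOTAL" ]; do
    if dd if=/dev/sdc of=/dev/null bs=512 skip="$SECTOR" count=1 iflag=direct 2>/dev/null; then
        SECTOR=$((SECTOR + 1))                           # sector read fine, move on
    else
        echo "bad area starts at sector $SECTOR" >> bad_areas.txt
        SECTOR=$((SECTOR + SKIP))                        # write off 10 MB and skip past it
    fi
done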
For clarity, the result was also rendered as a picture: white dots are good sectors, red dots are bad ones, and gray marks the write-off area around the bad sectors. After almost a day of scanning I had both the list of bad areas and a clear picture of where they sit.
Here it is:
[Image: sector map of the disk; white dots are good sectors, red dots are bad sectors, gray shows the write-off areas around them]
Interesting, isn't it? There turned out to be far more damaged areas than I had imagined, yet the undamaged ones still clearly account for more than half of the disk. Losing that much space would be a pity, but I had no desire to juggle a dozen separate small partitions either.
But this is the 21st century, the age of new technologies and disk arrays! So these small partitions can be glued together into a single array with one file system on top, and that's the end of the worries.
Using the map of bad areas, I put together a mega-command to create the partitions. I went with GPT so I would not have to care which partitions are primary and which are extended:
# parted -s -a none /dev/sdc unit s mkpart 1 20480 86466560 mkpart 2 102686720 134410240 mkpart 3 151347200 218193920 mkpart 4 235274240 285306880 mkpart 5 302489600 401612800 mkpart 6 418078720 449617920 mkpart 7 466206720 499712000 mkpart 8 516157440 548966400 mkpart 9 565186560 671539200 mkpart 10 687595520 824811520 mkpart 11 840089600 900280320 mkpart 12 915640320 976035840 mkpart 13 991354880 1078026240 mkpart 14 1092689920 1190871040 mkpart 15 1205288960 1353093120 mkpart 16 1366794240 1419919360 mkpart 17 1433600000 1485148160 mkpart 18 1497927680 1585192960 mkpart 19 1597624320 1620684800 mkpart 20 1632808960 1757368320 mkpart 21 1768263680 1790054400 mkpart 22 1800908800 1862307840 mkpart 23 1872199680 1927905280 mkpart 24 1937203200 1953504688
The command ran for quite a while (several minutes). The result: 24 (!) partitions, each of a different size.
The partitions:
# parted /dev/sdc print
Model: SAMSUNG HM100UI (scsi)
Disk /dev/sdc: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 10.5MB 44.3GB 44.3GB 1
2 52.6GB 68.8GB 16.2GB 2
3 77.5GB 112GB 34.2GB 3
4 120GB 146GB 25.6GB 4
5 155GB 206GB 50.8GB 5
6 214GB 230GB 16.1GB 6
7 239GB 256GB 17.2GB 7
8 264GB 281GB 16.8GB 8
9 289GB 344GB 54.5GB 9
10 352GB 422GB 70.3GB 10
11 430GB 461GB 30.8GB 11
12 469GB 500GB 30.9GB 12
13 508GB 552GB 44.4GB 13
14 559GB 610GB 50.3GB 14
15 617GB 693GB 75.7GB 15
16 700GB 727GB 27.2GB 16
17 734GB 760GB 26.4GB 17
18 767GB 812GB 44.7GB 18
19 818GB 830GB 11.8GB 19
20 836GB 900GB 63.8GB 20
21 905GB 917GB 11.2GB 21
22 922GB 954GB 31.4GB 22
23 959GB 987GB 28.5GB 23
24 992GB 1000GB 8346MB 24
The next step was to assemble a single disk out of them. The perfectionist in me suggested that the "proper" thing would be to build some fault-tolerant RAID6. The pragmatist objected that if one of the partitions ever departs for the astral plane there will be nothing to replace it with anyway, so plain JBOD will do just fine: why waste space for nothing? The pragmatist won:
# mdadm --create /dev/md0 --chunk=16 --level=linear --raid-devices=24 /dev/sdc1 /dev/sdc2 /dev/sdc3 /dev/sdc4 /dev/sdc5 /dev/sdc6 /dev/sdc7 /dev/sdc8 /dev/sdc9 /dev/sdc10 /dev/sdc11 /dev/sdc12 /dev/sdc13 /dev/sdc14 /dev/sdc15 /dev/sdc16 /dev/sdc17 /dev/sdc18 /dev/sdc19 /dev/sdc20 /dev/sdc21 /dev/sdc22 /dev/sdc23 /dev/sdc24
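One practical detail not shown above: for the linear array to be reassembled automatically after a reboot, it usually helps to record it in mdadm's configuration file (the path is /etc/mdadm/mdadm.conf on Debian-based systems, /etc/mdadm.conf on many others):
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf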
That's it. All that remained was to create a file system and mount the reanimated disk:
# mkfs.ext2 -m 0 /dev/md0
# mount /dev/md0 /mnt/ext
The resulting disk turned out to be quite capacious: 763 gigabytes, i.e. about 83% of the drive's capacity could be used. In other words, only about 17% of the original terabyte went to waste:
$ df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 9.2G 5.6G 3.2G 64% /
...
/dev/md0 763G 101G 662G 14% /mnt/ext
A test batch of throwaway films copied onto the disk without errors. True, the write speed was low, fluctuating between 6 and 25 megabytes per second. Reads were stable at 25-30 MB/s, i.e. limited by the adapter hanging off a USB 2.0 port.
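For the curious, a comparable estimate can also be obtained with a plain dd run using direct I/O (the test file name is arbitrary):
# dd if=/dev/zero of=/mnt/ext/testfile bs=1M count=1024 oflag=direct
# dd if=/mnt/ext/testfile of=/dev/null bs=1M iflag=direct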
Of course, a contraption like this is no place to store anything important, but as entertainment it has its merits. When the question is whether to strip a dead disk for its magnets right away or to torment it first, my answer is: torment it, of course!
Finally, the link to the repository with the utility: github.com/dishather/showbadblocks