Storage media: what does 2015 bring us, and what should we expect beyond its horizon?
Ever-increasing volumes of generated data demand ever more sophisticated methods of storing them, while technological progress keeps driving down the cost of storing information, which in turn stimulates the generation of still more data. The picture is clear: pushed from both sides, the development of storage media keeps climbing. Research effort runs in two fundamental directions: on the one hand, better methods of encoding information; on the other, improvements to the hardware itself. The most widespread storage technologies at the moment are HDDs and SSDs.

Hard disks
Manufacturers of classic "hard drives" continue to invest heavily in the technology, squeezing more and more performance out of it. From the very first hard drive, the IBM 350, which weighed 5 tons and stored just 5 MB, sixty years have brought us to disks that fit easily in the palm of your hand, carry an impressive 10 TB on board, and are ready for mass production. Shingled Magnetic Recording (SMR), the technology behind the first announced 10 TB hard drive, still has significant room for growth in stored data volume, which should make "hard drives" of up to 20 TB possible in the coming years.

This technology is genuinely progressive: it allows the surface of the platters inside the hard drive to be used more efficiently. Instead of wasting valuable platter surface on the guard gaps separating recording tracks, it was decided to lay out the recording differently. By overlapping the written tracks like shingles on a roof, the amount of data that fits on a disc was increased by roughly a quarter, while the production cost of such a "tiled" structure rose only slightly. But having eliminated the gaps between tracks, engineers faced a significant drop in the drive's data-processing speed, a problem that still had to be overcome.
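To make that figure of "a quarter" concrete, here is a back-of-the-envelope sketch; the track pitches are made-up illustrative numbers, and only their ratio matters. If overlapping the tracks shingle-style shrinks the effective track pitch from 100 nm to 80 nm, the same platter holds 100/80 = 1.25 times as many tracks, i.e. about 25% more data.

```python
# Toy estimate of the SMR density gain (illustrative numbers only).
conventional_track_pitch_nm = 100   # assumed pitch with guard gaps between tracks
shingled_track_pitch_nm = 80        # assumed effective pitch after overlapping tracks

# Capacity scales with track density, i.e. inversely with track pitch.
density_gain = conventional_track_pitch_nm / shingled_track_pitch_nm
print(f"Capacity gain from shingling: {density_gain:.2f}x "
      f"({(density_gain - 1) * 100:.0f}% more data on the same platter)")
# -> Capacity gain from shingling: 1.25x (25% more data on the same platter)
```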
Speaking of SMR, one cannot help but mention another technical trick, the one that actually made it commercially successful. The problem of insufficient read/write speed on the "tiled" disc surface was solved thanks to another advanced development, Two-Dimensional Magnetic Recording (TDMR). Two-dimensional magnetic recording addressed the smearing of the magnetic signal written onto the disk surface. The point is that once the clear boundaries between adjacent tracks were removed and the tracks partially overlapped, the read head needed more time to obtain an unambiguous result from the magnetized area being read. The solution was to use several read heads. By widening the area over which the magnetization is registered, a more detailed picture of a specific track can be obtained. After mathematical processing of this fuller image of the captured data, the engineers could strip out the "magnetic noise" from neighboring areas and get a clean result in an acceptable amount of time.
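As a purely illustrative toy model of that mathematical processing (real TDMR controllers use far more sophisticated signal processing), suppose two read heads each pick up a different, known mixture of the target track and its overlapping neighbor; the two readings then form a small linear system that can be solved to strip out the inter-track interference.

```python
import numpy as np

# Toy model of TDMR-style inter-track interference cancellation.
# Assumed mixing coefficients: how strongly each read head picks up
# the target track and its overlapping neighbor (illustrative values).
mixing = np.array([[1.0, 0.4],    # head 1: mostly the target track, some neighbor
                   [0.3, 1.0]])   # head 2: mostly the neighbor, some target

target_signal   = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # magnetization of the target track
neighbor_signal = np.array([-1.0, -1.0, 1.0, -1.0, 1.0])  # magnetization of the overlapping neighbor

# What the two heads actually read: mixtures of both tracks.
readings = mixing @ np.vstack([target_signal, neighbor_signal])

# "Mathematical processing": invert the known mixing to recover clean per-track signals.
recovered = np.linalg.solve(mixing, readings)
print("recovered target track  :", np.round(recovered[0], 3))
print("recovered neighbor track:", np.round(recovered[1], 3))
```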

The third technological challenge for the HDD was developing a way to record data with pre-heating of the area being written; this method is called Heat-Assisted Magnetic Recording (HAMR). The technique places a laser on the write heads of the new "hard drives", which heats the platter surface immediately before magnetization. Thanks to this recording method, engineers managed to make the magnetization sharper, which also helped get rid of excess noise and increase the recording density. Seagate had previously planned to start mass production of HAMR-based drives in 2015, but that date has recently been pushed back to 2017.

Meanwhile, Hitachi, one of the leading storage manufacturers, took a slightly different path. By replacing the familiar air inside the drive's sealed housing with helium, its engineers significantly reduced the viscosity of the medium, which made it possible to stack the platters closer together than ever before. The result is a higher-capacity drive within the same external dimensions.

All these recent innovations keep the HDD very competitive in the IT market. But the technology is increasingly running up against its limits. The principle underlying the HDD from the beginning, magnetic recording of information on platters, has in effect exhausted itself. Even the "tiled" arrangement of rewritable tracks and all the tricks associated with it are unlikely to hold off the more affordable and faster flash technology for more than three or four years.
Flash
Over the past two decades we have witnessed the ever-accelerating development of flash memory. The growth of this technology really does resemble a bright flash of light. Many separate streams of technological progress have merged into a single channel, the Solid-State Drive (SSD), and the result today is impressive.

Companies such as Intel and Samsung anticipate big dividends from their investments in SSD-related 3D NAND technology. Thanks to this development, engineers can arrange flash memory cells not only horizontally but also vertically, forming three-dimensional semiconductor structures. There are already reports of test samples built with Samsung's 3D NAND technology whose chips carry up to 24 layers. Intel and its partner Micron expect to release chips of up to 48 GB in late 2015 or early 2016. The proposed chip is to be built with 32-layer 3D NAND using multi-level cells (MLC), which doubles the amount of information carried by a single cell. The progress of Micron's engineers with MLC lets them claim that they will very soon be able to launch 1 TB SSDs that are extremely compact. Besides compactness, the price should also become quite affordable: according to the forecasts of Intel's partner, by 2018 the price of SSD storage in the top-capacity segment should fall roughly fivefold.
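A rough sanity check of those chip figures, using an assumed per-layer cell count (hypothetical; the article gives no die layout): stacking layers multiplies the cell count, and MLC doubles the bits stored per cell, so a 32-layer MLC die reaches 48 GB if each layer holds about six billion cells.

```python
# Back-of-the-envelope 3D NAND capacity arithmetic (assumed per-layer figure).
layers = 32              # 32-layer 3D NAND, as in the Intel/Micron announcement
bits_per_cell = 2        # MLC stores 2 bits per cell
cells_per_layer = 6e9    # hypothetical number of cells in one layer

capacity_bits = layers * bits_per_cell * cells_per_layer
capacity_gb = capacity_bits / 8 / 1e9
print(f"Die capacity: {capacity_gb:.0f} GB")   # -> 48 GB under these assumptions
```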

Another technology that should mature in 2015 is the triple-level cell, TLC, a flash structure storing three bits per cell. When people talk about flash memory, a memory cell is usually pictured as an object that either holds an electric charge or does not, which in the usual sense corresponds to the binary 1/0. Engineers looked at the matter from a slightly different angle, setting aside the established interpretation and asking: what if, instead of merely checking for the presence of a charge, we measure the voltage range the cell holds? By assigning one voltage range to a one and any other to a zero, binary data can still be encoded in the cells. A well-posed question bore fruit: because the materials used can hold a wide range of distinguishable voltage levels, a single cell can be made to store more than one bit.

MLC: this multi-level structure distinguishes four voltage ranges, corresponding to the binary codes 00/01/10/11, so one cell is effectively the equivalent of two single-bit cells; in other words, MLC doubles the recording density. Having recalled the technology's advantages, one cannot ignore its drawbacks. Building chips this way carries several costs: extra production expense, reads and writes that must be performed far more precisely, and, of course, accelerated degradation of the memory cells, so product endurance goes down while the number of errors goes up.
TLC operates with eight voltage levels, covering every combination of ones and zeros across its three bits. This gives 50% more data per cell than MLC. The problem is that the drive's structure undergoes more dramatic changes than with MLC: a more complex design means a significantly higher cost, and for such a solution to take its place on the market alongside its competitors, at least five years must pass.
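To make the voltage-level idea concrete, here is a minimal sketch of how a single measured cell voltage can be quantized into 1, 2 or 3 bits depending on how many reference levels the controller distinguishes; the threshold voltages are purely illustrative, not real NAND parameters.

```python
# Minimal sketch of SLC/MLC/TLC read-out: quantize one cell voltage into bits.
# The voltage window [v_min, v_max) and the example voltage are made-up values.

def read_cell(voltage: float, bits_per_cell: int, v_min: float = 0.0, v_max: float = 4.0) -> str:
    """Map a measured cell voltage to a bit pattern.

    SLC (1 bit) distinguishes 2 voltage ranges, MLC (2 bits) 4 ranges,
    TLC (3 bits) 8 ranges, by slicing [v_min, v_max) into equal windows.
    """
    levels = 2 ** bits_per_cell                  # 2, 4 or 8 voltage windows
    window = (v_max - v_min) / levels
    level = min(int((voltage - v_min) / window), levels - 1)
    return format(level, f"0{bits_per_cell}b")   # e.g. level 5 -> '101' for TLC

voltage = 2.7  # one measured cell voltage (arbitrary example)
print("SLC reads:", read_cell(voltage, 1))   # 1 bit  -> '1'
print("MLC reads:", read_cell(voltage, 2))   # 2 bits -> '10'
print("TLC reads:", read_cell(voltage, 3))   # 3 bits -> '101'
```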

Drives built on the combination of 3D NAND and TLC are already on sale. A typical example is the 1 TB Samsung 850 EVO SSD: read/write speeds of about 530 MB/s and more than 90 thousand IOPS. Combining large capacity, performance, reasonable dimensions and reliability (the manufacturer backs the product with a 5-year warranty), this SSD marvel sells for about $500.
Experts expect that in 2015 storage manufacturers will focus on improving and optimizing TLC technology. The main areas of improvement will be how the chips are used and the fight to reduce errors that occur when reading and writing cells. One promising way to eliminate errors is a controller chip developed by Silicon Motion's engineers, which applies three logical levels of suppression of the ambiguities that arise.
The first level, Low-Density Parity-Check (LDPC) coding, is a specially developed method of encoding data that allows many errors to be eliminated at the very first stage. Thanks to this mathematical data-processing algorithm, write errors can be reliably detected and corrected without any significant loss in performance. The logic of LDPC itself was developed back in the 1960s, but the hardware of the time was too weak to realize its potential. In the 1990s, when the volume of processed data reached a critical level, LDPC's finest hour arrived. Having found its way into Wi-Fi networks, 10-gigabit networks and digital television, the algorithm now continues its service for the benefit of SSDs.
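As an illustration of the parity-check idea on which LDPC is built, here is a tiny Hamming(7,4) example rather than a real LDPC code (actual LDPC codecs use much larger, sparser parity-check matrices): a codeword is valid when every parity equation is satisfied, and a non-zero syndrome both signals and, in this small code, locates a single flipped bit.

```python
import numpy as np

# Tiny illustration of parity-check decoding (Hamming(7,4), not a real LDPC code).
H = np.array([[1, 0, 1, 0, 1, 0, 1],   # each row is one parity equation
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

codeword = np.array([1, 0, 1, 1, 0, 1, 0])      # a valid 7-bit codeword
assert not (H @ codeword % 2).any()             # all parity checks pass -> no error

received = codeword.copy()
received[4] ^= 1                                # simulate one flipped bit (e.g. a worn cell)

syndrome = H @ received % 2                     # non-zero syndrome -> error detected
error_position = int("".join(map(str, syndrome[::-1])), 2) - 1  # syndrome encodes the bit index
received[error_position] ^= 1                   # correct the flipped bit
assert (received == codeword).all()
print("corrected codeword:", received)
```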
Beyond encoding data, LDPC can also serve as a tool for tracking and adjusting the read voltages of the memory arrays, making TLC operate more efficiently. The electrical properties of the semiconductors, and of the arrays they form, change over time: some changes are short-term and temperature-related, others are long-term and caused by material degradation. Drawing on statistical information, the LDPC algorithm helps minimize the errors arising from these causes.
This layering of techniques compensates for most of TLC's disadvantages and makes it more cost-effective. The biggest unsolved issue of TLC-based SSDs remains the short lifespan of the cells under rewriting. Weighing the pros and cons, such drives are in high demand among consumers for data that is accessed frequently but rarely modified.

Storage Prospects for the Foreseeable Future
At the moment it is hard to see a potential competitor to the combination of flash memory and hard drives; indeed, no serious work aimed at producing such a competitor is even visible. Competition in the storage market is so fierce that hardware manufacturers operate on minimal margins, and allocating huge sums to develop radically new ways of storing information is a luxury they cannot afford. Research effort is directed at more local tasks, the modernization of existing solutions, so there is no reason to expect a revolution in data storage in the foreseeable future.
Obviously, technology does not stand still, and of course we will live to see the day when storage technologies substantially different from today's are deployed everywhere. Whether it will be holographic memory, polymer memory exploiting a phase transition, or FeRAM-based designs is unclear for now. One thing is certain: this is a prospect for the next decade, because even the most successful development needs at least several years between its first appearance and its triumphant arrival on store shelves. Until then, we will keep seeing the solutions we already know.

It is hard, in our hyperactive times, to say with absolute certainty where the whole industry will be in ten years. Still, judging by the preceding decades, one thing can be predicted with full confidence: with or without revolutions, the future of storage media will move along a single line, faster, cheaper, safer, more capacious, and where that path takes us, time will tell.