Storage Evolution
“May you live in an era of change” is a very succinct and understandable curse for anyone, say, over thirty. The modern stage of human development has made us unwitting witnesses of a unique “era of change”. The point is not even the absolute scale of scientific progress: the transition from stone tools to copper obviously meant far more for civilization than yet another doubling of processor performance, however much more technologically advanced the latter may be. What is truly staggering is the enormous, ever-increasing rate of technological change. If a hundred years ago every self-respecting gentleman was simply obliged to keep up with all the “novelties” of science and technology so as not to look a fool and a country bumpkin in the eyes of his circle, today, given the volume and speed at which these novelties are generated, fully tracking them is simply impossible; the question is no longer even posed that way. This inflation of technologies, unimaginable until recently, and of the human possibilities tied to them has effectively killed an excellent literary genre, “technical fiction”. There is no longer any need for it: the future has come closer than ever before, and a story conceived about some “wonderful technology” risks reaching the reader later than something similar rolls off the assembly lines of a research institute.
The progress of human technical thought has always shown itself most quickly in the field of information technology. The methods of collecting, storing, systematizing and disseminating information run like a red thread through the entire history of mankind; breakthroughs in technical and humanitarian fields alike have, one way or another, echoed in IT. The civilizational path traveled by mankind is a series of successive steps in improving the ways data is stored and transmitted. In this article we will try to examine and analyze the main stages in the development of information carriers and to compare them, starting from the most primitive, clay tablets, up to the latest successes in creating a machine-brain interface.

The task, the intrigued reader will say, is no joke: just look at what you have taken on. How, with even elementary correctness, can one compare the fundamentally different technologies of the past and of today? The answer is that the ways a person perceives information have, in fact, hardly changed: we still record and read information in the form of sounds, images and coded symbols (letters). In many respects this constancy has become, so to speak, the common denominator that makes qualitative comparisons possible.
Methodology
To begin with, it is worth recalling the basics we will be operating with. The elementary carrier of information in the binary system is the bit, while the minimum unit of storage and processing in a computer is the byte, which in its standard form consists of 8 bits. The more familiar megabyte corresponds to: 1 MB = 1024 KB = 1,048,576 bytes.
These units are today the universal measure of the amount of digital data placed on a particular medium, so they will be very convenient to use in what follows. Their universality lies in the fact that a group of bits, essentially a cluster of numbers, a set of 1/0 values, can describe and thereby digitize any material phenomenon. Whether it is the most elaborate typeface, a picture or a melody, each consists of individual components, and each component is assigned its own unique digital code. Understanding this basic principle is what makes our comparison possible.
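To keep the arithmetic that follows easy to check, here is a minimal Python sketch of the unit conversions used throughout the article. The helper names are ours, introduced purely for illustration; binary prefixes (1 KB = 1024 bytes) are assumed, exactly as in the text.

```python
# Minimal conversion helpers for the calculations in this article.
# Binary prefixes are assumed: 1 KB = 1024 bytes, 1 MB = 1024 KB, and so on.

def bits_to_bytes(bits: float) -> float:
    return bits / 8

def bytes_to_kb(n_bytes: float) -> float:
    return n_bytes / 1024

def bytes_to_mb(n_bytes: float) -> float:
    return n_bytes / 1024 ** 2

print(bytes_to_kb(1_048_576))   # 1024.0 -> one megabyte expressed in kilobytes
print(bits_to_bytes(8))         # 1.0    -> eight bits make one byte
```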
The hard, analog childhood of civilization
The very evolutionary formation of our species threw people into the embrace of an analog perception of the surrounding world, which in many respects predetermined the fate of our technological development.

To the first glance of a modern person, the technologies that arose at the very dawn of mankind look very primitive, and the entire existence of mankind before the transition to the era of “digits” may seem unsophisticated in its details. But is that so? Was that “childhood” really so simple? Looking into the question, we can contemplate the rather unpretentious technologies for storing and processing information at the stage of their appearance. The first information carriers created by man were portable flat objects bearing images. Tablets and parchments made it possible not only to preserve information but also to process it more efficiently than ever before. At this stage the opportunity appeared to concentrate huge amounts of information in specially designated places: repositories.

The first known data centers, as we would call them now, and until recently called libraries, arose in the Middle East, between the Nile and the Euphrates, around the 2nd millennium BC. All this time, the format of the information carrier itself largely determined the ways of interacting with it. And it no longer matters much whether it was a clay tablet, a papyrus scroll or a standard A4 pulp-and-paper sheet: for all these thousands of years they were united by the analog way of writing data to the medium and reading it back.

The period during which the analog way of interaction between people and their informational belongings dominated stretched right up to our days; only very recently, already in the 21st century, did it finally give way to the digital format.
Having outlined the approximate temporal and semantic boundaries of the analog stage of our civilization, we can now return to the question posed at the beginning of this section: were the data storage methods we used until very recently, knowing nothing of iPads, flash drives and optical discs, really so inefficient?
Let's make a calculation
If we set aside the final stage of decline of analog storage technologies, which has lasted for the last 30 years, we must note with regret that for thousands of years these technologies underwent no significant changes. A real breakthrough in this area began relatively recently, at the end of the nineteenth century, but more on that below. Until the middle of that century, two main ways of recording data can be distinguished: writing and drawing. The essential difference between these methods of registering information, entirely independent of the medium that carries it, lies in the logic of how the information is registered.
Art
Drawing appears to be the simplest way of transmitting data, requiring no additional knowledge either when the data is created or when it is used, and is thereby, in effect, the native format of human perception. The more accurately the light reflected from the surfaces of surrounding objects onto the retina of the artist's eye is transferred to the surface of the canvas, the more informative the image will be. The imperfections of the transfer technique and of the materials used by the image's creator are the noise that will later interfere with accurately reading the information registered in this way.

How informative is an image, that is, what is the quantitative value of the information a drawing carries? At this stage of understanding the graphical transmission of information we can finally dive into the first calculations, and a basic computer science course will come to our aid.
Any raster image is discrete: it is simply a set of points. Knowing this property, we can translate the information it carries into units we understand. Since the presence or absence of a contrasting point is essentially the simplest binary code, 1/0, each point carries 1 bit of information. An image of, say, 100 x 100 points will then contain:
V = K × I = 100 × 100 × 1 bit = 10,000 bits = 1,250 bytes ≈ 1.22 KB
But let us not forget that this calculation is correct only for a monochrome image. For the far more commonly used color images, the amount of transmitted information naturally increases significantly. If we take 24-bit (“photographic quality”) encoding as a sufficient color depth, which, I recall, supports 16,777,216 colors, we get a much larger amount of data for the same number of points:
V = K × I = 100 × 100 × 24 bits = 240,000 bits = 30,000 bytes ≈ 29.30 KB
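As a quick, hedged sketch of this point-count arithmetic, the following Python snippet reproduces both figures; `image_volume_bits` is a hypothetical helper of ours, not a library function.

```python
def image_volume_bits(width_px: int, height_px: int, bit_depth: int) -> int:
    """Raw, uncompressed size of a raster image: number of points times bits per point."""
    return width_px * height_px * bit_depth

# 100 x 100 points, monochrome (1 bit per point)
mono = image_volume_bits(100, 100, 1)
print(mono / 8, "bytes,", round(mono / 8 / 1024, 2), "KB")   # 1250.0 bytes, ~1.22 KB

# 100 x 100 points, 24-bit "photographic" color
color = image_volume_bits(100, 100, 24)
print(round(color / 8 / 1024, 2), "KB")                      # ~29.3 KB
```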
As we know, a geometric point has no size, so in theory any area set aside for an image could carry an infinitely large amount of information. In practice, however, a distinguishable point has quite definite dimensions, and this lets us determine the amount of data.
Numerous studies have found that a person with average visual acuity, at a distance comfortable for reading (about 30 cm), can distinguish roughly 188 lines per centimeter, which roughly corresponds to the standard 600 dpi scanning resolution of household scanners. Consequently, from one square centimeter of a surface, the average person can read 188 × 188 points without any additional devices, which is equivalent to:
For a monochrome image:
Vm = K × I = 188 × 188 × 1 bit = 35,344 bits = 4,418 bytes ≈ 4.31 KB
For photo-quality images:
Vc = K × I = 188 × 188 × 24 bits = 848,256 bits = 106,032 bytes ≈ 103.55 KB
For greater clarity, from these values we can easily determine how much information is carried by such a familiar thing as an A4 sheet measuring 29.7 × 21 cm:
VA4 = L1 × L2 × Vm = 29.7 cm × 21 cm × 4.31 KB/cm² = 2,688.15 KB ≈ 2.62 MB — monochrome image
VA4 = L1 × L2 × Vc = 29.7 cm × 21 cm × 103.55 KB/cm² = 64,584.14 KB ≈ 63.07 MB — color image
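The same sketch extended to the 188 × 188 points per square centimeter figure and a full A4 sheet; only the values given in the text above are used, and small discrepancies with the rounded numbers in the text come from the order of rounding.

```python
POINTS_PER_CM = 188            # resolvable points per centimeter, from the text above
A4_W_CM, A4_H_CM = 21.0, 29.7  # A4 sheet dimensions

def density_kb_per_cm2(bit_depth: int) -> float:
    """Information readable from one square centimeter of an image, in KB."""
    return POINTS_PER_CM ** 2 * bit_depth / 8 / 1024

for bit_depth, label in [(1, "monochrome"), (24, "24-bit color")]:
    per_cm2 = density_kb_per_cm2(bit_depth)
    a4_mb = per_cm2 * A4_W_CM * A4_H_CM / 1024
    print(f"{label}: {per_cm2:.2f} KB/cm2, A4 sheet ~{a4_mb:.2f} MB")
```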
If the "picture" is more or less clear with fine art, then writing is not so simple. The obvious differences in the way information is transmitted between the text and the picture dictate a different approach in determining the information content of these forms. Unlike an image, writing is a type of standardized, encoded data transmission. Without knowing the code of words embedded in the letter and the letters that form them, the informative load, say of the Sumerian cuneiform writing, is equal to zero for most of us, while ancient images on the ruins of the same Babylon will be correctly perceived even by a person who is completely ignorant of the intricacies of the ancient world . It becomes quite obvious that the information content of the text extremely depends on whose hands it fell into, on the interpretation of it by a specific person.

Nevertheless, even under such circumstances, which somewhat undermine the fairness of our approach, we can quite unambiguously calculate the amount of information that was placed in texts on flat surfaces of various kinds.
Resorting to the binary coding system already familiar to us and to the standard byte, written text, which can be imagined as a set of letters forming words and sentences, is very easy to digitize into 1/0. The familiar 8-bit byte can take up to 256 different values, which is enough for a digital description of any existing alphabet together with digits and punctuation marks. Hence the conclusion: any standard character of an alphabetic script placed on a surface takes 1 byte in digital equivalent.
The situation is somewhat different with hieroglyphs, which have also been in wide use for several thousand years. By replacing a whole word with a single character, this encoding obviously uses the surface allotted to it far more efficiently, in terms of information load, than alphabet-based languages do. At the same time, the number of unique characters, each of which must be assigned its own non-repeating combination of 1s and 0s, is many times greater. In the most widespread hieroglyphic languages, Chinese and Japanese, statistics suggest that no more than 50,000 unique characters are actually in use; in Japanese even fewer: at present the country's Ministry of Education has designated a total of 1,850 characters for everyday use. In any case, the 256 combinations that fit into one byte are clearly not enough here; at least two bytes per character, giving 65,536 combinations, are required.

Current typesetting practice tells us that a standard A4 sheet can hold about 1,800 legible characters. After simple arithmetic we can establish how much information, in digital terms, one standard typewritten page carries in alphabetic and in the more information-dense hieroglyphic writing:
V = n × I = 1800 × 1 byte = 1,800 bytes ≈ 1.76 KB, or 2.89 bytes/cm²
V = n × I = 1800 × 2 bytes = 3,600 bytes ≈ 3.52 KB, or 5.78 bytes/cm²
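A similar sketch for text; the 1,800 characters per A4 page and the one or two bytes per character are taken from the text above, and the A4 area is plain arithmetic.

```python
CHARS_PER_A4 = 1800                 # legible characters on a standard A4 page
A4_AREA_CM2 = 29.7 * 21.0           # ~623.7 cm2

for bytes_per_char, script in [(1, "alphabetic"), (2, "hieroglyphic")]:
    total_bytes = CHARS_PER_A4 * bytes_per_char
    print(f"{script}: {total_bytes / 1024:.2f} KB per page, "
          f"{total_bytes / A4_AREA_CM2:.2f} bytes/cm2")
# alphabetic:   ~1.76 KB per page, ~2.89 bytes/cm2
# hieroglyphic: ~3.52 KB per page, ~5.77 bytes/cm2
```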
Industrial leap
The 19th century became a turning point both for the methods of recording analog data and for its storage, thanks to the emergence of revolutionary materials and recording methods that were destined to change the IT world. One of the main innovations was sound recording technology.

Thomas Edison's invention of the phonograph first brought into existence cylinders with grooves cut into them, and soon afterwards the first prototypes of the familiar disc record.
Responding to sound vibrations, the phonograph's cutter tirelessly scratched grooves into the surface, first of metal and later of polymers. Depending on the captured vibration, the cutter carved a winding groove of varying depth and width into the material, which made it possible to record sound and then reproduce the once-engraved vibrations purely mechanically.
At the presentation of Edison's first phonograph at the Paris Academy of Sciences there was an embarrassing incident: an elderly linguist, barely having heard human speech reproduced by a mechanical device, lost his temper and indignantly rushed at the inventor with his fists, accusing him of fraud. In the opinion of this respected member of the academy, metal could never repeat the melody of the human voice, and Edison himself was a common ventriloquist. We all know that this is certainly not the case. Moreover, in the twentieth century people learned to store sound recordings in digital form, and we will now dive into some numbers, after which it will become quite clear how much information fits on an ordinary vinyl record, the most characteristic and mass-produced representative of this technology.

Just as before with the image, here we will start from human abilities to perceive information. It is well known that the human ear most often perceives sound vibrations from 20 to 20,000 Hz; on the basis of this constant, the value of 44,100 Hz was adopted for the transition to digital sound, since for a correct conversion the sampling rate must be at least twice the highest frequency in the signal. Another important factor is the coding depth of each of those 44,100 samples per second: the more bits used to record the position of the sound wave at a given instant, the closer the digitized sound will be to the original. The combination chosen for the most widespread format today, used without compression on audio CDs, is 16-bit depth at 44.1 kHz. There are “roomier” combinations, up to 32 bit / 192 kHz, which would be more comparable to the actual sound quality of a gramophone recording, but we will use 16 bit / 44.1 kHz in the calculations. It was this combination that in the 1980s and 90s dealt a crushing blow to the analog audio recording industry, becoming in effect a full-fledged alternative to it.
And so, taking the announced parameters as the initial ones, we can calculate the digital equivalent of the amount of analog information carried by a gramophone recording:
V = f × I = 44,100 Hz × 16 bits = 705,600 bits/s = 88,200 bytes/s ≈ 86.13 KB/s
By this calculation we obtain the amount of information needed to encode one second of quality sound. Since record sizes varied, as did the density of the grooves on their surface, the amount of information on specific carriers of this kind also differed significantly. The maximum time of quality recording on one side of a 30 cm vinyl record was a little under 30 minutes, which was at the limit of the material's capabilities; usually this value did not exceed 20-22 minutes. It follows that the vinyl surface could at most accommodate:
Vv = V × t = 86.13 KB/s × 60 s × 30 = 155,034 KB ≈ 151.40 MB
while in practice it held no more than:
Vvf = 86.13 KB/s × 60 s × 22 = 113,691.6 KB ≈ 111.03 MB
The total area of such a record was:
S = π × r² = 3.14 × 15 cm × 15 cm = 706.50 cm²
That works out to roughly 160.93 KB of information per square centimeter of the record. Of course, the proportion will not scale linearly for other diameters, since we have taken the entire medium rather than the effective recording area.
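A sketch that strings the gramophone arithmetic together: the data rate of one second of 16-bit / 44.1 kHz sound (a single-channel stream, exactly as in the calculation above), the capacity of one 22-minute side, and the resulting surface density. Variable names are ours.

```python
import math

SAMPLE_RATE_HZ = 44_100
BIT_DEPTH = 16                 # single-channel stream, as assumed in the text

bytes_per_sec = SAMPLE_RATE_HZ * BIT_DEPTH / 8          # 88,200 bytes per second
kb_per_sec = bytes_per_sec / 1024                        # ~86.13 KB/s

side_minutes = 22                                        # a typical side of a 30 cm record
side_mb = kb_per_sec * 60 * side_minutes / 1024          # ~111 MB per side

record_area_cm2 = math.pi * 15 ** 2                      # the whole 30 cm disc, ~707 cm2
density_kb_cm2 = side_mb * 1024 / record_area_cm2        # ~161 KB/cm2

print(f"{kb_per_sec:.2f} KB/s, {side_mb:.2f} MB per side, {density_kb_cm2:.2f} KB/cm2")
```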
Magnetic tape
The last and perhaps the most effective carrier of data written and read by analog methods was magnetic tape. Tape is, in fact, the only medium that has quite successfully outlived the analog era.

The technology of recording information by magnetization was patented at the end of the nineteenth century by the Danish engineer Valdemar Poulsen, but unfortunately it did not become widespread at the time. The technology was first used on an industrial scale only in 1935 by German engineers, who built the first tape recorder on its basis. Over 80 years of active use, magnetic tape has undergone significant changes: different materials and different geometries of the tape itself were tried, but all these improvements rested on the same principle of magnetic recording of vibrations developed by Poulsen in 1898.
One of the most widely used formats was a tape consisting of a flexible base coated with one of the metal oxides (iron, chromium, cobalt). The width of the tape used in household reel-to-reel recorders was usually one inch (2.54 cm), and its thickness started from 10 microns; as for the length, it varied considerably between reels and most often ranged from a few hundred meters up to a thousand. For example, a reel 30 cm in diameter could hold about 1,000 m of tape.
Sound quality depended on many parameters of both the tape itself and the equipment reading it, but in general, with the right combination of these parameters, high-quality studio recordings could be made on magnetic tape. Higher quality was achieved by using more tape to record each unit of time: the more tape spent on a given moment of sound, the wider the spectrum of frequencies that could be transferred to the carrier. For studio-quality material the tape speed was at least 38.1 cm/s; for everyday listening a recording made at 19 cm/s was sufficient for a reasonably full sound. As a result, a 1,000 m reel could hold up to 45 minutes of studio-grade sound, or up to 90 minutes of content acceptable to the bulk of consumers. For technical recordings or speech, where the width of the frequency range on playback did not matter much, at a tape consumption of 1.19 cm/s the same reel could hold as much as 24 hours of sound.
Having a general idea of magnetic tape recording in the second half of the twentieth century, we can more or less correctly convert the capacity of reel media into familiar units of data volume, as we have already done for gramophone records.
A square centimeter of such a medium will contain:
Vo = V / (w × v) = 86.13 KB/s / (2.54 cm × 19 cm/s) ≈ 1.78 KB/cm²
Total capacity of a reel holding 1,000 meters of tape (90 minutes at 19 cm/s):
Vh = V × t = 86.13 KB/s × 60 s × 90 = 465,102 KB ≈ 454.20 MB
Do not forget that the actual footage of tape on a reel varied widely, depending first of all on the reel diameter and the tape thickness. Reels of convenient size holding 500-750 meters of tape were very common; for the ordinary music lover this meant roughly an hour of sound, quite enough to get through an average music album.
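And the same arithmetic for reel-to-reel tape, under the assumptions stated above: 2.54 cm tape width, 19 cm/s playback speed and the 86.13 KB/s digital equivalent of the audio stream.

```python
AUDIO_KB_PER_SEC = 86.13       # digital equivalent of one second of sound (see above)
TAPE_WIDTH_CM = 2.54           # tape width assumed in the text
TAPE_SPEED_CM_S = 19           # household playback speed

# The tape area consumed per second is width * speed, hence the surface density:
density_kb_cm2 = AUDIO_KB_PER_SEC / (TAPE_WIDTH_CM * TAPE_SPEED_CM_S)   # ~1.78 KB/cm2

reel_minutes = 90              # a 1000 m reel played at 19 cm/s
reel_mb = AUDIO_KB_PER_SEC * 60 * reel_minutes / 1024                    # ~454 MB

print(f"{density_kb_cm2:.2f} KB/cm2, {reel_mb:.2f} MB per 1000 m reel")
```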

The life of video cassettes, which used the same principle of recording an analog signal onto magnetic tape, was rather short but no less vivid. By the time this technology reached industry, the recording density on magnetic tape had increased dramatically: 180 minutes of video material, of very dubious quality by today's standards, fit onto a half-inch tape 259.4 meters long. The first video formats gave a picture of about 352x288 lines; the best examples reached 352x576. In terms of bitrate, the most advanced playback methods approached 3,060 kbit/s at a tape reading speed of 2.339 cm/s. A standard three-hour cassette could hold around 1,724.74 MB, which, on the whole, is not bad at all for its time.
Magic number
The appearance and widespread adoption of “digits”, that is, of binary coding, belongs entirely to the twentieth century. Although the philosophy of coding with the binary 1/0, Yes/No, had hovered around mankind at different times and on different continents, sometimes taking the most surprising forms, it finally materialized in 1937, when Claude Shannon, then a student at the Massachusetts Institute of Technology, drawing on the work of the great English mathematician George Boole, applied the principles of Boolean algebra to electrical circuits. This, in effect, became the starting point of cybernetics in the form in which we know it now.

In less than a hundred years, both the hardware and the software components of digital technology have gone through a huge number of major changes. The same is true of storage media. Starting from extremely inefficient paper-based digital media, we have arrived at highly efficient solid-state storage. In general, the second half of the last century passed under the banner of experiments and the search for new forms of media, which can succinctly be described as a universal jumble of formats.
Punch cards
Punch cards became perhaps the first step on the path of human-computer interaction. This form of communication lasted quite a long time; even now the medium can occasionally be found in certain research institutes scattered across the CIS.

One of the most common punch card formats was the IBM format introduced back in 1928, which also became the base format for Soviet industry. According to GOST, such a punch card measured 18.74 x 8.25 cm and could accommodate no more than 80 bytes, only about 0.52 bytes per cm². At this rate, 1 gigabyte of data would occupy roughly 861.52 hectares of punch cards, and one such gigabyte would weigh a little under 22 tonnes.
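A quick sanity check of the punch card density, using only the card dimensions and the 80 bytes per card quoted above; the number of cards per gigabyte is our own back-of-the-envelope addition.

```python
CARD_W_CM, CARD_H_CM = 18.74, 8.25   # GOST punch card dimensions
BYTES_PER_CARD = 80

card_area_cm2 = CARD_W_CM * CARD_H_CM                 # ~154.6 cm2
density = BYTES_PER_CARD / card_area_cm2               # ~0.52 bytes/cm2
cards_per_gib = 1024 ** 3 / BYTES_PER_CARD             # ~13.4 million cards

print(f"{density:.2f} bytes/cm2, {cards_per_gib:,.0f} cards per gigabyte")
```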
Magnetic tapes
In 1951 the first data carriers appeared that used pulsed magnetization of tape specifically to register “digits” on it. The technology made it possible to place up to 50 characters per centimeter of half-inch metal tape. Later it was seriously improved, increasing the number of values per unit area many times over and reducing the cost of the carrier material itself.

Today, according to the latest statements from Sony, the corporation's developments make it possible to place about 23 gigabytes of information on 1 cm² of tape. Such numbers suggest that magnetic tape recording has by no means outlived itself and has rather bright prospects for further use.
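To feel the scale of the improvement, here is a rough comparison of the 1951 figure with the modern one quoted above; converting “50 characters per centimeter of half-inch tape” to an areal density (one byte per character, 1.27 cm tape width) is our own assumption.

```python
# 1951: up to 50 characters per linear centimeter of half-inch (1.27 cm) tape.
old_bytes_per_cm2 = 50 / 1.27            # ~39 bytes/cm2, assuming one byte per character

# Modern figure quoted above: about 23 GB per square centimeter.
new_bytes_per_cm2 = 23 * 1024 ** 3

print(f"areal density grew roughly {new_bytes_per_cm2 / old_bytes_per_cm2:,.0f} times")
# on the order of six hundred million times
```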
Digital vinyl
Probably the most surprising way of storing digital data, but only at first glance. The idea of recording an existing program on a thin vinyl layer arose in 1976 at the American company Processor Technology. The essence of the idea was to make the information carrier as cheap as possible. Employees of the company took an audio tape with data recorded in the Kansas City Standard format and transferred it to vinyl. Besides lowering the cost of the medium, this solution made it possible to bind the pressed record into an ordinary magazine, which enabled small-scale distribution of programs.

In May 1977 the magazine's subscribers received, for the first time, a record bound into their issue; it contained a 4K BASIC interpreter for the Motorola 6800 processor. The recording lasted 6 minutes.
For obvious reasons the technology did not take root; the last such disc, officially called a Floppy-ROM, was released in September 1978, its fifth issue.
Hard disk drive
The first hard drive was introduced by IBM in 1956: the IBM 350 was supplied with the company's first mass-produced computer. The whole “hard disk” weighed 971 kg and was roughly the size of a wardrobe. It housed 50 disks, each 61 cm in diameter, and the total amount of information it could hold was a modest 3.5 megabytes.

The recording technology itself was, so to speak, derived from gramophone records and magnetic tape. The disks inside the case carried a multitude of magnetic marks, which were written onto them and read back by the movable head of the recorder. Like the pickup of a gramophone, at every moment of time the head moved across the surface of each disk, gaining access to the required cell, which carried a magnetic vector of a certain direction.

This technology is alive to this day and, moreover, is actively developing. Less than a year ago Western Digital released the world's first 10 TB hard drive: inside the case there are 7 platters, and the case is filled with helium instead of air.
Optical disc
Optical discs owe their appearance to the partnership of two corporations, Sony and Philips. The optical disc was presented in 1982 as a convenient digital alternative to analog audio media. With a diameter of 12 cm, the first samples could hold up to 650 MB, which at a sound quality of 16 bit / 44.1 kHz amounted to 74 minutes of audio, and this value was not chosen by chance. Beethoven's 9th Symphony, dearly loved, as the story goes, by one of the heads of the joint Sony-Philips development, lasts exactly 74 minutes, and now it could fit entirely on a single disc.
The technology of writing and reading the information is very simple: pits are burned into the mirror surface of the disc, and when the information is read they are registered optically and unambiguously as 1/0.

Optical storage technology is also flourishing in our year 2015. The technology we know as the Blu-ray disc, with four-layer recording, holds about 111.7 gigabytes of data on its surface and, at a fairly modest price, makes an ideal carrier for very “capacious” high-resolution films with deep color reproduction.
Flash memory
Solid-state storage is all the brainchild of a single technology: the principle, developed back in the 1950s, of recording data by registering an electric charge in an isolated region of a semiconductor structure. For a long time it found no practical implementation as a full-fledged information carrier. The main reason was the large size of transistors, which even at the maximum achievable density could not yield a product competitive on the storage market. The technology was remembered and periodically reintroduced throughout the 70s and 80s.

The real finest hour for solid-state drives came in the late 80s, when semiconductor elements reached acceptable sizes. In 1989 the Japanese company Toshiba presented a completely new type of memory, “flash”, as in a flash of light. The word itself symbolized very well the main pros and cons of carriers built on this technology: previously unprecedented data access speed, a rather limited number of rewrite cycles, and, for some of these media, the need for an internal power supply.
To date, the highest concentration of memory manufacturers have achieved is in the SDXC card standard: with dimensions of 24 x 32 x 2.1 mm, such cards can hold up to 2 TB of data.
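For comparison with the densities calculated earlier, a quick estimate of the areal density of such a card; treating 2 TB as 2 × 1024⁴ bytes and using only the 24 × 32 mm footprint is our simplification.

```python
CARD_W_CM, CARD_H_CM = 2.4, 3.2          # SDXC card footprint
CAPACITY_BYTES = 2 * 1024 ** 4           # 2 TB, binary prefixes

area_cm2 = CARD_W_CM * CARD_H_CM         # 7.68 cm2
gb_per_cm2 = CAPACITY_BYTES / area_cm2 / 1024 ** 3

print(f"~{gb_per_cm2:.0f} GB per cm2 of card area")   # ~267 GB/cm2
```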
The brain as a storage medium
All the carriers we have dealt with up to this point belong to the world of inanimate nature, but let us not forget that the very first store of information each of us deals with is the human brain.

The principles of how the nervous system functions are, in general terms, already clear today. And however surprising it may sound, the physical principles of the brain's operation are quite comparable to the principles on which modern computers are organized.
The neuron is the structural and functional unit of the nervous system; neurons form our brain. It is a microscopic cell of very complex structure that is, in effect, an analog of the transistor familiar to us. Neurons interact by means of various signals propagated with the help of ions, which in turn generate electric charges, creating a peculiar kind of circuit.
Even more interesting is the neuron's principle of operation: like its silicon counterpart, it switches between two states. In microprocessors the conditional 1/0 is represented by a difference in voltage levels; the neuron, in turn, has a membrane potential that at any moment takes one of two possible polarities, “+” or “-”. The significant difference between a neuron and a transistor lies in how quickly it can switch between these opposite 1/0 states: owing to its structural organization, which we will not examine in detail here, the neuron is thousands of times more inert than its silicon counterpart, which naturally affects its speed, that is, the number of operations it can process per unit of time.

But not everything is so sad for living matter. Unlike a computer, where processes run sequentially, the billions of neurons packed into the brain solve their tasks in parallel, which gives a number of advantages. Millions of these low-frequency “processors” quite successfully allow humans, in particular, to interact with the environment.

Having studied the structure of the human brain, the scientific community came to the conclusion that the brain is in fact an integral structure that already combines a computing processor, working memory and long-term memory. Because of the brain's very neural structure there are no clear physical boundaries between these “hardware components”, only blurred zones of specialization. This is confirmed by dozens of real-life precedents in which, for one reason or another, part of a person's brain, up to half of its total volume, was removed. Not only did such patients not turn into a “vegetable”; in some cases they recovered all their functions over time and lived happily to a ripe old age, living proof of the flexibility and perfection of our brain.

Returning to the topic of the article, we can draw an interesting conclusion: the structure of the human brain is actually similar to the solid-state storage discussed just above. Having made such a comparison, and remembering all its simplifications, we may ask how much data such a repository could hold. Surprisingly, we can get quite a definite answer; let us do the calculation.
Experiments carried out in 2009 by the neuroscientist Suzana Herculano-Houzel of the Federal University of Rio de Janeiro found that the average human brain, weighing about one and a half kilograms, contains about 86 billion neurons; I recall that scientists previously believed the average figure to be 100 billion neurons. Taking these numbers and equating each individual neuron to one bit, we get:
V = 86,000,000,000 bits / 8 = 10,750,000,000 bytes / 1024³ ≈ 10.01 GB
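The same one-neuron-one-bit arithmetic as a sketch; the assumption that a neuron stores exactly one bit is, of course, the deliberate simplification made in the text.

```python
NEURONS = 86_000_000_000        # Herculano-Houzel's estimate for an average brain

bits = NEURONS * 1              # one bit per neuron -- a deliberate simplification
gigabytes = bits / 8 / 1024 ** 3

print(f"~{gigabytes:.2f} GB")   # ~10.01 GB
```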
Is that a lot or a little, and how competitive could such a storage medium be? It is still very hard to say. Every year the scientific community reports more progress in studying the nervous systems of living organisms; one can even find reports of information being artificially introduced into the memory of mammals. But by and large, the secrets of how the brain thinks remain a mystery to us.

Conclusions
Although far from all types of data carriers, of which there are a great many, have been presented in this article, the most characteristic representatives have found a place in it. Summing up the material, we can clearly trace a pattern: the entire history of data carriers is built on inheritance from the stages preceding the current moment. The progress of the last 25 years in storage media rests on the experience of at least the previous 100-150 years, while the growth of storage capacity over the past quarter century has been exponential, a unique case in the whole of human history known to us.
Despite the seeming archaism of analog data recording, right up to the end of the 20th century it remained a fully competitive way of working with information. An album of high-quality images could contain gigabytes of data in digital equivalent, which until the early 1990s was simply physically impossible to place on an equally compact medium, not to mention the lack of acceptable methods of working with such data arrays.
The first shoots of optical disc recording and the rapid development of HDDs in the late 1980s broke the resistance of many analog recording formats in just one decade. Although the first music CDs did not differ much in quality from vinyl records, offering 74 minutes of recording versus 50-60 for a two-sided LP, their compactness, versatility and the anticipated further development of the digital direction finally buried the analog format for mass use.
The new era of information carriers on whose threshold we stand may significantly change the world we will live in 10-20 years from now. Already, advanced work in bioengineering gives us a superficial understanding of how neural networks operate and lets us control certain processes in them. Although the potential for placing data in structures similar to the human brain is not so great, there are things that should not be forgotten. The functioning of the nervous system itself remains rather mysterious simply because it is so little studied, and at a first approximation it is clear that the principles of placing and storing data in it follow somewhat different laws than those of analog and digital information processing. Just as in the transition from the analog stage of human development to the digital one, in the transition to the era of biological materials the two previous stages will serve as the foundation, a kind of catalyst for the next leap. The need to intensify work in the bioengineering direction was obvious earlier, but only now has the technological level of human civilization risen to a point where such work can really succeed. Whether this new stage in the development of IT will swallow the previous one, as we have already had the honor of observing, or will proceed in parallel with it, is too early to predict; but that it will radically change our lives is obvious.

The progress of a person’s technical thought has always been most quickly displayed precisely in the field of information technology. The methods of collecting, storing, systematizing, disseminating information are a red thread through the entire history of mankind. Breakthroughs, whether in the field of technical or humanities, one way or another, responded to IT. The civilization path traveled by mankind is a series of successive steps to improve the methods of storing and transmitting data. In this article, we will try to understand and analyze in more detail the main stages in the process of developing information carriers, to conduct their comparative analysis, starting from the most primitive - clay tablets, up to the latest successes in creating a machine-brain interface.

The task posed is really not comic, you look at what you swung at, the intrigued reader will say. It would seem, how is it possible, if at least elementary correctness is observed, to compare the essentially differing technologies of the past and today? The fact that the ways of perceiving information by a person is actually not strong and have undergone a change. The recording forms and forms for reading information by means of sounds, images and coded symbols (letters) remained the same. In many ways, this reality has become, so to speak, a common denominator, thanks to which it will be possible to make qualitative comparisons.
Methodology
To begin with, it is worth reviving the truths that we will continue to operate on. The elementary information carrier of the binary system is “bit”, while the minimum unit of storage and processing by the computer is “byte” in a standard form, the latter includes 8 bits. More habitual for our hearing megabytes corresponds to: 1 MB = 1024 KB = 1048576 bytes.
The given units at the moment are universal measures of the amount of digital data placed on a particular medium, so it will be very easy to use them in further work. Universality consists in the fact that with a group of bits, in fact, a cluster of numbers, a set of 1/0 values, any material phenomenon can be described and thereby digitized. It doesn’t matter if it is the most sophisticated font, picture, melody, all these things consist of separate components, each of which is assigned its own unique digital code. Understanding this basic principle makes our progress possible.
The hard, analog childhood of civilization
The very evolutionary formation of our species threw people into the embrace of analog perception of the space surrounding them, which in many respects predetermined the fate of our technological formation.

At the first glance of modern man, the technologies that originated at the very dawn of mankind are very primitive, the very existence of mankind before the transition to the era of “numbers” may seem to be not sophisticated in detail, but is it so, was it such a “childhood”? Asked by the study of this question, we can contemplate the rather unpretentious technologies of methods for storing and processing information at the stage of their appearance. The first of its kind information carrier created by man was portable areal objects with images printed on them. Plates and parchments made it possible not only to save, but also more efficiently than ever before, to process this information. At this stage, the opportunity appeared to concentrate a huge amount of information in specially designated places - repositories,

The first known data centers, as we call them now, until recently called libraries, arose in the Middle East, between the rivers Nile and Euphrates, about II thousand years BC. The format of the information carrier itself all this time has significantly determined the ways of interacting with it. And here it’s not so important anymore, a clay tablet, a papyrus scroll, or a standard A4 pulp and paper sheet, all these thousands of years have been closely combined by the analog way of entering and reading data from the medium.

The period of time over which it was the analogue way of interaction between a person and his informational belongings that successfully dominated the flesh to our days, only very recently, already in the 21st century, finally giving way to the digital format.
Having outlined the approximate temporal and semantic framework of the analogue stage of our civilization, we can now return to the question posed at the beginning of this section, since they are not efficient data storage methods that we had until very recently, without knowing about the iPad, flash drives and optical discs?
Let's make a calculation
If you discard the last stage of the decline in analog storage technology, which lasted for the last 30 years, we can regret to note that these technologies, by and large, have not undergone significant changes for thousands of years. Indeed, a breakthrough in this area began relatively recently, this is the end of the nineteenth century, but more on that below. Until the middle of the declared century, among the main ways of recording data, two main ones can be distinguished, this is writing and painting. A significant difference between these methods of information registration, completely independent of the medium on which it is carried, lies in the logic of information registration.
art
Painting seems to be the simplest way of transmitting data that does not require any additional knowledge, both at the stage of creation and use of data, thereby actually being the initial format perceived by a person. The more accurately the transmission of reflected light from the surface of surrounding objects to the retina of the scribe’s eye goes to the surface of the canvas, the more informative this image will be. The lack of thoroughness of the transmission technique and the materials used by the image creator are the noise that will interfere in the future for the accurate reading of information registered in this way.
How informative is the image, what is the quantitative value of the information carried by the figure. At this stage of understanding the process of transmitting information in a graphical way, we can finally plunge into the first calculations. In this, a basic computer science course will come to our aid.
Any raster image is discrete, it’s just a set of points. Knowing this property of him, we can translate the displayed information that it carries into units that are understandable to us. Since the presence / absence of a contrast point is actually the simplest binary code 1/0, then, and therefore, each point acquires 1 bit of information. In turn, the image of a group of points, say 100x100, will contain:
V = K * I = 100 x 100 x 1 bit = 10,000 bits / 8 bits = 1250 bytes / 1024 = 1.22 kbytes
But let's not forget that the above calculation is correct only for a monochrome image. In the case of much more frequently used color images, of course, the amount of transmitted information will increase significantly. If we accept a 24-bit (photographic quality) encoding as a condition of sufficient color depth, and it, I recall, has support for 16,777,216 colors, therefore we get a much larger amount of data for the same number of points:
V = K * I = 100 x 100 x 24 bits = 240,000 bits / 8 bits = 30,000 bytes / 1024 = 29.30 kbytes
As you know, a point has no size and, in theory, any area allotted for applying an image can carry an infinitely large amount of information. In practice, there are quite certain sizes and, accordingly, you can determine the amount of data.
On the basis of many studies, it was found that a person with average visual acuity, with a distance convenient for reading information (30 cm), can distinguish about 188 lines per 1 centimeter, which in modern technology approximately corresponds to the standard image scanning parameter of household scanners at 600 dpi . Consequently, from one square centimeter of the plane, without additional devices, the average person can count 188: 188 points, which will be equivalent:
For a monochrome image:
Vm = K * I = 188 x 188 x 1 bit = 35 344 bit / 8 bit = 4418 bytes / 1024 = 4.31 kbytes
For photo-quality images:
Vc = K * I = 188 x 188 x 24 bits = 848 256 bits / 8 bits = 106,032 bytes / 1024 = 103.55 kbytes
For greater clarity, on the basis of the calculations, we can easily determine how much information carries such a familiar leaflet of a format as A4 with dimensions of 29.7 / 21 cm:
VA4 = L1 x L2 x Vm = 29.7 cm x 21 cm x 4.31 kbytes = 2688.15 / 1024 = 2.62 MB - monochrome image
VА4 = L1 x L2 x Vm = 29.7 cm x 21 cm x 103.55 KB = 64584.14 / 1024 = 63.07 MB - color picture
Writing
If the "picture" is more or less clear with fine art, then writing is not so simple. The obvious differences in the way information is transmitted between the text and the picture dictate a different approach in determining the information content of these forms. Unlike an image, writing is a type of standardized, encoded data transmission. Without knowing the code of words embedded in the letter and the letters that form them, the informative load, say of the Sumerian cuneiform writing, is equal to zero for most of us, while ancient images on the ruins of the same Babylon will be correctly perceived even by a person who is completely ignorant of the intricacies of the ancient world . It becomes quite obvious that the information content of the text extremely depends on whose hands it fell into, on the interpretation of it by a specific person.

Nevertheless, even under such circumstances, somewhat eroding the justice of our approach, we can quite unambiguously calculate the amount of information that was placed in the texts on various kinds of flat surfaces.
Having resorted to the binary coding system already familiar to us and the standard byte, the written text, which can be imagined as a set of letters forming words and sentences, is very easy to digitize 1 / 0. The
usual 8-bit byte for us can take up to 256 different digital combinations, which should be enough for a digital description of any existing alphabet, as well as numbers and punctuation marks. Otsyudova begs the conclusion that any plotted standard character of an alphabetical letter on the surface takes 1 byte in digital equivalent.
The situation is a little different with the hieroglyphs, which have also been widely used for several thousand years. Replacing the whole word with one character, this encoding obviously uses the plane assigned to it from the point of view of the information load much more efficiently than this happens in languages based on the alphabet. At the same time, the number of unique characters, each of which needs to be assigned a non-repeated combination of combinations 1 and 0, is many times greater. In the most common existing hieroglyphic languages: Chinese and Japanese, according to statistics, no more than 50,000 unique characters are actually used, in Japanese and even less, at the moment, the country's Ministry of Education, for everyday use, has identified a total of 1850 hieroglyphs. In any case, 256 combinations fit in one byte can not do here. One byte is good

The current practice of using letters tells us that on a standard A4 sheet you can place about 1800 readable, unique characters. After simple arithmetic calculations it is possible to establish how much in digital terms one standard typewritten leaflet of alphabetical and more informative hieroglyphic letters will carry information:
V = n * I = 1800 * 1 byte = 1800/1024 = 1.76 kbytes or 2.89 bytes / cm2
V = n * I = 1800 * 2 bytes = 3600/1024 = 3.52 kbytes or 5.78 bytes / cm2
Industrial leap
The XIX century was a turning point, both for the methods of recording and storage of analog data, this was the result of the emergence of revolutionary materials and methods of recording information that were to change the IT world. One of the main innovations was the sound recording technology.

The invention of the phonograph by Thomas Edison first created the existence of cylinders with grooves deposited on them, and soon the first prototypes of optical discs.
Responding to sound vibrations, the phonograph cutter tirelessly made grooves on the surface of both metal and polymer later. Depending on the trapped vibration, the cutter applied a twisted groove of different depth and width to the material, which in turn made it possible to record sound and reproduce it purely mechanically, once engraved sound vibrations.
At the presentation of the first phonograph by T. Edison at the Paris Academy of Sciences, there was an embarrassment, one not a young, linguistic scientist, having barely heard a reproduction of human speech with a mechanical device, broke loose and the indignant rushed with his fists at the inventor, accusing him of fraud. According to this respected member of the academy, metal would never be able to repeat the melodies of the human voice, and Edison himself is an ordinary ventriloquist. But we all know that this is certainly not the case. Moreover, in the twentieth century, people learned to store sound recordings in digital format, and now we will plunge into some numbers, after which it will become quite clear how much information fits on a regular vinyl record (the material has become the most characteristic and mass representative of this technology) record.

In the same way as before with the image, here we will build on human abilities to capture information. It is widely known that most often the human ear is able to perceive sound vibrations from 20 to 20,000 Hertz, on the basis of this constant, a value of 44100 Hertz was adopted to switch to the digital sound format, since for the correct transition, the sampling frequency of the sound vibration should be two times its original value. Also, an important factor here is the coding depth of each of the 44100 oscillations. This parameter directly affects the number of bits inherent in one wave, the greater the position of the sound wave recorded in a particular second of time, the more bits it must be encoded and the more digitized sound will sound. The ratio of sound parameters, The one chosen for the most widespread format today, not distorted by the compression used on audio discs, is its 16 bit depth, with a resolution of 44.1 kHz. Although there are more “capacious” ratios of the given parameters, up to 32bit / 192 kHz, which could be more comparable with the actual sound quality of the recording grams, we will include the ratio of 16 bits / 44.1 kHz in the calculations. It was the chosen ratio in the 80-90s of the twentieth century that dealt a crushing blow to the analog audio recording industry, becoming in fact a full-fledged alternative to it. which could be more comparable with the actual sound quality of the recording grams, but we will include the ratio of 16 bits / 44.1 kHz in the calculations. It was the chosen ratio in the 80-90s of the twentieth century that dealt a crushing blow to the analog audio recording industry, becoming in fact a full-fledged alternative to it. which could be more comparable with the actual sound quality of the recording grams, but we will include the ratio of 16 bits / 44.1 kHz in the calculations. It was the chosen ratio in the 80-90s of the twentieth century that dealt a crushing blow to the analog audio recording industry, becoming in fact a full-fledged alternative to it.
And so, taking the announced parameters as the initial sound parameters, we can calculate the digital equivalent of the amount of analog information that the recording technology carries:
V = f * I = 44100 Hertz * 16 bits = 705600 bps / 8 = 8820 bytes / s / 1024 = 86.13 kB / s
By calculation, we received the necessary amount of information to encode 1 second of sound quality recording. Since the sizes of the plates varied, just like the density of the grooves on its surface, the amount of information on specific representatives of such a carrier also differed significantly. The maximum time for high-quality recording on a vinyl record with a diameter of 30 cm was less than 30 minutes on one side, which was on the verge of the possibilities of the material, but usually this value did not exceed 20-22 minutes. Having this characteristic, it follows that the vinyl surface could accommodate:
Vv = V * t = 86.13 kbytes / sec * 60 sec * 30 = 155034 kbytes / 1024 = 151.40 MB
and in fact it was placed no more:
Vvf = 86.13 kbytes / sec * 60 sec * 22 = 113691.6 kb / 1024 = 111.03 mb
The total area of such a plate was:
S = π * r ^ 2 = 3.14 * 15 cm * 15 cm = 706.50 cm2
In fact, 160.93 kbytes of information per square centimeter of the plate, of course, the proportion for different diameters will not vary linearly, since here it is taken not the effective recording area, but the entire medium.
Magnetic tape
The last and perhaps the most effective carrier of data applied and read by analog methods was magnetic tape. The tape is actually the only medium that has quite successfully survived the analog era.

The technology of recording information by magnetization was patented at the end of the nineteenth century by the Danish physicist Voldemar Poultsen, but unfortunately, then it did not become widespread. For the first time, technology on an industrial scale was used only in 1935 by German engineers, the first film tape recorder was created on its basis. Over 80 years of its active use, magnetic tape has undergone significant changes. Different materials, different geometric parameters of the tape itself were used, but all these improvements were based on a single principle, developed by Poultsen in 1898, of magnetic recording of vibrations.
One of the most widely used formats was a tape consisting of a flexible base on which one of the metal oxides was applied (iron, chromium, cobalt). The width of the tape used in household audio tape recorders was usually one inch (2.54 cm), the thickness of the tape began from 10 microns, as for the length of the tape, it varied significantly in different skeins and most often ranged from hundreds of meters to a thousand. For example, a reel with a diameter of 30 cm could fit about 1000 m of tape.
The sound quality depended on many parameters, both the tape itself and the equipment reading it, but in general, with the right combination of these same parameters, it was possible to make high-quality studio recordings on magnetic tape. Higher sound quality was achieved by using more tape to record a unit of time of sound. Naturally, the more tape is used to record the moment of sound, the wider the spectrum of frequencies managed to be transferred to the carrier. For studio, high-quality materials, the speed of recording onto tape was at least 38.1 cm / sec. When listening to records in everyday life, for a sufficiently complete sound, a recording made at a speed of 19 cm / sec was enough. As a result, up to 45 minutes of studio sound, or up to 90 minutes acceptable for the bulk of consumers, could fit on a 1000 m reel, content. In cases of technical recordings or speeches for which the width of the frequency range during playback did not play a special role, with a tape consumption of 1.19 cm / sec on the aforementioned reel, it was possible to record sounds as much as 24 hours.
Having a general idea of the technology of recording on magnetic tape in the second half of the twentieth century, it is possible to more or less correctly convert the capacity of bobbin carriers into units of measurement of data volume that we understand, as we already did for gramophone recording.
At a playback speed of 19 cm/s, a square centimeter of such a medium holds:
Vo = V / (w * v) = 86.13 kB/s / (2.54 cm * 19 cm/s) = 1.78 kB/cm2
The total capacity of a reel holding 1000 meters of tape (roughly 90 minutes at that speed) is:
Vh = V * t = 86.13 kB/s * 60 s * 90 = 465,102 kB / 1024 = 454.20 MB
Keep in mind that the actual footage of tape on a reel varied widely, depending above all on the reel diameter and the tape thickness. Thanks to their convenient dimensions, reels holding 500...750 meters of tape were especially common; for an ordinary music lover this meant roughly an hour of sound, quite enough for an average music album.
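The same conversions for reel-to-reel tape can be sketched in a few lines. This repeats the arithmetic above: the tape width and speeds are the ones quoted in this section, and the 86.13 kB/s audio data rate is carried over from the gramophone calculation.

```python
# Reel-to-reel tape: areal density and reel capacity, using the figures above.
DATA_RATE_KB_S = 86.13    # kB per second of quality sound
TAPE_WIDTH_CM = 2.54      # tape width used in the article's calculation

def density_kb_cm2(speed_cm_s):
    """kB stored on one cm2 of tape at a given playback speed."""
    return DATA_RATE_KB_S / (TAPE_WIDTH_CM * speed_cm_s)

def playtime_min(length_m, speed_cm_s):
    """Playing time of a reel of `length_m` meters, in minutes."""
    return length_m * 100 / speed_cm_s / 60

def capacity_mb(minutes):
    """Data equivalent of `minutes` of sound, in MB."""
    return DATA_RATE_KB_S * 60 * minutes / 1024

print(density_kb_cm2(19))          # ~1.78 kB/cm2 at 19 cm/s
print(playtime_min(1000, 38.1))    # ~43.7 min of studio-grade sound on 1000 m
print(playtime_min(1000, 19))      # ~87.7 min (rounded up to 90 in the text)
print(capacity_mb(90))             # ~454.2 MB for the rounded 90 minutes
```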

The life of video cassettes, which used the same principle of recording an analog signal onto magnetic tape, was rather short, but no less bright for it. By the time this technology came into industrial use, the recording density of magnetic tape had increased dramatically. 180 minutes of video material, of very dubious quality by today's standards, fit onto a half-inch tape 259.4 meters long. The first video formats produced a picture at roughly 352x288, with the best samples reaching 352x576. In terms of bitrate, the most advanced recording and playback methods approached 3060 kbit/s, with the tape read at 2.339 cm/s. A standard three-hour cassette could hold around 1724.74 MB, which is not bad at all for an analog format.
The magic of digits
The appearance and widespread adoption of digital (binary) coding is entirely a product of the twentieth century. Although the philosophy of coding with a binary 1/0, Yes/No, had hovered around mankind at various times and on various continents, sometimes taking the most surprising forms, it finally materialized in 1937. Claude Shannon, then a graduate student at the Massachusetts Institute of Technology, building on the work of the great British mathematician George Boole, applied the principles of Boolean algebra to electrical circuits, which in effect became the starting point for cybernetics in the form we know it today.

In less than a hundred years, both the hardware and the software components of digital technology have undergone a huge number of major changes. The same is true of the media. Starting from hugely inefficient paper-based digital media, we have arrived at highly efficient solid-state storage. In general, the second half of the last century passed under the banner of experiment and the search for new forms of media, a period that can be succinctly described as a universal jumble of formats.
Punch cards
Punch cards were perhaps the first step on the path of computer-human interaction. This kind of communication lasted quite a long time; even now the medium can still be found in certain research institutes scattered across the CIS.

One of the most common punch card formats was the IBM format introduced back in 1928, which also became the standard for Soviet industry. According to GOST, such a punch card measured 18.74 x 8.25 cm and could hold no more than 80 bytes, only about 0.52 bytes per cm2. At that density, 1 gigabyte of data would correspond to roughly 861.52 hectares of punch cards, and one such gigabyte would weigh a little less than 22 tons.
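As a quick back-of-the-envelope check, the density figure follows directly from the 80-bytes-per-card capacity and the GOST card dimensions quoted above; the hectare and tonnage estimates depend on additional assumptions about card spacing and weight, so only the density and the raw card count are sketched here.

```python
# Punch card density check using the figures quoted above.
CARD_W_CM, CARD_H_CM = 18.74, 8.25   # GOST card dimensions
BYTES_PER_CARD = 80                  # one 80-column card ~ 80 characters

area_cm2 = CARD_W_CM * CARD_H_CM                 # ~154.6 cm2
density = BYTES_PER_CARD / area_cm2              # ~0.52 bytes per cm2
cards_per_gb = (1024 ** 3) / BYTES_PER_CARD      # ~13.4 million cards per gigabyte

print(round(area_cm2, 1), round(density, 2), round(cards_per_gb))
```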
Magnetic tapes
In 1951, the first data carriers appeared that used pulsed magnetization of tape specifically to register "digits". The technology made it possible to fit up to 50 characters per centimeter of half-inch metal tape. It was later substantially improved, increasing the density of stored values per unit area many times over and reducing the cost of the carrier material itself.

Today, according to the latest statements by Sony, their nano-scale developments make it possible to place 23 gigabytes of information on 1 cm2 of tape. Such figures suggest that magnetic tape recording has by no means outlived itself and has rather bright prospects ahead.
Gramophone record
Probably the most surprising method of storing digital data, but only at first glance. The idea of pressing an existing program onto a thin vinyl layer arose in 1976 at the American company Processor Technology. The essence of the idea was to make the information carrier as cheap as possible. The company's employees took an audio tape with data recorded in the existing Kansas City Standard sound format and transferred it to vinyl. Besides reducing the cost of the medium, this solution made it possible to bind the pressed record into a regular magazine, which enabled small-scale distribution of programs.

In May 1977, magazine subscribers received a record with their issue for the first time; it carried a 4K BASIC interpreter for the Motorola 6800 processor and played for 6 minutes.
For obvious reasons the technology did not take root; the last such disc, the so-called Floppy-ROM, was officially released in September 1978, its fifth issue.
Winchesters
The first hard drive, the IBM 350, was introduced by IBM in 1956 and shipped with the company's first mass-produced computer. The whole "hard disk" weighed 971 kg and was roughly the size of a wardrobe. It housed 50 platters 61 cm in diameter, and the total amount of information it could hold was a modest 3.5 megabytes.

The data recording technology itself was, so to speak, a descendant of gramophone recording and magnetic tape. The disks inside the case carried a multitude of magnetic pulses, which were written onto them and read back by a movable head. Like a gramophone pickup, at every moment the head moved over the surface of one of the disks, gaining access to the required cell, which carried a magnetic vector of a particular orientation.

This technology is still very much alive and, moreover, actively developing. Less than a year ago, Western Digital released the world's first 10 TB hard drive: inside the case there are 7 platters, and instead of air the case is filled with helium.
Optical discs
Optical discs owe their appearance to a partnership between two corporations, Sony and Philips. The optical disc was presented in 1982 as a fitting digital alternative to analog audio media. With a diameter of 12 cm, the first samples could hold up to 650 MB, which at a sound quality of 16 bit / 44.1 kHz amounted to 74 minutes of audio, and that value was not chosen at random: Beethoven's 9th Symphony, dearly loved by one of Sony's executives, runs exactly 74 minutes, and now it could fit entirely on one disc.
The technology for writing and reading the information is very simple: recesses (pits) are burned into the mirror surface of the disc, and when the disc is read optically they are unambiguously registered as 1s and 0s.
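For the curious, the 74-minute figure can be checked directly from the 16 bit / 44.1 kHz stereo parameters. A small sketch follows; note that raw stereo PCM for 74 minutes actually works out to roughly 747 MB, while the familiar 650 MB figure refers to the disc's data-mode capacity, where part of each sector goes to extra error correction.

```python
# Raw PCM data rate for CD-quality audio and the data volume of 74 minutes.
SAMPLE_RATE = 44_100          # samples per second
BITS_PER_SAMPLE = 16
CHANNELS = 2                  # stereo

bytes_per_sec = SAMPLE_RATE * BITS_PER_SAMPLE // 8 * CHANNELS   # 176,400 B/s
pcm_74_min_mb = bytes_per_sec * 74 * 60 / 1024 ** 2             # ~747 MB of raw audio

# A data CD holds ~650 MB because a Mode 1 sector carries 2048 data bytes
# out of 2352 raw bytes; the rest is sync, header, and error-correction overhead.
print(bytes_per_sec, round(pcm_74_min_mb, 1))
```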

Optical storage technology is still booming here in 2015. The technology we know as the Blu-ray disc, with four-layer recording, holds about 111.7 gigabytes of data on its surface and, given its modest price, makes an ideal carrier for very "weighty" high-resolution films with deep color reproduction.
Solid State Drives, Flash Memory, SD Cards
All of these are offspring of a single technology: the principle, developed back in the 1950s, of recording data by registering an electric charge in an isolated region of a semiconductor structure. For a long time it found no practical implementation as a full-fledged information carrier. The main reason was the large size of transistors, which even at the highest achievable density could not yield a competitive product on the storage market. The technology was remembered, and attempts to introduce it were made periodically throughout the 70s and 80s.

The finest hour of solid-state storage truly arrived in the late 80s, when semiconductor feature sizes shrank to acceptable values. In 1989 the Japanese company Toshiba presented a completely new type of memory, "flash". The word itself neatly symbolizes the main pros and cons of media built on this technology: previously unheard-of data access speed, a rather limited number of rewrite cycles, and, for some of these media, the need for an internal power source.
To date, manufacturers have achieved the highest concentration of memory in the SDXC card standard: with dimensions of 24 x 32 x 2.1 mm, such cards can hold up to 2 TB of data.
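Purely for comparison with the densities computed for the older media, here is a rough areal-density estimate for such a card. It counts only the card's footprint and ignores the 2.1 mm thickness (a real card stacks silicon dies in that volume, so this is a deliberately crude figure).

```python
# Rough areal density of a 2 TB SDXC card, counting only its footprint.
CARD_W_CM, CARD_H_CM = 2.4, 3.2     # 24 x 32 mm footprint
CAPACITY_GB = 2 * 1024              # 2 TB expressed in GB

footprint_cm2 = CARD_W_CM * CARD_H_CM            # 7.68 cm2
density_gb_cm2 = CAPACITY_GB / footprint_cm2     # ~266.7 GB per cm2

print(footprint_cm2, round(density_gb_cm2, 1))
```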
Cutting edge scientific progress
All the carriers we have dealt with up to this point belong to the world of non-living nature, but let's not forget that the very first store of information each of us encounters is the human brain.

The principles by which the nervous system functions are, in general terms, already understood today. And however surprising it may sound, the physical principles of the brain's operation are quite comparable to the way modern computers are organized.
The neuron is the structural and functional unit of the nervous system and the building block of our brain: a microscopic cell of very complex structure that is, in effect, an analog of the familiar transistor. Neurons interact through signals that propagate with the help of ions, which in turn generate electric charges, forming a rather unusual circuit.
Even more interesting is the neuron's principle of operation: like its silicon counterpart, this structure switches between two binary states. In a microprocessor the difference between voltage levels is taken as a conditional 1/0; the neuron, in turn, works with a potential difference and at any moment can take one of two possible polarities, either "+" or "-". A significant difference between a neuron and a transistor lies in how quickly it can flip between those opposite 1/0 values: because of its structural organization, which we will not dwell on in detail, it is thousands of times more inert than its silicon counterpart, which naturally limits its speed, i.e. the number of operations it can process per unit of time.

But not everything is so bleak for living matter: unlike a computer, where processes run sequentially, the billions of neurons woven into the brain solve tasks in parallel, which brings a number of advantages. Millions of these low-frequency processors quite successfully allow humans, in particular, to interact with their environment.

Having studied the structure of the human brain, the scientific community concluded that the brain is in fact an integral structure that already combines a computing processor, working memory, and long-term memory. Because of the brain's neural structure there are no clear physical boundaries between these "hardware" components, only blurred zones of specialization. This is confirmed by dozens of real-life cases in which, for one reason or another, people had part of the brain removed, up to half its total volume. Far from turning into "vegetables", such patients in some cases recovered all of their functions over time and happily lived to a ripe old age, living proof of the flexibility and sophistication of our brain.

Returning to the topic of the article, we can draw an interesting conclusion: the structure of the human brain is actually similar to the solid-state storage discussed just above. Having made such a comparison, and keeping all of its simplifications in mind, we may ask how much data could fit into this repository. Surprisingly enough, we can get a fairly definite answer; let's do the calculation.
Experiments carried out in 2009 by the neuroscientist Suzana Herculano-Houzel of the Federal University of Rio de Janeiro showed that the average human brain, weighing about one and a half kilograms, contains roughly 86 billion neurons (previously, scientists believed the average figure to be 100 billion). Taking these numbers and equating each individual neuron to a single bit, we get:
V = 86,000,000,000 bits / (1024 * 1024 * 1024) = 80.09 gigabits, and 80.09 gigabits / 8 = 10.01 gigabytes
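The same one-neuron-equals-one-bit estimate in code form, purely as a sketch of the arithmetic above (the assumption itself is, of course, a huge simplification):

```python
# Brain capacity under the crude "one neuron = one bit" assumption.
NEURONS = 86_000_000_000            # ~86 billion neurons (Herculano-Houzel, 2009)

gigabits = NEURONS / 1024 ** 3      # ~80.09 gigabits
gigabytes = gigabits / 8            # ~10.01 gigabytes

print(round(gigabits, 2), round(gigabytes, 2))
```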
Is that a lot or a little, and how competitive could such a storage medium be? It is still very hard to say. Every year the scientific community delights us with new progress in studying the nervous systems of living organisms; there are even reports of information being artificially introduced into the memory of mammals. But by and large, the secrets of how the brain thinks remain a mystery to us.

Summing up
Although far from all types of data carriers could be covered in this article (there are a great many of them), the most characteristic representatives have found a place here. Summing up the material, a clear pattern emerges: the entire history of data carriers is built on inheritance from the stages that came before. The progress of the past 25 years in storage media rests on the experience of at least the previous 100...150 years, while the growth in storage capacity over the past quarter century has been exponential, a case unique in all of recorded human history.
Despite its seemingly archaic nature, analog data recording remained a fully competitive way of working with information right up to the end of the 20th century. An album of high-quality images could contain gigabytes of data in digital equivalent, which until the early 1990s was simply physically impossible to place on an equally compact medium, to say nothing of the lack of acceptable methods for working with such data arrays.
The first shoots of optical disc recording and the rapid development of HDDs in the late 1980s broke the grip of many analog recording formats in just one decade. Although the first music CDs did not differ dramatically in quality from vinyl records, offering 74 minutes of recording versus 50-60 (across two sides), their compactness, versatility, and the expected further development of the digital direction finally buried the analog format for mass use.
A new era of information carriers, on the threshold of which we now stand, may significantly reshape the world we will live in 10...20 years from now. Already, advanced work in bioengineering lets us grasp, if only superficially, the principles by which neural networks operate and control certain processes within them. Although the potential for placing data in structures similar to the human brain does not look that large, there are things that should not be forgotten. The functioning of the nervous system itself remains rather mysterious, simply because we still know so little about it; at a first approximation it is clear that data are placed and stored in it according to somewhat different laws than those that hold for analog and digital methods of processing information. Just as in the transition from the analog stage of human development to the digital one, in the transition to the era of biological media the two previous stages will serve as a foundation, a kind of catalyst for the next leap. The need to step up work in the bioengineering direction was obvious before, but only now has the technological level of human civilization risen to the point where such work can really succeed. Whether this new stage in the development of IT will swallow the previous one, as we have already had occasion to observe, or whether the two will proceed in parallel, is too early to predict; but that it will radically change our lives is obvious.
