Being technophobic is pointless, even if technophobia is justified



    Several of Kurt Vonnegut's novels feature the fictional planet Tralfamadore. Its inhabitants live in four dimensions and see all of time at once, from beginning to end. They know how the universe began, and they know how it will die: a Tralfamadorian test pilot will fire up an experimental super-engine, it will explode and destroy everything. Yet they make no attempt to prevent the catastrophe. There is no hint in their thinking that the course of events ought to be changed. They keep progressing toward that engine, because in their world everything that happens has already happened.

    Sometimes it seems to me that we think the same way, only a little less consciously. The idea that progress cannot be stopped gives us unshakable optimism: if it cannot be stopped, then everything is going as it should, and endless successes lie ahead. All we have to do is relax and go with the flow. Even disturbing scenarios stir a corner of the mind with romantic delight. "Will the machines become smart and kill us all? Cool! Just like in the movies!" Treating them with serious pessimism is regarded as bordering on insanity.

    Granted, reality is always more boring than fiction, and alarmists and Luddites usually die before progress justifies their fears. But according to some futurologists, even within our lifetime we may witness major points of no return, toward which we are happily rushing at full speed.



    Recently, fillpackart wrote a column on the levels of abstraction in modern programming languages. His argument was that between the code and its execution on the hardware there are now so many automated layers of interpretation and compilation that we are beginning to lose our grip on low-level things.
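
    To get a feel for how thick that stack is, here is a tiny sketch in Python (the language is my choice purely for illustration; the column wasn't tied to any one language). The standard library's dis module exposes the bytecode layer that CPython actually executes, itself only one of many layers above the hardware:

        # Peek one abstraction layer down: dis shows the bytecode that
        # CPython executes for a one-line function.
        import dis

        def add(a, b):
            return a + b

        dis.dis(add)
        # Prints instructions along the lines of LOAD_FAST a, LOAD_FAST b,
        # BINARY_ADD (BINARY_OP on newer CPythons), RETURN_VALUE.
        # And even this is still abstract: below it sit the interpreter
        # loop, the C runtime, the OS scheduler, and the CPU's microcode.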

    We discussed the topic at length, and of course we don't think all programmers should abandon their familiar languages en masse and switch to a hammer and a soldering iron. Abstraction is great. The modern capabilities, the ease of solving problems, the level of comfort: everything is wonderful. Being a developer has probably never been cooler.

    But the world is full of paradoxes. Thinking about what will happen to the industry in ten years feels important; in 60, 70, or 170 years, somehow much less so. It's as if it doesn't concern us: even if we live that long, by then atmospheric pressure and calls from our grandchildren will matter more to us. Still, imagine for a second what happens if the layers of abstraction keep thickening and programming keeps getting more comfortable. If we automate, and automate, and automate.

    I think that in this case the descendants of our descendants will inherit a "black box": a technology that works on its own, understood only by itself and by us, its long-dead creators, who left behind some old-fashioned, unreadable documentation. Such a technology will not need to be understood, only serviced. And I'm sure the IT business will build its culture on this, and every manifesto will read: "Don't dig into the core of the technology, just let it solve problems."

    Just imagine the conversations then, in the year 2199. Probably something in the spirit of: "According to our observations, if we say the word 'fulfill' to the system, it finishes 0.8 seconds faster than if we say 'do.' And if you hold a three-second pause at the end of the key phrase and then add 'asparagus,' conversion increases by 3%."

    And if you have a better opinion of humanity, look at the people who post their link in the first comment on Facebook. Already today we perform "rationally justified" dances with a tambourine around the fire, just so a popular algorithm will hide our posts from the feed a little less.





    The really important question is not "Will technology turn into a black box?" but "When will it happen?" And there are people much smarter than me who believe that magic will replace science sooner than I had thought.

    Last summer I listened to a talk by Leonid Tkachenko from MTS at a big data conference. First he spoke about the death of telecom, then he entertained the audience with predictions:
    There is a theory of the technological singularity. Human intelligence grows quite slowly, while artificial intelligence grows fast. We progress in a crude way, mating by chance with whoever happens to catch our eye, so human evolution is slow. We are the same as we were a million and a half years ago. The brain takes ages to progress because everything happens at random.

    With AI, we train it purposefully to make it better and better. It doesn't mate with anyone by accident; it does what we want. And by some estimates, artificial intelligence will catch up with human intelligence in 2030–40.

    This means technological progress will go on. It will not stop, but it will be driven by AI. We will not be able to understand what it comes up with, and it will keep making technological breakthroughs.

    And there is a separate ethical question: will it work against us? How much will we be able to control it, given that it will be stronger? Will it work for our good, or will we turn into cows?

    Cows live on Earth; no one exterminates them. But cows don't even understand that the Earth is also inhabited by people, who use them however they like. Yes, cows haven't been wiped off the face of the Earth, but the masters of life right now are, of course, you and me. At some point someone else may become the master. That can happen, and the moment is near.

    When Leonid said this, he was relying on the predictions of Vernor Vinge and Raymond Kurzweil. Vinge is a writer, and although the term "technological singularity" belongs to him, his forecast is the more optimistic (pessimistic?) of the two: supposedly we will lose control over technological progress as early as 2030. Kurzweil is an engineer and allowed a bit more time: until 2045.

    Both forecasts rested on Moore's law, understood as a constant growth in processor speed. "It's all about hardware and software," Kurzweil said in 2006:
    In my book The Singularity Is Near, I wrote that we need to reach 10 quadrillion (10^16) operations per second to provide the functional equivalent of all regions of the brain (by some estimates, even less would do). Some supercomputers are already at around 100 trillion (10^14) and will reach 10^16 by the end of this decade.

    Supercomputers performing one quadrillion operations per second are already being designed, and two Japanese manufacturers plan to reach 10^16 within a few years. By 2020, computers performing 10 quadrillion operations per second will cost a thousand dollars.

    That hardware would reach such capacities was a controversial claim when I wrote my first book in 1999; now it is a very common opinion. The disputes today are mostly about algorithms.

    But now, in 2019, you can see that Kurzweil's forecasts, if they haven't failed outright, are at least receding. First, algorithms and ways of scaling hardware took slightly different paths: around the time Kurzweil said this, manufacturers began switching to multi-core architectures, which somewhat complicated Moore's law. And the most powerful supercomputer today, whose design began in that same 2006, manages only two hundred trillion operations per second. Full reconstruction of the brain is still a long way off, and a thousand-dollar price tag even further.
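
    To see the arithmetic such forecasts lean on, here is a back-of-the-envelope sketch in Python. The two-year doubling period is my assumption for illustration, not Kurzweil's exact model:

        # Extrapolate supercomputer performance from the figures in the
        # 2006 interview, assuming a Moore's-law-style doubling every
        # two years.
        import math

        start_year, start_ops = 2006, 1e14   # "around 100 trillion" in 2006
        target_ops = 1e16                    # brain-equivalent threshold
        years_per_doubling = 2.0             # assumed doubling period

        doublings = math.log2(target_ops / start_ops)
        print(f"{doublings:.1f} doublings needed, "
              f"i.e. around {start_year + doublings * years_per_doubling:.0f}")
        # -> 6.6 doublings needed, i.e. around 2019
        # The curve holds only if every doubling arrives on schedule; once
        # clock speeds stalled and the gains moved into extra cores, which
        # not every algorithm can exploit, the neat extrapolation slipped.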

    Kurzweil's other predictions now seem outright fantastical. The first attempt to popularize VR, for example, failed, although Kurzweil was counting on a full-fledged move into the virtual world a couple of years ago. Nanorobots implanted in the head to radically expand our abilities likewise exist only in films and books.

    On the other hand, almost every technological failure is now answered with "the time just hasn't come yet." No logic can sweep the optimism, the pessimism, and the sense of inevitability out of our heads; it is as if everything predicted has already happened on Tralfamadore and we simply can't make out the exact date.





    If the question of "when" can only be settled by waiting, the choice between optimism and pessimism is one we have to make ourselves.

    Kurzweil doesn't seem to have a single drop of technophobia in him. He is a rare optimist and believes to this day that the singularity is approaching, and that this is not a problem but a challenge, our license to finally start augmenting organic brains with hardware and microprocessors. And beyond the horizon of the singularity lies not a loss of control over technology, not a scenario in which AI wakes up and kills us all, but a revolution after which a human being will finally cease to be a purely biological creature.

    Jason Silva, host of the show Origins on National Geographic, believes this has already happened; we simply didn't notice, because it happened not on a sci-fi scale but the way everything happens in reality: gradually and mundanely:
    Kurzweil and Kevin Kelly say we will keep augmenting our thinking, handing more and more of our information processing over to non-biological media. AI will not rebel against us; rather, we will make our own intelligence more and more artificial.

    But we are already offloading part of our consciousness onto artificial media. When we write something down on paper, part of the thinking happens right there on the paper. Part of the thinking is the movement of the pen. Part of the thinking happens when you look at your own thoughts, offloaded onto paper, and react to what you yourself have written.

    The artificial environment is already part of the thinking apparatus. The philosophers David Chalmers and Andy Clark proposed the "extended mind" thesis, according to which a smartphone is already an extension of thought: thinking is not confined to the brain, and the mind actually exists in the feedback loop between the brain, its tools, and the environment. That is why we say that our thoughts shape the environment and the environment shapes our thoughts. Everything we create creates us in turn. There is no "us versus them." There is one large distributed intelligence made up of biological and non-biological parts.

    So I think we have nothing to fear. These are just billions of small steps expanding our ability to create.

    Their optimism rests on the idea that nothing fundamentally new will happen and there will be nothing to lose control over: in developing technology we are upgrading our own consciousness, and the two are inseparable. If you look closely, this makes sense. What strikes me most, for example, is our inner sense of time, which is built entirely on tools.

    Having invented hours, minutes, and seconds and written them down on paper, we began, seemingly subconsciously, to measure everything with them. You can immerse yourself in some task and, on finishing it, know without a moment's thought: "I spent three hours on that." You can get so carried away by a film that you forget everything, yet when the credits roll you clearly feel that two hours have passed. But take away all the surroundings, all the tools and all the work, and the awareness of time begins to slip out of consciousness. A person who spends a week in a dark cave will misjudge the elapsed time by several days. Even sitting in an empty room, you cannot tell whether one hour has passed or two. It turns out that the sense of time lives not inside consciousness but outside it.

    And we live quite comfortably with the fact that humanity's knowledge has long been stored not in brains but on external media; that our whole civilization, infrastructure, and communication are built on things offloaded from memory. This feels normal, because the media seem even more reliable and comprehensible than our own brains.

    But modern technology rests on a funny paradox. People do not trust their own consciousness and try to record everything outside of it, yet at the same time they consider human consciousness the most reliable controller. If humans are so unpredictable and the mechanisms so clear and dependable, why, for example, do military directives prohibit developing AI systems that operate without human control?

    Irrational pessimism amid the stream of optimism?

    Control, the cornerstone of this paradox of modern progress, is the last thing we are prepared to hand over to the external environment, even though we don't trust ourselves with it either.





    In life I am a pessimist, but I know it is better to look at everything with optimism. The more accessible technology becomes and the more everything is automated, the easier life gets: problems will crack like nuts. I am optimistic about technology the way I am about flying in an airplane. To fly in comfort, you have to believe you will land. That is easy, because the odds are very good. But there is always the slightest chance of crashing, and if faith in that chance overpowers the optimism, the flight turns into sheer hell.

    I believe that both optimism and pessimism about the influence of progress on us grow from the same place. I used the airplane example because I am an aerophobe. I tried to dissect my fear and realized that underneath it lay nothing more than the fear of a lack of control.

    I hated how primitive my brain turned out to be when the problem was solved by a simple psychological trick. What helped was an app that monitors the state of the aircraft and explains, as you go, all its sounds, banks, and so on and so forth. Having received the illusion of control in the form of knowledge, I got rid of the anxiety, trampling over all my rationality and logic. After all, knowing these things did not reduce the chance of crashing by a single percent, but psychologically it became easier for me.

    The fear of losing control and lacking information is irrational. People broadly divide into two types: those who as children slept facing the door, to see the monster the moment it came in, and those who slept with their backs to the door, so as not to see it under any circumstances. The monster eats you either way. The only question is how much information you receive before the inevitable death, so that it is less frightening. Although, in the grand scheme of things, what's the difference?

    If I wanted to create an AI meant to take control away from people, I would do everything to give the user the illusion of openness, understanding, and control.

    And it seems to me that the thinking of modern Luddites and technophobes rests precisely on this irrational fear of not knowing enough. Optimists are probably calm for the same reason: they are sure they will always know and control exactly as much as is necessary to benefit and live comfortably.

    Roughly speaking, if programming turns into an ordinary human conversation with a voice system, and it works really effectively, then why not? The pessimist will say: "How do we know the system understood everything well enough?" The optimist will say nothing, point to the successfully solved problem, and drop the mic.





    The fear of lacking knowledge comes from the understanding that the knowledge exists but cannot be obtained. After all, not knowing what you don't know is not so scary.

    In 1984, Thomas Pynchon wrote the essay "Is It O.K. to Be a Luddite?" and even then observed that the source of ignorance is an excess of knowledge:
    In the modern world, anyone who has the time, skills, and money to pay for access can reach whatever specialized knowledge he or she may need. The problem, in fact, is finding the time to read anything outside one's own specialization.

    In the essay, Pynchon traces how the fear of technology has changed over time. After the workers' revolts (hence the name "Luddites"), in which weavers smashed the looms that were taking away their work, technophobia was best captured in writers' books. Writers, unlike scientists, believe that ignorance is unavoidable and that there are plenty of things around them they will never understand. Scientists, at every moment, know exactly what will let them learn even more in the future; that is, for them, uncovering absolutely all knowledge is just a matter of time. To put it crudely, writers think they control nothing, while scientists think they control exactly as much as it takes to seize even more control.

    Back when the loom-smashing and the Industrial Revolution were a hot topic, Mary Shelley wrote Frankenstein; or, The Modern Prometheus. With some stretching, it is the first science fiction novel, and right away it is about a scientist killed by his own creation (sorry for the spoiler).

    Since then, pessimism and optimism have constantly traded places in science fiction about technology, because flying into space is cool and nuclear bombing is bad, yet both are products of modern progress. Over time, writers' technophobia turned from horror into something ever stranger: either from the realization that they were wrong and technology is nothing to fear, or, on the contrary, because everything is already lost, the points of no return have been passed, and pessimism has curdled into hopelessness:
    Will mainframes attract the same hostile attention as knitting frames once did? I really doubt it. Writers of all descriptions are stampeding to buy word processors. Machines have already become so user-friendly that even the most unreconstructed of Luddites can be charmed into laying down the old sledgehammer and stroking a few keys instead.

    With the proper deployment of budget and computer time, we will cure cancer, save ourselves from nuclear extinction, grow food for everybody, detoxify the results of industrial greed gone berserk; in short, realize all the wistful pipe dreams of our days.

    If our world survives, the next great challenge to watch out for will come (you heard it here first) when the curves of research and development in artificial intelligence, molecular biology, and robotics all converge. Oh, boy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long.

    In other words, the technophobes' fear has turned not just into a sense of hopelessness but into a gloating anticipation that the scientists will be left holding the bag, that their greed for knowledge and their self-confidence will backfire on them.





    The pessimism I have lost now looks like the lot of madmen who understand nothing (if you're a technophobe, go live in the woods and stop bothering people). It has shrunk into the attitude of "we want the best, but it will turn out as always." And points of no return are customarily spoken of in the future tense. To suggest that they have already been passed, and passed at a time when there were no computers, not even written language, seems absolutely wild and pointless.

    Yet exactly this unpopular opinion was voiced by the historian Yuval Noah Harari in his book Sapiens: A Brief History of Humankind. According to him, we built our Tralfamadorian super-engine ten thousand years ago, when we carried out the Agricultural Revolution, and our evolution has been running in reverse ever since. Supposedly the forager possessed the most developed brain in our history and lived far more comfortably and happily.

    But then he planted wheat and became dependent on it.

    The Agricultural Revolution was by no means the beginning of a new, easy life: the ancient farmers lived harder, and sometimes hungrier, lives than the foragers. Hunter-gatherers had healthier lifestyles, did not work as hard, found more varied and enjoyable things to do, and suffered less from hunger and disease. The Agricultural Revolution certainly increased the total amount of food at humanity's disposal, but more food did not mean a better diet. Rather, it translated into population explosions and pampered elites, and the average herder or farmer worked more and ate worse than the average hunter or gatherer. The Agricultural Revolution was history's biggest fraud.

    The pursuit of an easier life led people into a dead end. People are unable to foresee the full consequences of a decision. Each time, they seemed to be signing up for only a minor complication of their work: say, not just scattering the seeds, but hoeing the ground first. They told themselves: "Yes, it's more work. But what a harvest we'll gather! We won't have to worry about next year's crop. Our children will never go hungry again. We'll finally live well!"

    Each of those simple decisions was made with a simple, immediate goal: to fill stomachs, to gain a measure of safety. But taken together, they forced the ancient hunter-gatherers to haul countless vessels of water under the scorching sun to irrigate that damned wheat.

    Yet according to Harari, the ancient people who planted the wheat did not trap themselves. The first farmers were fine: they genuinely simplified their lives and gained the comfort their ancestors had sought for centuries. The trap closed on their great-great-great-great-grandchildren, who had already lost their ancestors' ability to live otherwise and to choose what was best. They were left with nothing but agriculture, and all the drawbacks of the new way of life gradually caught up with them.

    The theory is speculative, strained, and full of conjecture. And perhaps the creation of a super-automated system that solves all the problems of our descendants will, in the distant future, be judged the same kind of trap. But we will create it anyway, because progress, productivity, and constant growth are the new religion and the meaning of our atheistic age. And if things keep getting better and better for us right now, why worry for nothing? We still believe that just a little more, and everything will be perfect. We just have to work a little harder.

    But if you mute the optimism for a second and hold in mind the solutionist formula "technology solves the problems that technology itself creates," what happens? Nothing, except perhaps the realization that you are not moving under your own power but are being dragged along behind a speeding train.

    A little thought experiment. Imagine you have seen the future, and you know that in a hundred years technology will slip out of control, that the super-engine will blow up the universe during its test (or at least turn our descendants into stunted idiots). Would you change anything? Would you stop working on technology?

    Or add a couple of details. If you had a truly enormous influence on progress, would you stop it then? And if so, wouldn't you be troubled by the thought that your choice was already written into the timeline? That influence on the future doesn't exist? That knowledge doesn't really grant control, that it is simply knowledge for the sake of knowledge, and on the global scale it grants only passive contemplation of the monster entering the room?

    When I try to answer these questions for myself, I feel nothing but paralysis. It is like finding out about the crash of the airplane you are already flying in. The best you can do is not scream in panic, but order vodka and light a cigarette, even though it's forbidden.

    Most of our life is resignation to an inexplicable existence. Everything becomes easier if you just make the effort and consciously decide to enjoy this far-from-endless ride. After all, whenever you think anything through to the end, you arrive at the same conclusion: better not to think about it.
