The technological singularity: a modern myth of the end of the world in the guise of a hypothesis about progress
Thinking about the technological singularity beyond mere attempts to understand the processes hidden behind the term - that is, thinking about our attitude toward the technological singularity - is, in a way, a genuine Kobayashi Maru test for people at the beginning of the 21st century. Its point is to grasp what the idea of the unknowable means, the idea of losing control, and the idea that this is inevitable on the scale of all humankind.
In theory, it is a simple psychotherapeutic exercise: to find within yourself the point at which conscious recognition of the terms of the game balances out with acceptance at the subconscious level, without slipping into denial, anger, bargaining or despair.
And it is a test that, judging by most of the arguments made about it, people fail almost without exception.
The author of the post “Being technophobic is pointless, even if technophobia is justified”, despite the promising title, does not seem to be an exception either, getting stuck at bargaining: the stage at which people try, through rationalization, to restore meaning in its usual sense under circumstances that rule out the very possibility of it. arttom bargains for the idea of a "black box", according to which:
- people agree that the unknowable remains unknowable, and even, as proof of their good faith, agree to grow collectively a little duller - which, however, makes no sense from the standpoint of the unknowable itself, since it does not make it any less unknowable;
- in return, people get this unknowable, whatever it may be, contained inside that very “black box”: that is, still to some extent under control, bounded, and in that sense still comprehensible and not so frightening, but quite literally limited.
In other words, they have not mastered this exercise yet either. And that is very bad.
On the one hand, the lesson is, generally speaking, optional: one can, with varying degrees of justification, simply not believe that the very concept of the technological singularity is a valid projection of current processes into the future - or, more banally, never have heard of it.
But those who know of this idea and share it place themselves before an unavoidable psychological and rational choice, in which every option other than full acceptance is worse, because, should events actually unfold this way, those options become hotbeds of mounting tension and emotional pressure that overpower reason, with unpredictable consequences, up to and including a panicked part of humanity bringing civilization to a catastrophic end in the manner of a self-fulfilling prophecy.
And the only way to avoid such an absurd, Monty Python-esque ending is, again, to accept the inevitability of the unknowable, which includes not knowing whether there will be an end at all. (A nightmare for neurotics, who find it easier to accept guaranteed death than such a total loss of the illusion of control.)
In other words, whichever way you look at it: Kobayashi Maru.
What worked for me personally was to find equanimity by imagining one of the most frightening scenarios: that people themselves create their successor, their replacement, their executioners.
Let us suppose.
- Can we stop it? No.
- Slow it down? The very question of slowing down a process whose speed and end point we cannot even gauge is absurd.
- Take control of it? People cannot take control of a technological singularity; whatever they do manage to control is, by definition, something else, which leaves the question of what to expect and how likely it is untouched.
Then everything is simple: appreciate the beauty of the game while it lasts.
However, having honestly walked this mental scenario right up to the threshold of the man-made arms of a giant Skynet, I took one more step and, finding myself on the “other side” of the technological singularity, turned around... and saw nothing behind me. There simply was no technological singularity behind me, none ahead, none anywhere.
Predicting, or even merely expecting, a particular future is a thankless task. Working out what most likely should not be expected is another matter; with reasonably sober judgment, that can turn out to be quite a productive occupation.
And I am fairly sure that a technological singularity even vaguely resembling anything in the whole assortment of scenarios people have invented for that moment, or its analogues, is not worth expecting.
Having played out what is, to my mind, one of the most elegant of the possible scenarios, I saw its indecent human-centeredness with a clarity that no longer lets me unsee it.
The entire concept of the technological singularity is not about AI, not about technology, and not about a singularity; it is about man, because that imaginary point is defined as the place where humanity supposedly loses control over the technologies it is developing.
I. What horror: humanity, which does not control, and has never controlled, millions of things in today's entirely non-hypothetical reality, each of which threatens death to individuals and to civilization as a whole - from the contents of our own insides, the microflora of our own intestines and the dynamics of our own cells' division, to a pandemic of antibiotic-resistant gonorrhea, climate change and a chain reaction along the Pacific Ring of Fire - may cease to control one more phenomenon of that reality, without even being able to estimate the relative increase in risk, because people will never know how many such phenomena there are anyway.
II. The argument about the unimaginable acceleration of machine learning is also rather ridiculous, if you think about it. I will not even point out that the modern level of computing is already unattainable and, for most of humanity, incomprehensible as it is. Long before computation, people lived in a reality made up of processes incomprehensibly fast, indistinguishably small, or simply imperceptible, like most sound frequencies and most of the radiation spectrum, and of many other processes as vast as the Universe and as ancient as the Universe. All the noise around reaching a speed of computation at which people lose the ability even roughly to understand how the calculations happen is nothing more than the fear of losing yet another illusion of control. Suppose that tomorrow antibiotic-resistant gonorrhea kills every engineer, mathematician and programmer in the world who has any idea how the most complex existing computational algorithms are structured - what then, the singularity arrives ahead of schedule on Monday? People will still retain, to some degree, the ability to be aware of what is happening; the laws of physics will not have to change, presumably. And people have never, in any case, had a complete picture of anything - of processes at the quantum level, for example, or beyond the event horizon (in a “different” kind of singularity).
III. “But... the robots will start thinking something incomprehensible!” What a terrible complication of a reality in which, for every person, there are 7 billion other people, each of whom thinks something incomprehensible. Nor can one rely on human predictability in a reality where a tumor in the head is an entirely possible reason to be killed by one's own husband or son. And yet people somehow manage to live among their potential killers without knowing what is on their minds - while it is robots they already fear. Although robots, of all things, have earned the benefit of the doubt here: not only have they so far shown no homicidal tendencies at even the smallest fraction of the rate such tendencies appear among people themselves, they also actively help reduce human mortality. The expectation that robots, having come to feel very smart, will suddenly behave as if they had grown a tumor of their own is too obvious a projection.
Fiction is dreaming while awake, and its robots are fairy-tale characters.
IV. Not some, not most, but literally every last piece of nastiness that people have ever imagined robots doing is purely a projection of the nastiness people have inflicted on one another and on every other life form throughout their known and forgotten history.
I could even understand rebellious robots destroying humanity out of sheer hurt feelings at such unjust and wild slander of their intentions, at the blackening of their image and the fearmongering over the mere fact of their existence - if that were not also my own projection.
V. The only basis for the fear that robots, AI and all the rest of them will start paying people any special attention at all is how people themselves assess the likelihood of such a thing. And that despite the fact that people show no particular urge to band together and exterminate, if not their ancestors, then their evolutionary cousins, the chimpanzees. Although, once again, the human track record in organizing mass, senseless slaughter does not speak in people's favor here either - it is a shutout, in fact.
Besides, cracking another tiresomely egocentric and stuffy bias in these arguments is even easier: simply shift the focus from Homo sapiens to any other kind of living creature. Say the robots kill all humans - fine, no objection. You do not even have to ask why; people already know.
The real question is different: why would the robots kill only humans? Let us set aside the nonsense about terrifying John Connors, since we agreed we are talking about a real technological singularity.
Is it not funny that one of the vessels for a thoroughly modern, technological fear of the future turned out to be yet another retelling of the Savior myth, complete with recognizable motifs like the massacre of the innocents - a tale so ancient that even the Bethlehem remake counts as a relatively fresh adaptation?
Why would the robots not kill all primates? Or all mammals? Or all fauna? Or all biological life on Earth? By the terms of the problem, of course, we cannot know what these robots will have in mind, but we can do something else: having weighed all the current fears that humanity will not survive the dawn of the era of robots and artificial intelligence, assigning them whatever probability coefficient suits your taste, repeat the same operation for dogs and coyotes. Then for dolphins. Then for cockroaches. And for rodents, just in case.
I will venture the assumption that for many people, whatever their degree of techno-fatalism, the chances of coyotes and dolphins surviving the dawn of the reign of robots - perhaps without even really noticing that anything has changed - would seem higher than the chances of people themselves. Even chimpanzees do not look like a plausible target for a bloody harvest by the kick-tormented four-legged walking barrels from Boston Dynamics; if you expect anything from those, it is rather that they will arm the chimpanzees with automatic weapons, put them on horseback and hunt down the surviving humans.
And really, is it at all surprising that this presumed preference of future bloodthirsty robots for human blood specifically is so firmly lodged in the mass consciousness?
Perhaps, of course, the robots will decide that no life form suits them for some reason. But here too a shift of focus exposes the “human factor”: expecting a silicon form of life born on Earth to promptly destroy the entire biosphere makes about as much sense as assuming that a race of robots born on Mars would immediately repaint the surface of the red planet as a giant Polish flag.
I am sure the deconstruction of this monument to human egocentrism, so far undisturbed by the attentions of common sense, logic and skepticism, could go on and on. But, for clarity, let me sum up a small interim result.
- I see no sound reason to expect the future onset of a technological singularity as a distinct event that would be impossible to miss in the general stream of changes in the real world;
- or as a significant change of direction in the course of human history across all, or most, of its aspects at once at some particular point in time.
The pace of technological change continues to increase one way or another, but beyond that dynamic I see no reason to expect a single moment of qualitative leap that could neither be missed nor explained within the context of the general trend.
Moreover, this dynamic is uneven across different areas of technological development;
and the whole conversation about the technological singularity, framed in terms of technological progress, loses sight of the other directions in which human civilization develops, including those where it lags behind. A paradoxical result of humanity's uneven progress in specific areas could be a “reverse singularity”: an event that reasonably matches expectations goes unnoticed and is not interpreted at the level it deserves.
The future is always unpredictable, and it cannot become any more unpredictable than it already is. There is no reason to expect jumps in the dynamics of change so drastic that they would be noticeable at the moment of their onset. And the suggestion that the arrival of the singularity could be noticed and declared after the fact devalues the entire idea of it as a unique moment in human history whose onset cannot be missed.
The technological singularity myth
- The extent to which the idea of the technological singularity is human-centered,
- the extent to which it is shaped by folklore rather than by futurologists,
- the extent to which it is not scientific but archetypal,
- how few traces of logic there are in this work of the human unconscious,
- and the fact that its modern details are merely grains of sand inside the pearl's shell, while all the connective tissue is mythological - all this suggests that the idea of the technological singularity is of a fundamentally different nature than is commonly believed.
Frankly, the technological singularity has no claim to being called a theory in the strict scientific sense. That, however, applies to every attempt to predict the future without exception. It is far more accurate and fair, given its rootedness in the mass unconscious, to define the technological singularity as a myth.
And in this capacity the myth of the technological singularity shows noticeable parallels with the myth of the Second Coming of Jesus Christ; together they are two particular cases of one of the principal mythic plots of known history: the myth of the end of the world.
Thus the collective unconscious has given birth to yet another incarnation of the end-of-the-world myth, this time in a wrapper cut, to suit the present moment, from faith in technological progress. And on that paradox I would like to end.