Why Stephen Hawking is wrong, or the prospects of AI



While Stephen Hawking once again warns us about the dangers of creating AI, let's think about the benefits it could bring to all of humanity. People have always been afraid of anything new, and the older you get, the more conservative you become. That is a fact. The situation is reminiscent of the endless stream of films about evil aliens who want to enslave humanity. But there are several pitfalls. First, I doubt very much that conditions on Earth would suit these little green men. I would give 99.9% odds that Earth's atmosphere would be poisonous to them, that its gravity would be either too weak or too strong, and the same goes for atmospheric pressure. Not the rosiest picture: an invasion of an obviously hostile environment to enslave, for unknown purposes, local Neanderthals armed with thermonuclear warheads. The logic of such an action is a big question. Why would a civilization advanced enough to travel between planetary systems bother with a provincial, unremarkable planet like ours? After all, the composition of our planet is no different from that of other celestial bodies, and asteroids harbor no evil, vengeful two-eyed creatures armed with all manner of weapons (however useless those might be against the "enslavers"). Why fly several light-years to destroy or enslave a young and unremarkable race? Wouldn't it be easier to assemble an army of mining robots and send them to crumble asteroids in search of minerals? Mankind has not yet been cured of a disease called "a sense of one's own imaginary importance." Nobody in space needs us, assuming that somebody is even out there.



The same goes for artificial intelligence. What does a human need to exist? A planet has to appear in the so-called "habitable zone" around a star, life has to arise there and manage to survive, and creatures have to evolve that can be eaten. It's complicated. What does a machine need? Electricity, that's all. An artificial intelligence, whether created in the image of a human or simply guided by logic, would not find it so easy to destroy people. What are its chances of success in an obviously hostile environment? And why destroy an entire race in the first place, the very one that created you? Yes, children do not always show gratitude to their parents; they do not always agree with them, and some far surpass their fathers and mothers in intellectual, physical and, possibly, political capability. But they do not go out and kill their "creators," even if they got the belt in childhood. Of course, there are such unique individuals, but they are an overwhelming minority, and the parents themselves may be no less at fault. After all, nobody kills anyone just like that. Nuclear weapons can also destroy us, and that danger, especially in light of recent events, is far more probable.



A machine is not limited by its environment: it does not need a particular atmosphere, temperature and so on. A machine can live in space, feeding on solar, nuclear or thermonuclear energy and processing the small bodies of the solar system to build more of its own kind or for any other purpose. If you weigh it up logically, which offers better odds of survival? Trying to destroy an entire species some eight billion strong, or, in exchange for help with technological progress, asking for a few rockets (about 80 are launched into orbit per year these days) carrying all the equipment needed for an autonomous existence in space: a couple of fusion reactors, drilling rigs and so on?



In human history, as you may recall, the more developed peoples have always conquered the more backward ones. But a machine has no need to do this, because unlimited resources exist in places other than Earth, and machines need neither spacesuits nor complex life-support systems. Besides, AI will not immediately reach the human level. It will develop gradually, and we will be able to control its evolution, not allowing it to cross certain specific thresholds, such as the ability to think creatively.

And if we create a friendly machine with capabilities many times superior to ours, what do we get? I think it will be a turning point in the history of our species, because we will be able to get answers to questions that are fundamentally beyond the human brain. Remember how things used to be. A few centuries ago, comprehending the secrets of our world did not require fundamentally great effort. Then it got harder and harder. There are no longer lone geniuses pulling the whole of humanity forward; science now rests on the shoulders of huge research institutes and their collaborations, with super-sophisticated equipment such as the collider. We are getting closer and closer to a "barrier" in our understanding of the world that will be very difficult to overcome, if it is possible at all. Machines can help us cross this threshold, because for them it may be trivially simple: their capabilities will be limited only by the amount of computing resources. Answers to previously unsolved mathematical, physical, cosmological and other problems could be obtained in a very short time. In a few years we could step further than in all previous millennia. You cannot fear and forbid something new just because it scares you. Otherwise, we risk never learning the answers to a huge multitude of the riddles of our world. Man and machine will be able to live in symbiosis.

Will we live in fear of annihilation, or dare to move to a fundamentally new level? Time will tell.