Does the future belong to AI?
This post is a collection of my thoughts on artificial intelligence, inspired by articles in the latest issue of Esquire. The magazine claims that only a few decades will pass before the world sees a "real" AI: one that can finally pass the ill-fated Turing test, and then completely surpass the human mind.
So, I think this is the most typical science-fiction nonsense (even if it is printed in such a respectable magazine). We are not capable of creating AI equivalent to human intelligence. Why?
First, in trying to create AI, or even just to analyze what intelligence is, we are (to put it somewhat simply) trying to understand the brain with the brain. And what happens when the object and the subject of cognition coincide? The resulting knowledge becomes subjective, so we cannot speak of obtaining true knowledge as such. In our case, this means we cannot even understand how our brain works (or rather, how electrical impulses turn into emotions, feelings, and thoughts).
Second, imagine that we have built a brain with all its neurons (whether a physical one or a computer model is immaterial). What next? How do we determine that the impulses we send to particular regions, which then propagate further, have created intelligence? Or, just as important, that they have not?
Third, as we already know, a computer is inherently incapable of creating something genuinely new; in other words, it is incapable of creative activity. But how can intelligence exist without creativity?
Of course, you can simply take the basic functions of intelligence (recognition, systematization, storage, and so on) and try to imitate them. But what is the point? Is the sum of the components equal to the whole? And it is quite possible that there are many functions we do not even know about, but simply use.
In short, let the computer do what it does perfectly: calculate and store information. And let us discard all this nonsense about machines taking over humanity and creating a super-intelligence that will enslave the world. That, alas, will not happen.
UPD: Thanks to everyone for the feedback, an extremely interesting discussion is unfolding :) And never mind the downvotes :)))