On barriers to the use of sign systems in artificial intelligence

Why do we need sign systems?

A sign system acts as a carrier of thoughts, ideas, emotions, experiences, sensations, and the organization of memory: the products of mental processes which, according to modern science, take place in the brains of humans and higher animals. A sign system is a means of pointing to such products. At present, the only way to convey information about the results of thinking, memory, emotion, sensation, or imagination seems to be to encode that information in a sign system. We cannot (yet?) exchange thoughts, emotions, and sensations directly, without resorting to one sign system or another. We need sign systems to share the results of such processes. Sign systems accompany these processes, and there may well be feedback between the two.

Apparently, a thought can never be expressed precisely and unambiguously by a sign system alone; that is, encoding is an approximation, a model of sorts. Whatever a sign indicates can always be clarified further. No wonder there is an expression "to search for words": an attempt to express a thought with the help of signs. An absolutely accurate and unambiguous expression of thought in words most likely does not exist. To express a scientific idea, a scientist writes not a single word or sentence but a whole series of articles, each describing more closely and more accurately what he wanted to express in his work. The answer to the question of whether a sign is identical to the thought or emotion it encodes is most likely negative.

For example, consider the sensation of red. If the communicating agents know what red is, i.e., they have the corresponding sensory experience, they can use any suitable sign system to transmit information about that experience: say the word "red," or draw a red circle on a piece of paper and show this sign to each other. If there is no such experience, information about "redness" cannot be conveyed: it is impossible to explain red to someone who does not know what red is. You can try to explain that red is an electromagnetic oscillation with a wavelength of about 700 nanometers, but the sensation of "redness" will not arise from such information, and knowledge of what red is will remain inaccessible, because we do not exchange sensory experience directly; we exchange signs that point to it and, among other things, trigger similar sensory experiences in other communicating agents. That is, the sign is the "wrapper" of the product of a mental process, but not the product itself.

Understanding barrier

Hence the problem of the barrier of understanding. Modern AI technologies are quite successful at tasks related to the study of sign systems. The success of AI in recognizing images (signs) is obvious: written text (OCR), speech (Alice, Siri), music (Shazam), pictures; and in modeling natural languages: part-of-speech tagging, identifying sentence constituents and proper names, machine translation. Yet all of these are examples of studying and working with sign systems, nothing more. Humanity, in general, is constantly advancing in the study of sign systems. The emergence and development of writing, science, culture, art, and sport are all closely connected with the invention, use, study, and development of sign systems.

Computational capabilities based on operations with signs are also growing: from the Babylonian abacus, invented around 3000 BC, through Blaise Pascal's Pascaline and Leibniz's arithmometer in the 17th century, to modern computing technology. And while progress in studying and working with sign systems is obvious, the modeling of mental processes still runs into seemingly insurmountable difficulties.

In humans (and animals?), a sign is automatically linked to a product of thinking, an emotion, and so on, because the sign arose through evolution precisely to encode the latter. AI has no such products, so the signs it recognizes are not tied to anything and encode nothing; they remain "bare" signs that point to nothing at all and carry no load of meaning or feeling. There is no understanding of the recognized sign. A person who has experienced fear knows what it is and can try to communicate this, i.e., to transfer and preserve information about his sensory experience by means of some sign system. For example, he may:

  • draw a picture on the wall of the cave;
  • write an essay in a natural language;
  • compose a poem;
  • create a piece of music;
  • put a smiley sign in the chat;
  • finally, do nothing at all (an empty sign).

AI does not know what fear is, and accordingly it cannot match any sign to fear at all. Nor can it perceive any recognized sign as fear. An AI-driven autopilot that runs into an obstacle on the road will understand nothing, feel nothing, think nothing; it will not be upset, pleased, or frightened; there will be no reflection. Thus, for an AI, any sign refers to a non-existent product of non-existent mental processes.

Recognition barrier

This gives rise to another barrier: the barrier of sign-recognition quality. Quality here means the fidelity, correctness, and accuracy of recognizing a sign relative to a standard. Quality is limited by the barrier of understanding: AI cannot assess the quality of a recognized image, because there is no feedback from thinking and other mental processes that could correct recognition errors. Faced with the distorted sentence "ama ela amu," a person who speaks Russian will most likely be able to restore the damaged word-signs.

For example, the meaning "мама мыла раму" ("mom washed the frame") can be restored, since this sequence of signs encodes a familiar primer phrase written with exactly that set of words. A person restores the incorrectly recognized word-signs: "ama" -> "mama," "ela" -> "myla" (washed), "amu" -> "ramu" (the frame). For an AI, all three words of the sentence are simply absent from the Russian dictionary. There is no guessing, no "fitting" to some clear, well-known meaning, because the AI's memory in fact stores no meanings at all. At the same time, if a person has no familiar thoughts encoded by similar words, "ama ela amu" will not be associated with any thoughts either, simply because those thoughts are not there; that is, the processes of thinking and encoding are interconnected and constantly work in tandem. Apparently this is a product of evolution: any sign encountered must be decoded, "fitted" to some known meaning; there is an automatic need to understand the sign, to recognize it, to connect it with something familiar and comprehensible, to understand the benefit or harm that the sign encodes. A man smiles at me: good; a dog barks at me: bad. If you do not know what is good and what is bad, neither a smile nor barking makes any sense. AI does not know what is good or bad for it.
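The sign-level half of this restoration, and only that half, is easy to mechanize. Below is a minimal illustrative sketch (the toy transliterated dictionary and the distorted words are assumptions for the example, not a real system): each distorted word is fitted to the closest dictionary entry by surface similarity alone. No meaning is involved anywhere, which is exactly the point of the barrier described above.

```python
# Purely sign-level "restoration": match distorted words against a
# dictionary by string similarity alone, with no notion of meaning.
import difflib

# Toy wordlist (transliterated Russian); an assumption for illustration.
dictionary = ["mama", "myla", "ramu", "mylo"]

def restore(distorted_sentence):
    restored = []
    for word in distorted_sentence.split():
        # Pick the closest dictionary entry by character overlap.
        matches = difflib.get_close_matches(word, dictionary, n=1, cutoff=0.5)
        restored.append(matches[0] if matches else word)
    return " ".join(restored)

print(restore("ama ela amu"))  # -> mama myla ramu
```

The sketch "succeeds" only because the answer happens to be the nearest string; it has no way to notice when the nearest string is the wrong meaning, which is the false-trail case discussed next.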

When an attempt to match a sign to a meaning fails, a person may have other thoughts about the sentence, which he might express, for example, like this: "I don't understand what this is, I'd better leave"; "what could it mean, I wonder?"; "what kind of rubbish is written here?"; "perhaps 'mom washed the frame' was meant; I'm not sure, but on a test I'd answer that way"; and so on. That is, in any case some conclusion about the perceived signs will be drawn. A false trail is also possible. For example, if the source of the text is an electronic medium where spelling rules are often neglected, the words "ama" and "amu" might actually have been capitalized, "Ama" and "Amu," i.e., proper names. The interpretation of such a set of signs would then have nothing to do with the phrase "mom washed the frame." One could fantasize and propose such a reading: "Ama ela amu," where a certain Ama, a female from a certain Amu tribe, ate something. From here another thought may arise: Ama is a name from a foreign language, since my knowledge tells me there is no proper name "Ama" in Russian, and I have never heard of an Amu tribe. Thus the thought process spins on and on until a conclusion is reached. The correctness or falsity of the conclusion is unimportant in itself; what matters is that a conclusion is made. It can be right or wrong; a person can be right or mistaken in his assumptions. That is, incorrect, inaccurate, erroneous, or false signs can lead to false conclusions. Entertainment and intellectual games are built on this: given incomplete or contradictory information, the player is asked to restore the original meaning. We have just played such a game above: what was the message in "ama ela amu," "mom washed the frame" or something about Ama of the Amu?

See also the game of broken telephone. In image recognition this is the "I thought I saw it" effect; in sound recognition, "I thought I heard it." A good example of auditory artifacts is word recognition in tonal languages such as Chinese. The word "shi," pronounced with a rising tone (written shi2), means the numeral "ten"; with a falling tone (shi4), it is the copula "to be." The pronoun "I" is pronounced "wo" with a falling-rising tone (wo3). So the phrase "wo3 shi4 ..." signals to a Chinese speaker the beginning of a sentence like "I am ..." or simply "I ...". The phrase "wo3 shi2 ...", by contrast, sounds meaningless and leads the listener down the wrong track: it builds the thought "I ten ...", which is ungrammatical in Chinese. But since thinking strives to give meaning to signs, it will weigh the most plausible interpretations of "wo3 shi2 ..." in order to arrive at the expected result. The interlocutor may either guess on his own or ask: "did you perhaps mean wo3 shi4 rather than wo3 shi2?"

That is, an attempt will be made to restore the distorted signs so as to arrive at a meaning the listener finds adequate. AI is incapable of this, since it has no point of view. The other senses produce similar recognition artifacts.
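The listener's guess ("you probably meant wo3 shi4?") can itself be caricatured in code. The sketch below is an illustrative toy, not real speech processing: the tiny phrasebook of "expected openings" and the tone-variation search are assumptions made for the example. It tries the heard syllables as-is and, failing that, varies one tone at a time looking for something it already knows.

```python
# Toy model of fitting a mis-heard tonal syllable to an expected phrase.
# Pinyin syllables carry tone digits 1-4 (e.g. "shi4" = falling tone).

# Tiny "phrasebook" of openings a listener expects (an assumption).
known_openings = {
    ("wo3", "shi4"): "I am ...",  # wo3 shi4: grammatical opening
}

def interpret(heard):
    """Return (meaning, corrected signs); try nearby tones if needed."""
    if heard in known_openings:
        return known_openings[heard], heard
    # Vary the tone of each syllable, mimicking a listener who asks
    # "did you perhaps mean wo3 shi4 rather than wo3 shi2?"
    for i, syllable in enumerate(heard):
        base = syllable.rstrip("1234")
        for tone in "1234":
            candidate = heard[:i] + (base + tone,) + heard[i + 1:]
            if candidate in known_openings:
                return known_openings[candidate], candidate
    return None, None  # no meaning could be fitted: a "bare" sign

print(interpret(("wo3", "shi2")))  # fits shi2 -> shi4
```

Note that the "repair" works only because a meaning was stored in advance; with an empty phrasebook every input stays a bare sign, which is the essay's claim about AI in miniature.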

Once again: for an AI, signs cannot be tied to anything, hence the barriers of understanding and of sign-recognition quality. For an AI, an elephant that has somehow turned up in an ordinary city apartment, as reported by an image-recognition algorithm, is a normal situation. For a human it is absurd, because common sense says that even if the elephant could somehow fit in the apartment, it is entirely unclear how it got there: the apartment door is too narrow for it to enter, the lift's capacity does not match an elephant's mass, and so on through the constraints of the real world. Therefore an elephant in a room in a picture is either an illustration for a fairy tale, or a photomontage, or simply a recognition error: "it seemed to me that the picture showed an elephant"; a conclusion has been drawn. AI has no such critical thinking; it cannot grasp the meaning of what it has recognized.

Is studying sign systems a dead end?

Studying sign systems does not allow us to study thinking and the other mental processes occurring in the brain directly. Does this mean that studying sign systems is pointless? No, it does not. Sign systems are our means of encoding and transmitting thoughts, so we have virtually no choice but to study the former. The humanities (linguistics, psychology, sociology, and related sciences) simply have no other way.

Studying sign systems is the only work we can do to learn to understand our thoughts better. It does not answer questions about the structure and properties of thinking; those questions remain open for now. Signs are pointers to thoughts, perhaps even triggers of thoughts, but not the thoughts themselves, which means we study only pointers to entities, not the entities themselves. In studying sign systems we study the properties of sign systems, not the properties of the entities these sign systems are used to encode.

For example, in studying the vocabulary and syntax of Russian we study the sign system of the Russian language, not the thoughts expressed by means of it. Admittedly, it is not even clear at present what the structure and properties of thoughts, as opposed to signs, are. The only obvious thing is that a thought can be expressed, with some degree of accuracy and approximation, using presumably any sign system. For example, a thought expressed in Russian by words meaning "sadness" or "sadly" can be conveyed with the help of sign systems such as:

  • the Russian language: sentences meaning "I am sad," "they are sad," etc.;
  • the English language: sentences such as "I feel sad," "they look sad," etc.;
  • a painting in which cold tones prevail;
  • a piece of music in which minor keys prevail;
  • a "sad" dance with smooth, "sad" movements;
  • a photograph of the sad face of a person or an animal;
  • the emoticon :-( in the modern culture of electronic communication.

The accuracy with which these sign systems transmit this thought varies. The capacity of sign systems to transmit thoughts matches the need to transmit them as well as is at all possible given the capabilities of a living organism: the sense organs, coloration, the expressive means of dance, pheromones, and so on. That is why natural languages contain so many ambiguities, uncertainties, and redundancies; all of it serves a single purpose, to encode and convey thoughts, emotions, and the rest as accurately as possible, even at the cost of redundancy and other inefficiencies in the amount of information transmitted and the time of transmission. People constantly clarify the information they receive and try to make sure they have understood the recognized signs correctly and found the meaning in them: they ask for phrases to be repeated, re-read texts, listen to music again.

A young man reads on the Internet that a girl straightening her hair means she likes him. Silly? Perhaps, but such is our nature: to match signs with meanings. To think, to seek solutions, to feel, to recognize signs and match them with meanings, to set goals, to live: mental processes do not stop throughout life. AI does nothing of the sort with its input and output information, since there is nothing to compare it with and nothing to tie it to.

Sign systems are imperfect, but they are the only thing evolution has given us for exchanging the products of mental processes. Who knows, perhaps nature has already tried other methods of transmission, but they proved unsuccessful, and in the end it was sign systems that took root.
