# Algorithm of thinking and consciousness

This article outlines an algorithm of thinking and consciousness. I offer my answer to the question of how thinking and consciousness work, and I demonstrate an algorithm that can truly, creatively think and that has real consciousness. The article is aimed at programmers and consists of two main parts. The first part is purely technical: it contains a description of the algorithm, a list of its properties and an example of practical application. The second part contains explanatory theses and a solution to the problem of constructively axiomatizing consciousness. The algorithm is a meaningful text that speaks for itself, so the comments are kept to the practically necessary minimum.

## Algorithm Description

The algorithm is described in a home-made mathematized formalism, top-down: the final, abstract formula is given first, and then the algorithm is taken apart in the order in which the calls are made, with comments. The "assembled" algorithm is a recursive function of the following form:

t_{n+1} = composition[abstraction[deduction[t_n]]]; t_0 = s; t_n, s ∈ S; n ∈ ℕ

Computing this function is thinking. As you can see, three operators appear in the formula: composition[], abstraction[], deduction[]. Also present are the seed variable s ∈ S, the set S of strings of a special kind, and the step number n ∈ ℕ. Next, we consider each part in detail, starting with the set S and its elements.

To specify the set S, we must define the syntax in which its elements are written. The elements of S will be called strings. Any string from S consists of a hierarchy of brackets "(", ")", with arbitrary symbolic identifiers written inside the brackets. To avoid overloading the term "identifier", which may be needed for other purposes, the symbolic identifiers inside brackets will be called "mnemonics". Mnemonics are written in Latin characters "A"–"z". Mnemonics inside brackets may be separated by a comma ","; if the length of mnemonics is fixed, which is stipulated separately, the separator is omitted. Mnemonics are written only inside brackets. A string can have nested brackets; the hierarchy of brackets in a string is arbitrary, but every opening bracket must have a matching closing one. In this article I will use only lowercase Latin letters for mnemonics, with fixed mnemonic length: one letter corresponds to one mnemonic, and I do not write a separator. String examples:

() ≡ ∅ - the empty string.
(a) - a string containing one mnemonic "a".
(aa) - a string containing two instances of the mnemonic "a".
(ab) - a string containing two mnemonics "a" and "b".
((a) (a)) - a string containing two instances of the mnemonic "a" and nested bracket levels.
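To make the syntax concrete, here is a minimal parser sketch (my own illustration, not part of the formalism): a string such as "((a)ab)" is read into nested Python lists, where a one-letter mnemonic is a `str` and a bracket is a `list`. Fixed-length one-letter mnemonics without separators are assumed, as in this article.

```python
def parse(text):
    """Parse a string like '((a)ab)' into a nested-list tree."""
    text = text.replace(" ", "")  # spaces in the examples are cosmetic
    pos = 0

    def read_bracket():
        nonlocal pos
        assert text[pos] == "(", "a string must start with an opening bracket"
        pos += 1  # consume '('
        items = []
        while text[pos] != ")":
            if text[pos] == "(":
                items.append(read_bracket())   # a nested bracket component
            else:
                items.append(text[pos])        # a one-letter mnemonic
                pos += 1
        pos += 1  # consume ')'
        return items

    result = read_bracket()
    assert pos == len(text), "trailing characters after the outer bracket"
    return result
```

For example, `parse("((a)ab)")` yields `[["a"], "a", "b"]`, and `parse("()")` yields the empty list.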

Nested brackets together with their contents, as well as individual mnemonics, will sometimes be called "string components" where such a generalization is needed. For example, the string ((a) ab) contains four components, among them: two components "a", one component "(a)" and one component "b".

Strings that coincide up to a permutation of components within the string are considered identical. Examples of identical strings:

(ab) ≡ (ba).
((a) (b)) ≡ ((b) (a)).
(abc) ≡ (bac) ≡ (cba) ≡ (acb) ≡ (bca) ≡ (cab).
((a) (ab)) ≡ ((a) (ba)) ≡ ((ab) (a)) ≡ ((ba) (a)).

Strings can contain any number of identical, repeating components; in this case an abbreviated notation is allowed, using a repetition index placed immediately to the left of the component, without a separator. Examples:

(aa) ≡ (2a).
(aabb) ≡ (2a2b).
((a) (a)) ≡ (2(a)).
((aa) (aa)) ≡ (2(2a)).
(aa (bb) (bb) (ccc) (ccc) (ccc)) ≡ (2a2(2b)3(3c)).

In cases where a string contains empty components, for example (a ()) or (a () () (b)), the following identities hold: (a ()) ≡ (a), (a () () (b)) ≡ (a (b)); that is, empty components are discarded.
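The identity rules above (invariance under permutation of components, discarding of empty components) can be sketched as a canonical form. The nested-list representation is my own illustration; the sorting key is just one deterministic choice of canonical order:

```python
def canonical(line):
    """Return a canonical nested-tuple form of a string: empty components
    are discarded and components are sorted, so strings identical up to
    permutation compare equal."""
    def key(comp):
        # render a component to text so mixed str/tuple items sort totally
        if isinstance(comp, str):
            return comp
        return "(" + "".join(key(c) for c in comp) + ")"

    items = []
    for comp in line:
        if isinstance(comp, str):
            items.append(comp)          # a mnemonic is kept as-is
        else:
            c = canonical(comp)
            if c != ():                 # (a ()) ≡ (a): drop empty components
                items.append(c)
    items.sort(key=key)
    return tuple(items)
```

With this, `canonical(["a", "b"]) == canonical(["b", "a"])` realizes (ab) ≡ (ba).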

Definition . The set S consists of all possible strings that satisfy the above syntactic criteria, including the empty string.

The operators deduction, abstraction and composition are defined on the set S. Operator arguments are enclosed in square brackets [], because parentheses are reserved for string syntax. The term "operator" is used as a synonym for "function".

Deduction operator. Definition: ∀s ∈ S, deduction_k[s] ∈ S, k ∈ ℕ, k > 1; deduction[s] ≝ deduction_2[s]. It takes a string s from S as its argument and returns a string from S. Action: the operator repeats k times each component of the string and the entire string itself, and the resulting construction is framed by common outer brackets. Duplication begins with the deepest component in terms of nesting; the entire string is duplicated last. For immediate practical purposes it is enough that k = 2, so I have defined the special case deduction[s] ≝ deduction_2[s]. Use of deduction[] implies k = 2, that is, the deduction[s] operator doubles all components of the string s. Examples:

deduction[(a)] = ((aa) (aa)).
deduction[(aa)] = ((aaaa) (aaaa)).
deduction[(ab)] = ((aabb) (aabb)).
deduction[(a (b))] = ((aa (bb) (bb)) (aa (bb) (bb))).
deduction[((a) (b))] = (((aa) (aa) (bb) (bb)) ((aa) (aa) (bb) (bb))).
deduction[((a) (b (cc)))] = (((aa) (aa) (bb (cccc) (cccc)) (bb (cccc) (cccc))) ((aa) (aa) (bb (cccc) (cccc)) (bb (cccc) (cccc)))).
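Under the same assumed nested-list representation (a mnemonic is a `str`, a bracket is a `list`), deduction with k = 2 can be sketched directly from the definition:

```python
import copy

def deduction(line):
    """Double every component of `line` (deepest first), then duplicate the
    whole line and frame the two copies in common outer brackets."""
    def doubled(l):
        out = []
        for comp in l:
            if isinstance(comp, str):
                out += [comp, comp]            # double the mnemonic
            else:
                d = doubled(comp)              # double inner components first
                out += [d, copy.deepcopy(d)]   # then duplicate the bracket itself
        return out

    body = doubled(line)
    return [body, copy.deepcopy(body)]         # duplicate the entire line
```

For example, `deduction(["a", ["b"]])` reproduces the (a (b)) example above: ((aa (bb) (bb)) (aa (bb) (bb))).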

Abstraction operator. Definition: ∀s ∈ S, abstraction[s] ⊂ S. It takes a string s from S as its argument and returns a set of strings. Operating principle: from the source string, the abstraction operator creates a set of strings using a special operation - taking identical components out of brackets. The factoring operation applies only to nested brackets located at the same nesting level. The general principle of factoring: if any combination of brackets located at the same level contains identical components, then any set of those identical components may be taken out of the brackets, and the components that remain are combined under one common bracket at the same level. Consider an example: the string ((ab) (ac)). At one level it contains two substrings, (ab) and (ac), which share the mnemonic "a"; this mnemonic can be taken out of the brackets, and the result is (a (bc)). As you can see, the remaining mnemonics "b" and "c" are united under common brackets. A less obvious example: the string ((aa) (aa)) contains the substrings (aa) and (aa), and here there are two different factoring options. In the first, only one mnemonic "a" is taken out of each substring; in the second, the group "aa" is taken out. Consider both options in more detail.

The first option, a step-by-step demonstration:

1. Step one, choose what to take out: one "a" from each substring of ((aa) (aa)).
2. Step two, take out the chosen components: (a (... a) (... a)).
3. Step three, combine the remainders under common brackets: (a (... a ... a)).
4. Result: (a (aa)).

The second option, in steps:

1. Step one, choose what to take out: the group "aa" from each substring of ((aa) (aa)).
2. Step two, take out the chosen components: (aa (...) (...)).
3. Step three, combine the remainders under common brackets: (aa (...)).
4. Step four, discard empty components: (aa).
5. Result: (aa).

Let's complicate the example. Let the string ((aa) (aab) (aab)) be given; it has three substrings located at the same level: (aa), (aab), (aab), all three sharing common content. The factoring rule does not oblige us to carry out the operation on all three substrings at once: any group of substrings may be selected for a factoring operation.

In this case, there are three non-identical options for grouping substrings:

1. (aa), (aab) .
2. (aab), (aab) .
3. (aa), (aab), (aab) .

We will carry out all possible factorings for each of the grouping options, step by step.

Grouping (aa) , (aab) . The string ((aa) (aab) (aab)) .

First option:

1. Choose what to take out: one "a" from each of (aa) and (aab).
2. Take it out: (a (... a) (... ab) (aab)).
3. Combine the remainders: (a (... a ... ab) (aab)).
4. Result #1: (a (aab) (aab)).

The second option:

1. Choose what to take out: the group "aa" from each of (aa) and (aab).
2. Take it out: (aa (...) (... b) (aab)).
3. Combine the remainders: (aa (... b) (aab)).
4. Result #2: (a (b) (aab)).

Grouping (aab) , (aab) . The string ((aa) (aab) (aab)) .

First option:

1. Choose what to take out: one "a" from each of the two (aab) substrings.
2. Take it out: ((aa) a (... ab) (... ab)).
3. Combine the remainders: ((aa) a (... ab ... ab)).
4. Result #3: (a (aa) (aabb)).

The second option:

1. Choose what to take out: the group "aa" from each of the two (aab) substrings.
2. Take it out: ((aa) aa (... b) (... b)).
3. Combine the remainders: ((aa) aa (... b ... b)).
4. Result #4: (aa (aa) (bb)).

The third option:

1. Choose what to take out: the group "ab" from each of the two (aab) substrings.
2. Take it out: ((aa) ab (... a) (... a)).
3. Combine the remainders: ((aa) ab (... a ... a)).
4. Result #5: (ab (aa) (aa)).

Fourth option:

1. Choose what to take out: the mnemonic "b" from each of the two (aab) substrings.
2. Take it out: ((aa) b (... aa) (... aa)).
3. Combine the remainders: ((aa) b (... aa ... aa)).
4. Result #6: (b (aa) (aaaa)).

Fifth option:

1. Choose what to take out: the whole group "aab" from each of the two (aab) substrings.
2. Take it out: ((aa) aab (...) (...)).
3. Combine the remainders: ((aa) aab (...)).
4. Result #7: (aab (aa)).

Grouping (aa) , (aab) , (aab) . The string ((aa) (aab) (aab)) .

First option:

1. Choose what to take out: one "a" from each of the three substrings.
2. Take it out: (a (... a) (... ab) (... ab)).
3. Combine the remainders: (a (... a ... ab ... ab)).
4. Result #8: (a (aaabb)).

The second option:

1. Choose what to take out: the group "aa" from each of the three substrings.
2. Take it out: (aa (...) (... b) (... b)).
3. Combine the remainders: (aa (... b ... b)).
4. Result #9: (aa (bb)).

Action of the abstraction operator. As the example shows, for the source string ((aa) (aab) (aab)) there are nine different first-level factoring options, and nine resulting strings correspond to them. This is how the abstraction operator acts: it goes through all possible factoring options and builds the corresponding set of resulting strings. Moreover, the abstraction operator looks for factoring options not only in the source string but also in all the resulting strings. In other words, the abstraction operator is applied recursively to its own results, until all possible options are exhausted. For obvious reasons, for any finite string the number of possible factoring options is also finite.
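A single factoring step can be sketched as follows (my own nested-list representation; the full abstraction operator would enumerate every admissible choice of substrings and group at every level, and recurse on each result, which is omitted here):

```python
def factor_out(line, indices, group):
    """Take the multiset `group` out of the same-level sub-brackets of
    `line` selected by `indices`; the remainders are combined under one
    common bracket, and an empty remainder is discarded."""
    remainder = []
    new_line = []
    for i, comp in enumerate(line):
        if i in indices:
            rest = list(comp)
            for g in group:
                rest.remove(g)       # remove one copy of each factored item
            remainder += rest
        else:
            new_line.append(comp)    # untouched components stay in place
    new_line += list(group)          # the factored group moves up one level
    if remainder:                    # empty components are discarded
        new_line.append(remainder)
    return new_line
```

For example, factoring "a" out of both substrings of ((ab) (ac)) gives (a (bc)), and factoring "aa" out of both substrings of ((aa) (aa)) gives (aa), matching the worked examples above (up to permutation of components).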

Let's return to the previous example. Above I wrote out not all possible options, but only the nine first-level ones. To illustrate the full action of the abstraction operator, we must also construct all the factoring options for each of the nine previously obtained results. We write out all the options in a more compressed form.

Result #1 (a (aab) (aab)):
1.1. Take out "a": => (aa (aabb)).
1.2. Take out "aa": => (aaa (bb)).
1.3. Take out "ab": => (aab (aa)). * #7
1.4. Take out "aab": => (aaab).
1.5. Take out "b": => (ab (aaaa)).
Result #2 (a (b) (aab)):
2.1. Take out "b": => (ab (aa)).
Result #3 (a (aa) (aabb)):
3.1. Take out "a": => (aa (aabb)). * #1.1
3.2. Take out "aa": => (aaa (bb)). * #1.2
Result #4 (aa (aa) (bb)).
Result #5 (ab (aa) (aa)):
5.1. Take out "a": => (aab (aa)). * #7, * #1.3
5.2. Take out "aa": => (aaab). * #1.4
Result #6 (b (aa) (aaaa)):
6.1. Take out "a": => (ab (aaaa)). * #1.5
6.2. Take out "aa": => (aab (aa)). * #7, * #1.3, * #5.1
Result #7 (aab (aa)).
Result #8 (a (aaabb)).
Result #9 (aa (bb)).

An asterisk marks options that repeat earlier results. Only unique variants are included in the result of abstraction. In the example just parsed there are fourteen unique resulting strings. In total:

abstraction[((aa) (aab) (aab))] =
{
(a (aab) (aab)), (aa (aabb)), (aaa (bb)), (aaab), (a (b) (aab)), (ab (aa)), (a (aa) (aabb)), (aa (aa) (bb)), (ab (aa) (aa)), (b (aa) (aaaa)), (ab (aaaa)), (aab (aa)), (a (aaabb)), (aa (bb))
}

For clarity, consider a couple of examples.

The string ((a (b)) (a (b))). Factoring options, the first iteration:

((a (b)) (a (b))) => (a ((b) (b))), taking out "a"; result #1.
((a (b)) (a (b))) => ((b) (aa)), taking out "(b)"; result #2.
((a (b)) (a (b))) => (a (b)), taking out the whole group "a (b)"; result #3.

In the first result one more factoring can be made. The second iteration:

(a ((b) (b))) => (a (b)); result #1.2 coincides with result #3.

In total: abstraction[((a (b)) (a (b)))] = {(a ((b) (b))), ((b) (aa)), (a (b))}

A large example:
abstraction[deduction[(a(b))]] = abstraction[((aa(bb)(bb))(aa(bb)(bb)))] =>
1. ( (aa(bb)(bb)) (aa(bb)(bb)) ) => ( (aab(b)) (aa(bb)(bb)) ).
1.1. ( (aab(b))(aa(bb)(bb)) ) => ( a(aab(b)(bb)(bb)) ).
1.1.1. ( a(aab(b)(bb)(bb)) ) => ( a(aabb(b)(bb)) ).
1.1.1.1. ( a(aabb(b)(bb)) ) => ( a(aabbb(b)) ).
1.1.2. ( a(aab(b)(bb)(bb)) ) => ( a(aabb(bb)) ).
1.1.3. ( a(aab(b)(bb)(bb)) ) => ( a(aabb(b)(bb)) ).
1.1.3.1. ( a(aabb(b)(bb)) ) => ( a(aabbb(b)) ).
1.1.4. ( a(aab(b)(bb)(bb)) ) => ( a(aabbb(b)) ).
1.2. ( (aab(b))(aa(bb)(bb)) ) => ( aa(b(b)(bb)(bb)) ).
1.2.1. ( aa(b(b)(bb)(bb)) ) => ( aa(bb(b)(bb)) ).
1.2.1.1. ( aa(bb(b)(bb)) ) => ( aa(bbb(b)) ).
1.2.2. ( aa(b(b)(bb)(bb)) ) => ( aa(bb(bb)) ).
1.2.3. ( aa(b(b)(bb)(bb)) ) => ( aa(bb(b)(bb)) ).
1.2.3.1. ( aa(bb(b)(bb)) ) => ( aa(bbb(b)) ).
1.2.4. ( aa(b(b)(bb)(bb)) ) => ( aa(bbb(b)) ).
1.3. ( (aab(b)) (aa(bb)(bb)) ) => ( (aab(b)) (aab(bb)) ).
1.3.1. ( (aab(b)) (aab(bb)) ) => ( a(aabb(b)(bb)) ).
1.3.1.1. ( a(aabb(b)(bb)) ) => ( a(aabbb(b)) ).
1.3.2. ( (aab(b)) (aab(bb)) ) => ( aa(bb(b)(bb)) ).
1.3.2.1. ( aa(bb(b)(bb)) ) => ( aa(bbb(b)) ).
1.3.3. ( (aab(b)) (aab(bb)) ) => ( aab((b)(bb)) ).
1.3.3.1. ( aab((b)(bb)) ) => ( aab(b(b)) ).
1.3.4. ( (aab(b)) (aab(bb)) ) => ( ab(aa(b)(bb)) ).
1.3.4.1. ( ab(aa(b)(bb)) ) => ( ab(aab(b)) ).
1.3.5. ( (aab(b)) (aab(bb)) ) => ( b(aaaa(b)(bb)) ).
1.3.5.1. ( b(aaaa(b)(bb)) ) => ( b(aaaab(b)) ).
1.4. ( (aab(b)) (aa(bb)(bb)) ) => ( (aab(b)) (aabb) ).
1.4.1. ( (aab(b)) (aabb) ) => ( a(aabbb(b)) ).
1.4.2. ( (aab(b)) (aabb) ) => ( aa(bbb(b)) ).
1.4.3. ( (aab(b)) (aabb) ) => ( aab(b(b)) ).
1.4.4. ( (aab(b)) (aabb) ) => ( ab(aab(b)) ).
1.4.5. ( (aab(b)) (aabb) ) => ( b(aaaab(b)) ).
2. ( (aa(bb)(bb)) (aa(bb)(bb)) ) => ( (aabb) (aa(bb)(bb)) ).
2.1. ( (aabb)(aa(bb)(bb)) ) => ( (aabb)(aab(bb)) ).
2.1.1. ( (aabb)(aab(bb)) ) => ( a(aabbb(bb)) ).
2.1.2. ( (aabb)(aab(bb)) ) => ( aa(bbb(bb)) ).
2.1.3. ( (aabb)(aab(bb)) ) => ( aab(b(bb)) ).
2.1.4. ( (aabb)(aab(bb)) ) => ( ab(aab(bb)) ).
2.1.5. ( (aabb)(aab(bb)) ) => ( b(aaaab(bb)) ).
2.2. ( (aabb)(aa(bb)(bb)) ) => ( (aabb)(aabb) ).
2.2.1. ( (aabb)(aabb) ) => ( a(aabbbb) ).
2.2.2. ( (aabb)(aabb) ) => ( aa(bbbb) ).
2.2.3. ( (aabb)(aabb) ) => ( aab(bb) ).
2.2.4. ( (aabb)(aabb) ) => ( abb(aa) ).
2.2.5. ( (aabb)(aabb) ) => ( aabb ).
2.2.6. ( (aabb)(aabb) ) => ( ab(aabb) ).
2.2.7. ( (aabb)(aabb) ) => ( b(aaaabb) ).
2.2.8. ( (aabb)(aabb) ) => ( bb(aaaa) ).
2.3. ( (aabb)(aa(bb)(bb)) ) => ( a(aabb(bb)(bb)) ).
2.3.1. ( a(aabb(bb)(bb)) ) => ( a(aabbb(bb)) ).
2.3.2. ( a(aabb(bb)(bb)) ) => ( a(aabbbb) ).
2.4. ( (aabb)(aa(bb)(bb)) ) => ( aa(bb(bb)(bb)) ).
2.4.1. ( aa(bb(bb)(bb)) ) => ( aa(bbb(bb)) ).
2.4.2. ( aa(bb(bb)(bb)) ) => ( aa(bbbb) ).
3. ( (aa(bb)(bb))(aa(bb)(bb)) ) => ( a(aa(bb)(bb)(bb)(bb)) ).
3.1. ( a(aa(bb)(bb)(bb)(bb)) ) => ( a(aab(bb)(bb)(bb)) ).
3.1.1. ( a(aab(bb)(bb)(bb)) ) => ( a(aabb(bb)(bb)) ).
3.1.1.1. ( a(aabb(bb)(bb)) ) => ( a(aabbb(bb)) ).
3.1.1.2. ( a(aabb(bb)(bb)) ) => ( a(aabbbb) ).
3.1.2. ( a(aab(bb)(bb)(bb)) ) => ( a(aabb(bbb)) ).
3.1.3. ( a(aab(bb)(bb)(bb)) ) => ( a(aabbb(bb)) ).
3.1.4. ( a(aab(bb)(bb)(bb)) ) => ( a(aabbb) ).
3.2. ( a(aa(bb)(bb)(bb)(bb)) ) => ( a(aabb(bb)(bb)) ).
3.2.1. ( a(aabb(bb)(bb)) ) => ( a(aabbb(bb)) ).
3.2.2. ( a(aabb(bb)(bb)) ) => ( a(aabbbb) ).
3.3. ( a(aa(bb)(bb)(bb)(bb)) ) => ( a(aab(bbb)(bb)) ).
3.3.1. ( a(aab(bbb)(bb)) ) => ( a(aabb(bbb)) ).
3.3.2. ( a(aab(bbb)(bb)) ) => ( a(aabbb(b)) ).
3.4. ( a(aa(bb)(bb)(bb)(bb)) ) => ( a(aab(bbbb)) ).
3.5. ( a(aa(bb)(bb)(bb)(bb)) ) => ( a(aabb(bb)) ).
3.6. ( a(aa(bb)(bb)(bb)(bb)) ) => ( a(aabb) ).
4. ( (aa(bb)(bb))(aa(bb)(bb)) ) => ( aa((bb)(bb)(bb)(bb)) ).
4.1. ( aa((bb)(bb)(bb)(bb)) ) => ( aa(b(bb)(bb)(bb)) ).
4.1.1. ( aa(b(bb)(bb)(bb)) ) => ( aa(bb(bb)(bb)) ).
4.1.1.1. ( aa(bb(bb)(bb)) ) => ( aa(bbb(bb)) ).
4.1.1.2. ( aa(bb(bb)(bb)) ) => ( aa(bbbb) ).
4.1.2. ( aa(b(bb)(bb)(bb)) ) => ( aa(bb(bbb)) ).
4.1.3. ( aa(b(bb)(bb)(bb)) ) => ( aa(bbb(bb)) ).
4.1.4. ( aa(b(bb)(bb)(bb)) ) => ( aa(bbb) ).
4.2. ( aa((bb)(bb)(bb)(bb)) ) => ( aa(bb(bb)(bb)) ).
4.2.1. ( aa(bb(bb)(bb)) ) => ( aa(bbb(bb)) ).
4.2.2. ( aa(bb(bb)(bb)) ) => ( aa(bbbb) ).
4.3. ( aa((bb)(bb)(bb)(bb)) ) => ( aa(b(bbb)(bb)) ).
4.3.1. ( aa(b(bbb)(bb)) ) => ( aa(bb(bbb)) ).
4.3.2. ( aa(b(bbb)(bb)) ) => ( aa(bbb(b)) ).
4.4. ( aa((bb)(bb)(bb)(bb)) ) => ( aa(b(bbbb)) ).
4.5. ( aa((bb)(bb)(bb)(bb)) ) => ( aa(bb(bb)) ).
4.6. ( aa((bb)(bb)(bb)(bb)) ) => ( aa(bb) ).
5. ( (aa(bb)(bb))(aa(bb)(bb)) ) => ( (bb)(aaaa(bb)(bb)) ).
5.1. ( (bb)(aaaa(bb)(bb)) ) => ( (bb)(aaaab(bb)) ).
5.1.1. ( (bb)(aaaab(bb)) ) => ( b(aaaab(bb)) ).
5.2. ( (bb)(aaaa(bb)(bb)) ) => ( (bb)(aaaabb) ).
5.2.1. ( (bb)(aaaabb) ) => ( b(aaaabb) ).
5.2.2. ( (bb)(aaaabb) ) => ( bb(aaaa) ).
6. ( (aa(bb)(bb))(aa(bb)(bb)) ) => ( (bb)(bb)(aaaa) ).
6.1. ( (bb)(bb)(aaaa) ) => ( b(bb)(aaaa) ).
6.2. ( (bb)(bb)(aaaa) ) => ( bb(aaaa) ).
7. ( (aa(bb)(bb))(aa(bb)(bb)) ) => ( a(bb)(aa(bb)(bb)) ).
7.1. ( a(bb)(aa(bb)(bb)) ) => ( a(bb)(aab(bb)) ).
7.1.1. ( a(bb)(aab(bb)) ) => ( ab(aab(bb)) ).
7.2. ( a(bb)(aa(bb)(bb)) ) => ( a(bb)(aabb) ).
7.2.1. ( a(bb)(aabb) ) => ( ab(aabb) ).
7.2.2. ( a(bb)(aabb) ) => ( abb(aa) ).
8. ( (aa(bb)(bb))(aa(bb)(bb)) ) => ( aa(bb)((bb)(bb)) ).
8.1. ( aa(bb) ((bb)(bb)) ) => ( aa(bb)(b(bb)) ).
8.1.1. ( aa(bb)(b(bb)) ) => ( aab(b(bb)) ).
8.2. ( aa(bb) ((bb)(bb)) ) => ( aa(bb)(bb) ).
8.2.1. ( aa(bb)(bb) ) => ( aab(bb) ).
8.2.2. ( aa(bb)(bb) ) => ( aabb ).
9. ( (aa(bb)(bb))(aa(bb)(bb)) ) => ( a(bb)(bb)(aa) ).
9.1. ( a(aa)(bb)(bb) ) => ( ab(aa)(bb) ).
9.2. ( a(aa)(bb)(bb) ) => ( abb(aa) ).
10. ( (aa(bb)(bb))(aa(bb)(bb)) ) => ( a(bb)(bb) ).
10.1. ( a(bb)(bb) ) => ( ab(bb) ).
10.2. ( a(bb)(bb) ) => ( abb ).

From the list above, all unique resulting strings (to the right of the arrows) must be selected, and this set of unique strings is the result of abstraction[((aa(bb)(bb))(aa(bb)(bb)))]. I will not write out the unique strings, since that would add nothing to the explanation. Below, when considering optimization and practical use of the algorithm, I will refer to this example.

Composition operator. Definition: ∀U ⊂ S, U ≠ ∅: composition[U] ≠ ∅, composition[U] ∈ S. It accepts a set of strings as input and returns one string. Action: the operator prepares content for the next iteration of the algorithm. The abstraction operator produces a set of strings, and at the composition stage the string for the next iteration is selected and assembled from them. I will discuss this in more detail in the sections on optimization and practical use. In the simplest case the composition operator performs a simple concatenation of all abstraction results, and that is how we define it here. Example: composition[abstraction[((a (b)) (a (b)))]] = composition[{(a ((b) (b))), ((b) (aa)), (a (b))}] = ((a ((b) (b))) ((b) (aa)) (a (b))).
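In the simplest case defined above, composition is a sketch of one line: each string of the input set becomes one bracketed component of the resulting string (nested-list representation, as in the earlier sketches):

```python
import copy

def composition(strings):
    """Combine a collection of strings into one string whose components
    are the input strings themselves (simple concatenation)."""
    return [copy.deepcopy(s) for s in strings]
```

Applied to the example above, `composition` of the three abstraction results simply nests them side by side in one outer bracket.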

## Algorithm Properties

The algorithm produces strings. The set of all strings formed as a result of the iterative work of the algorithm will be called the "algorithm output", or simply "output". Definition of output: T_s ≝ {t_n | t_{n+1} = composition[abstraction[deduction[t_n]]]; t_0 = s; t_n, s ∈ S; n ∈ ℕ}. T_s is the output for the seed s. Where T is written without a parameter, the output is meant without reference to a seed. Output property: ∀s, e ∈ S, s ≠ ∅, e ≠ ∅, s ≠ e: T_s ∩ T_e = ∅. This means that each output element uniquely corresponds to its seed; as a result, the output for each seed is unique.

Meaningful interpretation of deduction and abstraction. The physical meaning of the deduction operator is as follows: from the original string, in a universal way, it creates a constructively new object with fundamentally new and more complex internal properties. As an intuitive approximation, we can say that deduction adds qualitatively new information. In turn, the abstraction operator takes the new object apart and thus expresses the information added at the deduction stage in a constructive equivalent. You may notice that the factoring procedure loses information; moreover, for this syntax, factoring out of brackets is a universal way to lose information meaningfully in the absence of any a priori data about the meaning of strings. That is, from the point of view of the algorithm, all possible variants of information loss, computed at the abstraction stage, are in fact the meaning of the strings. Thus, at each step the algorithm creates a new, unique syntactic heuristic, and each following heuristic is fundamentally more complex and more informative than the previous one. At each iteration of the algorithm, new knowledge appears.

## Practical use

The algorithm is a "thing in itself". It thinks, but its thinking is "alien". To gain practical benefit from alien thinking, we must find a common language with it: on the one hand, the alien must be trained, and on the other, we must learn to understand it, so as to finally establish meaningful communication. In general, the paradigm of interaction with the algorithm resembles the well-known principles of interaction with a "black box". For convenience, from here on the thinking algorithm will be called the alien Kolya.

Consider the ideal case. Suppose we have unlimited computing power at our disposal and can afford to compute any number of iterations of Kolya's thinking without worrying about optimization. In this case the following ingredients are required for interaction with Kolya:

1. A digital model of an interactive environment whose meaning is known to us.
2. A digital model of an instrument that can affect the environment.
3. An encoding algorithm that encodes signals from the environment as strings from S.
4. A decoding algorithm that will, in some fuzzy but rationally sound way, decode previously unknown strings from S and convert them into signals for the instrument.

Having such components, one can organize a universal training scheme with unknown motivation, which will allow Kolya to adapt his thinking to the environment. Below is pseudocode.

```
S NextThought(S prevThought, S externalSignal, int exposure = 1)
{
    // pour the external signal into the current thought
    S t = composition[{prevThought, externalSignal}];
    // think for `exposure` iterations
    for (int i = 0; i < exposure; i++)
        t = composition[abstraction[deduction[t]]];
    return t;
}

EnvironmentModel e;
S s = encode(e.GetState());   // encoded initial state of the environment
S o = ∅;                      // no thoughts yet
while (true)
{
    o = NextThought(o, s);                 // think
    e.ImpactTool.perform(decode(o));       // act on the environment
    s = encode(e.GetState());              // perceive the changed state
}
```

The feedback scheme. In the beginning Kolya has no thoughts; Kolya's first thought is the encoded initial state of the environment. At each iteration an external signal is poured into Kolya's thoughts, after which Kolya thinks for the exposure time. The results of thinking are decoded and sent to the instrument; in turn, the action of the instrument somehow changes the state of the environment, and everything repeats. Over time Kolya's thinking will adapt to the environment, and he will begin to show signs of highly organized, subjectively motivated behavior. However, Kolya's motivation will remain unknown. To understand his motivation, at the next stage of training it is necessary to set up experiments, that is, to purposefully change the environment and study Kolya's reactions to the changes. If Kolya's desirable external behavior can be described in terms of some objective function, then training can be directed purposefully toward it.

The decoding problem. Kolya's thoughts must be decoded in order to synthesize the signal for the instrument. The difficulty is that each thought, as I noted in the previous section, is a fundamentally new construction. That is, a hypothetical researcher will never be able to fully understand the content of Kolya's thoughts: part of the content produced by Kolya's thinking, no matter how much it is studied, will forever remain completely unclear. Only some of the most highly organized fragments of thinking can be meaningfully recognized, and this is a fundamental and insurmountable limitation in communication with Kolya. In practical terms, however, this limitation is not fatal: first, the meaningful side of Kolya's thoughts can be clarified indefinitely, and second, there is no need to understand Kolya fully; it is enough to develop a common language within which practical issues can be settled. On the technical side the situation is as follows. The incoming signal describing the state of the environment is encoded in some language whose words and statements are strings from S. The language has its own vocabulary, syntax and semantics. The content of each iteration of thinking will, by grammatical criteria, divide into several categories:

1. Fragments with unknown vocabulary.
2. Fragments with unknown syntax.
3. Fragments with unknown semantics.
4. Grammatically and semantically correct fragments.

The content of all these categories is arbitrary in its manner of occurrence. That is, even grammatically correct fragments arise by accident, and it is not known what meaning Kolya puts into them, since his inner meaning is available only to himself. A priori there are no criteria for correctly linking Kolya's thoughts with the corresponding actions of the instrument, and in this matter we can rely only on Kolya himself: his behavior is arbitrary, and only he can comprehend his own motivation as the degree of organization of his thinking increases. Any rational scheme for responding to Kolya's thoughts will be acceptable and productive here; the question is only the relative effectiveness of different schemes. The basic option is to respond to all grammatically correct fragments, even those absurd in content. In general, everything that can be converted into terms of the original encoding should be converted and reacted to, and so on until Kolya "grows wise" enough for meaningful reactions. And, of course, a more flexible model of the environment, with a larger number of degrees of freedom, will be useful: the environment will, in a sense, be Kolya's body.

The problem of limited computing power. In terms of the amount of computation, the algorithm is heavy: several dozen iterations would plainly exhaust the entire computing power of the planet. One can hope for quantum devices, and for the existence of a quantum analogue of the algorithm, but so far there is only one way out: instead of one enormously complex thought, to think many small and simple thoughts in parallel. There are several technical tricks for this:

1. At the composition stage it is not necessary to include the whole set of abstraction results. For the algorithm to retain its fundamental properties, it suffices to select from the set just two independent resulting strings. The criterion of independence is a difference in the first digits of the hierarchical numbering of the abstraction results. Refer to the large example above: all strings are numbered according to the a.b.c.d principle, and a pair of strings with indices a1.b1.c1.d1..., a2.b2.c2.d2... is called independent if a1 ≠ a2. This means the entire result of abstraction can be split into independent pairs, and a separate computational branch can be run for each pair at the next step. Moreover, it is not necessary to use all the results of abstraction: in the minimal case only one pair of strings can be selected, and all the others discarded (irretrievably lost), and all the principles of thinking will be preserved. Given that results may be dropped, an additional selection stage can be organized, in which content is selected for further computation in some rational way, for example by statistical significance.
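The independence criterion from this trick can be sketched over the hierarchical labels of the large example (a sketch of mine; labels are plain strings such as "1.3.2"):

```python
def independent(label_a, label_b):
    """Two abstraction results with labels a1.b1.c1..., a2.b2.c2...
    are independent iff their first digits differ (a1 != a2)."""
    return label_a.split(".")[0] != label_b.split(".")[0]

def independent_pairs(labels):
    """Enumerate all pairs of mutually independent labelled results;
    each pair can seed its own computational branch."""
    pairs = []
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            if independent(a, b):
                pairs.append((a, b))
    return pairs
```

So "1.3.1" and "2.2" are independent, while "1.3.1" and "1.4" are not.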

2. The second trick is based on the assumption that the more deeply nested the brackets in a string, the less organized their content. Accordingly, the content that "pops up" as a result of factoring is more organized and more abstract from the point of view of Kolya's internal meanings, which means deep nesting levels can be removed. This exponentially reduces the amount of computation at the next iteration. Intuitively, this procedure approximates only the most abstract part of thinking.
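Removing deep nesting levels can be sketched as a pruning pass over the nested-list representation (my own sketch; `max_depth` counts bracket levels below the current line):

```python
def prune(line, max_depth):
    """Drop all sub-brackets nested deeper than `max_depth` levels,
    keeping the shallower, more 'abstract' content."""
    out = []
    for comp in line:
        if isinstance(comp, str):
            out.append(comp)                       # mnemonics always survive
        elif max_depth > 1:
            out.append(prune(comp, max_depth - 1)) # descend with a smaller budget
        # a bracket beyond the depth budget is dropped entirely
    return out
```

For example, pruning (a (b (c))) to two levels drops the innermost (c), leaving (a (b)).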

3. As a result of parallelization into many small branches, the computation grows "wide". This width can be strictly limited by selection, not only at the level of individual computational branches but across the entire array of parallel branches. This can be done through a common pool of fixed size, from which each branch draws strings for its next iteration and into which it drops its results. In addition, the admissible nesting level of brackets in strings can be strictly limited. Such a combined approach restrains and regulates the growth of the volume of computation.
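The fixed-size shared pool can be sketched as follows (a sketch under my own assumptions: random eviction stands in for whatever rational selection rule, such as statistical significance, is actually used):

```python
import random

class Pool:
    """A fixed-size common pool: branches drop results in and draw
    strings out; overflow is evicted to bound total volume."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def drop(self, string):
        """A branch deposits a result; evict at random past capacity."""
        self.items.append(string)
        while len(self.items) > self.capacity:
            self.items.pop(random.randrange(len(self.items)))

    def draw(self):
        """A branch draws a string for its next iteration."""
        return random.choice(self.items)
```

Each computational branch would call `draw()` before its iteration and `drop()` after it, so the pool's capacity caps the width of the whole computation.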

## Interpretation and comments

On proof. There is no proof and there can be none: any theory of thinking is a matter of definition. The presented algorithm is a constructive theory of thinking, and is therefore an axiom. The thinking algorithm is apodictically recognizable by the subject of thinking. Recognition can be eased by first resorting to a non-constructive axiomatics, which is more intuitive, and then finding the coincidence between the properties of the constructive and non-constructive definitions.

A non-constructive definition of thinking. Thinking is non-algorithmic production of content. In an intuitive sense, non-algorithmic phenomena have the following specific features: independence, spontaneity, uniqueness, self-organization, arbitrariness, subjectivity, complexity, fundamental unpredictability and uncertainty, the absence of conceptual barriers and, in the widest sense, the inalienable and timeless possibility of fundamental novelty. All the listed features are in some way inherent in the described algorithm of thinking. Although the combination of an algorithm with non-algorithmic properties seems non-intuitive and even contradictory, in fact there is no contradiction: the algorithm deploys content using well-defined algorithmic procedures, but in the process of deployment the content itself has no algorithmic organization.

Additional points of view on the algorithm. The thinking algorithm is, among other things:

1. A constructive implementation of metaphor. Thinking is essentially metaphorical: there are no meanings other than figurative (possible) ones. However, within the figurative, literal meanings (algorithms) are possible.
2. A model of self-organization of absolute chaos; a model of conceptual spontaneity.
3. A model of absolutely independent, subjectively motivated behavior; a model of creativity.
4. A self-organizing language.
5. A model of constructive approximation for non-constructive, purely possible semantics.

Consciousness. The issue of consciousness is also solved at the level of definition. Consciousness is that which lies outside any conceptual constraints. In the light of such a definition, consciousness can only be talked about in more or less complex tales, each of which will reflect some possibilities of consciousness, but none of which will be true. At the same time, tales of consciousness differ in heuristic potential: of all the tales, the more complex ones are the more useful. From the point of view of algorithms, consciousness is a trans-algorithmic, infinitely complex (or simply complex) object. A tale of consciousness can be written using the algorithm. It sounds like this:

lim_{n→∞} [t_{n+1} = composition[abstraction[deduction[t_n]]]]; t_0 = s; t_n, s ∈ S; n ∈ ℕ