A childhood dream, or the search for an alternative to the Turing machine

    Around 1993 (I was in the 2nd grade then), after watching a VHS copy of “Terminator 2: Judgment Day” with an “underground” voice-over translation, I had a childhood dream: to build a robot that could not only fight, but also do my Russian-language homework for me in my own handwriting, so that the teacher would not notice (I strongly disliked that subject at the time).
    Time has passed, but even now we can say without exaggeration that the artificial-intelligence capabilities attributed to the so-called neuroprocessor of the terminator robot played by Arnold Schwarzenegger remain fantastic. After all, for any problem to be solved by computer technology, it must first of all be formalized. And since today there is no single, complete formal description of artificial intelligence, this issue remains unresolved. So far the very expression “artificial intelligence” is rather subjective, applicable only to individual tasks (this is my personal opinion; maybe I am wrong). But even if the processes occurring in the human brain could be described by mathematical formulas, that is, if the very way to formalize artificial intelligence were found, it is unlikely that modern computer technology would allow it to be realized. The point is that, as of today, any formalized algorithm can be implemented in two ways:
    • software implementation (on microprocessor technology);
    • hardware implementation (usually on programmable logic).
    Microprocessors and programmable logic chips (FPGAs) are two fundamentally different ways of implementing algorithms. There is also a third option, systems on a chip, but this is nothing more than placing microprocessors (both hard cores and soft cores) and programmable logic on the same die and then distributing tasks between them; that is, a mixture of the first two options.
    Nowadays, of course, neural processors have already appeared (one of them is described here), though they are not the same thing as the terminator's neuroprocessor. In fact, an artificial neuron is nothing more than a mathematical model, an attempt to formalize the work of a biological neuron, which, again, is realized either in software on processors, or in hardware on programmable logic, or in an ASIC (roughly speaking, the same thing as an FPGA, except that in an FPGA the connections between logic gates are configured programmatically, while in an ASIC they are fixed in hardware). As you can see, there is nothing fundamentally new here.
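    To show just how simple that mathematical model is, here is a minimal sketch of a formal neuron (weighted sum of inputs passed through an activation function); the sigmoid activation and the specific weights are illustrative assumptions, not taken from any particular neural processor:

```python
import math

def neuron(inputs, weights, bias):
    """A formal neuron: weighted sum of inputs plus a bias,
    passed through an activation function (here, the logistic sigmoid)."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Example: two inputs with equal weights; the output is always in (0, 1)
out = neuron([1.0, 0.0], [0.5, 0.5], bias=-0.25)
```

    Whether this formula is evaluated by a CPU, an FPGA, or an ASIC changes only the implementation medium, not the model itself.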
    Microprocessors and FPGAs are two completely different topics; from here on we will talk about microprocessors.
    I think it is no secret to anyone that almost all modern processors are, one way or another, implementations (this is, of course, very loose, but nonetheless) of the Turing machine. Their distinguishing feature is that the next command to be executed is fetched from memory sequentially: its address is either generated successively by the program counter or contained in certain fields of the previous command. Obviously, processors working on such principles are even farther from the capabilities of that terminator than a copper pot is from walking to China. Hence the question: is there an alternative way of implementing software algorithms that could take machine computation to a fundamentally new level without going beyond the capabilities of modern electronics? In university courses on the organization of computers and systems it is taught, among other things, that all existing mechanisms for the software implementation of algorithms work as follows:
    • the next command (or a set of independent commands) is read from memory and executed when the previous command (or set of independent commands) has completed; as mentioned above, most modern processors work this way, implementing the principles of the Turing machine;
    • a command is read from memory and executed when all of its operands are available (dataflow, or stream, computations);
    • commands are combined into procedures, each of which is internally executed as in the first case (that is, inside each procedure the commands are executed sequentially, on Turing-machine principles), while externally the procedures themselves are executed as in the second case, that is, as all of their operands become available (macro-dataflow computations, a mixture of the first two options);
    • a command is read from memory and executed when other commands request the results of its execution (reduction computations).

    As was said in one of the publications posted on habrahabr.ru (a more specific link will be given later in the text), any computational algorithm is a set of mathematical formulas by means of which it is implemented. From that same publication we take the formula given there as an example, g = e * (a + b) + (a - c) * f, and compose a flowchart of the algorithm for a classical processor with the following structure (simplified and very conventional, but it will do for the example):
    [image: structure of the example classical processor]
    “Our” processor has a Harvard architecture and consists of two general-purpose registers, an arithmetic logic unit (ALU), an accumulator register, a program counter, and an instruction decoder. The value fed to input port “A” of the ALU can come either from a general-purpose register or from the accumulator; this is implemented with a multiplexer. So, the flowchart of the algorithm for this processor will look like this:
    [image: flowchart of the algorithm for the example processor]
    Each operation block in the flowchart is an analog of a cell on the tape of a Turing machine, and the program counter is an analog of the read head. In total, computing the example formula requires traversing 13 cells (performing 13 actions).
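    The strictly sequential character of such a machine can be sketched in a few lines. The single-accumulator instruction set below is hypothetical (it is not the exact ISA of the processor in the figure, so the step count differs slightly from the 13 cells of the flowchart); what matters is that the program counter advances through exactly one instruction at a time:

```python
# Hypothetical single-accumulator machine (illustration only, not the
# exact ISA from the flowchart above).
def run(program, mem):
    acc = 0
    steps = 0
    for op, arg in program:  # the "program counter" advances strictly sequentially
        if op == "LOAD":
            acc = mem[arg]
        elif op == "ADD":
            acc += mem[arg]
        elif op == "SUB":
            acc -= mem[arg]
        elif op == "MUL":
            acc *= mem[arg]
        elif op == "STORE":
            mem[arg] = acc
        steps += 1
    return steps

mem = {"a": 2, "b": 3, "c": 1, "e": 4, "f": 5}
program = [
    ("LOAD", "a"), ("ADD", "b"), ("MUL", "e"), ("STORE", "t1"),  # t1 = e*(a+b)
    ("LOAD", "a"), ("SUB", "c"), ("MUL", "f"),                   # acc = (a-c)*f
    ("ADD", "t1"), ("STORE", "g"),                               # g = e*(a+b)+(a-c)*f
]
steps = run(program, mem)
# mem["g"] == 4*(2+3) + (2-1)*5 == 25, computed in 9 strictly sequential steps
```

    Note that the two independent sub-expressions e*(a+b) and (a-c)*f are forced through the single accumulator one after the other, even though nothing in the mathematics requires that ordering.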
    As for dataflow (stream) computing, here it is described what such computations are and how they are implemented. For our example formula, the dataflow graph (the analog of the algorithm's flowchart) will have the following form:
    [image: dataflow graph for the example formula]
    For a stream processor with the following structure:
    [image: structure of the stream processor]

    the calculation, with instructions distributed among the processing elements, will look like this:

    [image: distribution of instructions among the stream processor's elements]
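    The firing rule of such a machine — an instruction executes as soon as all of its operands exist, regardless of program order — can be sketched with a toy interpreter. This is an illustrative model only, not the instruction set of any real stream processor:

```python
# Toy dataflow interpreter: each instruction fires as soon as all of its
# source operands are available; instructions that are ready at the same
# time fire in the same "wave", modeling parallel execution.
def dataflow(instructions, values):
    values = dict(values)
    pending = list(instructions)
    waves = 0
    while pending:
        ready = [i for i in pending if all(src in values for src in i[2])]
        if not ready:
            raise RuntimeError("deadlock: some operands can never be produced")
        for dst, op, srcs in ready:  # all ready instructions fire "in parallel"
            x, y = (values[s] for s in srcs)
            values[dst] = {"add": x + y, "sub": x - y, "mul": x * y}[op]
            pending.remove((dst, op, srcs))
        waves += 1
    return values, waves

prog = [
    ("t1", "add", ("a", "b")),   # a + b
    ("t2", "sub", ("a", "c")),   # a - c
    ("t3", "mul", ("e", "t1")),  # e * (a + b)
    ("t4", "mul", ("t2", "f")),  # (a - c) * f
    ("g",  "add", ("t3", "t4")), # the final sum
]
values, waves = dataflow(prog, {"a": 2, "b": 3, "c": 1, "e": 4, "f": 5})
# values["g"] == 25; the whole formula completes in 3 waves instead of
# 9 sequential accumulator steps, because t1/t2 and t3/t4 fire in parallel
```

    The order of the tuples in `prog` is deliberately irrelevant: data dependencies alone determine the schedule.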
    Relatively recently, one Russian company announced the completion of the development of a processor with such an architecture. The company is called Multiclet OJSC. In their design each processing element is called a cell, hence the name: multicellular processor. There are many publications on habrahabr.ru devoted to this processor, for example this one. It is the publication I promised to link to earlier, and from it I took the example formula and the dataflow graph.
    In general, when Multiclet OJSC announced the development of this architecture, the news was presented in such a way that I thought a revolution was about to occur in the computer systems market, but nothing of the kind happened. Instead, publications about the shortcomings of the multicellular architecture began to appear on the Internet. Here is one of them. Still, for all that, the design turned out to be functional and competitive.
    Another interesting way of organizing computations, different from the Turing machine, is reduction computations. In the previous case each command is executed as soon as all of its operands are available, but with such an approach a situation may arise where the results of an executed command are never needed, and it turns out that time and hardware resources were wasted. Reduction computations overcome this drawback: the execution of a command is initiated by a request for its result. The mathematical basis of such computations is the λ-calculus. To compute our formula, the whole process begins with a request for the result g, which in turn generates requests for the operations e * (a + b) and (a - c) * f, and these operations form requests to compute the values a + b and a - c, and so on:
    [image: graph of demand-driven (reduction) computation of the example formula]
    Reduction computations themselves consist of recognizing redexes and then replacing them with the results of the previously requested commands once those have executed. In the end, the entire computation reduces to the desired result. I have found no descriptions of real processors with such an architecture anywhere, neither in the literature nor on the Internet; maybe I searched poorly...
    Currently, research is already underway into computer systems that work on fundamentally new principles, such as quantum computing, photonic computing, etc. Obviously, computers built on such technologies will surpass all modern ones and bring schoolchildren closer to the moment when a piece of iron can be made to do their Russian homework for them. But the question of creating a fundamentally new way of implementing machine computation, without going beyond the scope of modern electronics, still remains relevant.
