Pitfalls of Silicon Electronics: Problems and Solutions
Welcome, dear readers, to the iCover Blog Pages! It is fairly safe to say that the monopoly of silicon chips is unlikely to be challenged in the near future. Silicon, the second most abundant element in the Earth's crust after oxygen, has become an integral component of our technological civilization. At the same time, further miniaturization of the silicon transistor, the basis of today's computing devices, runs into a number of technological problems, which forces scientists to look for alternatives to this seemingly irreplaceable material. In this article we describe the directions in which the search is being conducted and how successful the steps taken so far have been.

Silicon electronics has completely changed our world and made a unified information space possible. Silicon (Si), the main constituent of quartz and river sand, is present on Earth in enormous quantities; at the end of the 1940s it was still regarded as a useless and capricious material, yet it gave us electronic devices and information technology, turning into an engine without which our civilization in its present form could not exist.
The revolutionary changes in information and computing systems that took place literally within the lifetime of a single generation became possible thanks to the continuous miniaturization of the transistor, the key "workhorse" of solid-state electronics, which in its time replaced vacuum tubes and mechanical relay switches. Mechanical switches of this kind were used in Konrad Zuse's Z1, the first binary computer, built in 1938.
Let us ask a question: how long will shrinking the transistor, with the accompanying growth in processor performance, remain technologically and economically justified? Reducing transistor size made it possible to put 100,000 of them on a single chip by 1982 (1.5-micron technology), 100,000,000 by 2003 (90 nm technology), and almost 10,000,000,000 today. Processor clock frequencies, which set the number of operations per second, also grew steadily, but only up to a point: from 10 MHz in 1982 to about 4 GHz in 2003, after which this figure has hardly grown at all. Why?
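As a quick sanity check of the figures above, a minimal sketch (assuming "today" means roughly the mid-2010s) shows that both intervals imply a transistor count that doubles about every two years, i.e. the familiar Moore's law cadence:

```python
# Back-of-the-envelope check of the growth figures quoted above
# ("today" is taken here as roughly 2016; dates and counts come from the article).
import math

def doubling_time(n0, n1, years):
    """Average doubling period implied by growth from n0 to n1 over `years` years."""
    return years * math.log(2) / math.log(n1 / n0)

print(doubling_time(1e5, 1e8, 2003 - 1982))   # ~2.1 years per doubling, 1982-2003
print(doubling_time(1e8, 1e10, 2016 - 2003))  # ~2.0 years per doubling, 2003-2016
```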
The reason for this apparent contradiction lies in the fundamental operating principle of modern processors, in which binary bits are encoded as electron charge on the plates of a capacitor (the energy equivalent of a bit is E = CV²/2, where C is the capacitance and V is the voltage across its plates). To manipulate bits at all, a computing device needs enough energy to distinguish a bit's value from thermal noise. At the same time, in all information-processing systems that exist today, every change in the state of a bit is accompanied by the release of a certain portion of thermal energy. As the clock frequency rises, these portions of energy are released more and more often, while the chip stays the same size.
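To get a feel for the scales involved, here is a rough estimate comparing the stored bit energy E = CV²/2 with the thermal energy k_B·T at room temperature; the capacitance and voltage are illustrative assumptions, not figures from the article:

```python
# Rough estimate of the energy stored per bit, E = C*V^2/2, versus the thermal
# noise scale k_B*T. The capacitance and voltage values are assumed for illustration.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T   = 300.0             # room temperature, K
C   = 1e-15             # assumed node capacitance, 1 fF
V   = 1.0               # assumed supply voltage, 1 V

E_bit     = 0.5 * C * V**2          # energy of the stored bit, J
E_thermal = k_B * T                 # characteristic thermal energy, J

print(f"E_bit   = {E_bit:.2e} J")             # ~5e-16 J
print(f"k_B * T = {E_thermal:.2e} J")         # ~4e-21 J
print(f"ratio   = {E_bit / E_thermal:.1e}")   # the bit energy sits far above thermal noise
```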
Technological progress can, of course, reduce capacitances and operating voltages, but it cannot compensate for the inevitable growth in power density. This not particularly rational approach paid off only until the problem of heat removal began to demand drastic measures.
To visualize the scale of the problem, recall that the 8086 microprocessor, manufactured in 1978 on a three-micron process, contained 29,000 transistors and, running at 4.77 MHz, needed no heat sink: it dissipated no more than 1.5 watts. The Pentium 4 Prescott, released in 2004 on a 90 nm process and running at 3 GHz, contained 125 million transistors and dissipated about 100 watts of heat. At that point designers came right up against the limit of power that can be removed by air cooling. This is why a laptop resting on your knees burns your legs and a desktop becomes part of the heating system. A modern supercomputer consuming about 5 megawatts (the equivalent of 1,000 four-burner electric stoves running at full power simultaneously) requires a specially cooled room. And the Google data center being built in cold Norway will consume 200 megawatts and will be cooled with water from the nearest fjord.
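A crude back-of-the-envelope comparison (total package power divided by transistor count, ignoring activity factors) makes the trend explicit:

```python
# Crude comparison of average power per transistor for the two chips mentioned
# above. Real per-device switching power depends on the activity factor,
# which is ignored here.
chips = {
    "Intel 8086 (1978, 3 um, 4.77 MHz)":        (1.5,   29_000),
    "Pentium 4 Prescott (2004, 90 nm, 3 GHz)":  (100.0, 125_000_000),
}
for name, (power_w, transistors) in chips.items():
    print(f"{name}: {power_w / transistors:.2e} W per transistor")
# 8086:     ~5e-5 W per transistor
# Prescott: ~8e-7 W per transistor
# Per-device power fell by ~60x, but the device count grew by ~4300x,
# so total power (and power density) rose sharply.
```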

Illustration of the increase in power density dissipated on a chip, according to data from 1970 to 2012 (University of Notre Dame, USA)
Here it is appropriate to quote the International Technology Roadmap for Semiconductors: "Power management is now the primary issue across most application segments due to the 2× increase in transistor count per generation while cost-effective heat removal from packaged chips remains almost flat" (ITRS 2013). In other words, power management has become the central problem for most applications, because the transistor count doubles with each generation while cost-effective heat removal from packaged chips stays nearly flat.
In other words, the density of transistors on an air-cooled chip is already such that switching them all at once would melt the chip. This is what forces designers to resort to "dark silicon" modes, in which some region of the chip temporarily "goes to sleep", carrying no current and generating no heat. It is also important to note that shrinking transistors below 10 nm (on the order of 20 silicon atoms across) aggravates the heat-removal problem because of leakage caused by quantum-mechanical tunneling (so-called "passive" leakage).
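A toy budget calculation (with purely illustrative numbers, not data from the article) shows how the dark-silicon constraint arises:

```python
# Toy illustration of the "dark silicon" budget: if every actively switching
# transistor dissipates p_active watts and the package can remove only
# P_budget watts, only a fraction of the chip may be active at once.
# All numbers below are assumptions made for illustration.
P_budget  = 100.0   # W that air cooling can realistically remove
n_devices = 10e9    # ~10 billion transistors on a modern chip
p_active  = 5e-8    # assumed average power per actively switching device, W

P_all_on        = n_devices * p_active        # power if everything switched at once
active_fraction = min(1.0, P_budget / P_all_on)

print(f"Power with all devices active: {P_all_on:.0f} W")       # 500 W
print(f"Allowed active fraction:       {active_fraction:.0%}")  # ~20%; the rest stays 'dark'
```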
Another problem is economic feasibility. Individual transistors of this size are not difficult to manufacture with electron-beam lithography. But when it comes to mass production, which entails serious costs at every stage of the technological cycle with no real prospect of making the process cheaper, the return on investment in such production becomes questionable.
It should be noted that there are already technologies that open the way to "horizontal" development, without further shrinking the devices. This principle is used in multi-core processors, in which several cores on the same die run in parallel. Another example of a temporary way out of the deadlock is the actively developed system-on-a-chip (SoC) concept, which involves building specialized processors for specific tasks. Still, improvisation with architecture can hardly be called a fundamental solution, since no alternative to the basic "bricks" (silicon field-effect transistors) is offered.
Is there reason to speak of a technological limit? It seems so, and the active search for alternative solutions confirms it.
One approach is to look for a solution within standard binary digital logic, improving the parameters of today's silicon field-effect transistors and minimizing the useless passive power dissipated through leakage. Certain hopes here are pinned on tunnel field-effect transistors (TFETs), which are based on quantum-mechanical tunneling. An alternative is nanoelectromechanical relays, which ideally have no leakage at all (a kind of nanoscale reincarnation of Konrad Zuse's relay idea).
One contender for the role of a silicon replacement is graphene. Today, however, graphene behaves as a semiconductor suitable for transistors only in the form of "nanoribbons", which are not amenable to mass production, so at the current stage of technological development it cannot be considered a serious competitor to silicon.
Another direction is so-called spintronics, in which the binary variable is encoded not in the electron's charge but in its spin (intrinsic magnetic moment). Being extremely costly to implement, this technology at its current stage does not demonstrate fundamental advantages over silicon solutions.
Of course, one of the highest-priority and most promising directions in the search for an alternative over the past 15 years has been the quantum computer and everything connected with it. The practical implementation of such a solution faces several problems at once. First, building a processor with enough qubits to be useful is still problematic, and the prospects are rather vague. Second, it has proved surprisingly hard to pin down the range of tasks that quantum computers would solve better than existing ones. Apart from integer factorization and a number of other specific problems, the advantages of quantum computers for everyday tasks are far from obvious. Quantum computers are of the greatest interest precisely where quantum systems themselves are the object of the calculation.
Are other solutions being considered? Yes, and here it is worth asking: can the energy released as heat when a bit changes state be minimized?

Illustration of theoretically achievable chip power dissipation as a function of microprocessor clock frequency for conventional and reversible logic. (University of Notre Dame, USA)
The dependencies shown in the graph are not the only ones possible. An example is so-called adiabatic reversible logic (ARL), based on Landauer's principle, according to which converting the energy of a bit into heat is unavoidable only when information is erased. Accordingly, if erasure is replaced by a recycling process in which most of the bit's energy is returned to the power supply, significant losses can, it turns out, be avoided.
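The quantitative content of Landauer's principle is a lower bound of k_B·T·ln 2 on the heat released per erased bit; a one-line estimate puts this in perspective:

```python
# The Landauer bound: erasing one bit at temperature T dissipates at least
# k_B * T * ln(2). At room temperature this is a tiny amount of heat compared
# with what a conventional irreversible gate dissipates per switching event,
# which is why avoiding erasure leaves so much headroom in principle.
import math

k_B = 1.380649e-23   # J/K
T   = 300.0          # K

E_landauer = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {E_landauer:.2e} J per erased bit")  # ~2.9e-21 J
```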
Studies have confirmed that in ARL mode a chip can be made to generate orders of magnitude less heat than chips built on standard irreversible logic. Of course, this architectural advantage has to be paid for with auxiliary on-chip systems devoted solely to managing the flow of energy. The key element of such a system can be a microelectromechanical resonator acting as a local energy "distributor", both delivering energy to the consumer (the transistor circuitry) and taking it back from those devices in the opposite direction so that it can be reused (recycled). This increases the number of operations the device can execute per unit of time within the same power budget. The practical payoff of such a solution would be, for example, a smartphone that needs charging once a year, or a laptop whose computing power grows tens or even hundreds of times within the same energy budget.
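The underlying physics can be sketched with the textbook comparison between abrupt and slow (adiabatic) charging of a node capacitance; the component values below are illustrative assumptions:

```python
# Sketch of why slow ("adiabatic") charging wastes less energy. Charging a node
# capacitance C to voltage V through resistance R in one abrupt step dissipates
# C*V^2/2 in the resistance, no matter how small R is. Ramping the supply over
# a time T_ramp >> R*C instead dissipates roughly (R*C/T_ramp) * C*V^2,
# which can be driven far below C*V^2/2. Values are assumed for illustration.
C, V, R = 1e-15, 1.0, 1e4        # 1 fF node, 1 V swing, 10 kOhm switch
T_ramp  = 1e-9                   # 1 ns supply ramp

E_abrupt    = 0.5 * C * V**2
E_adiabatic = (R * C / T_ramp) * C * V**2

print(f"Abrupt charging:    {E_abrupt:.2e} J")     # 5.0e-16 J
print(f"Adiabatic charging: {E_adiabatic:.2e} J")  # 1.0e-17 J
print(f"Savings factor:     {E_abrupt / E_adiabatic:.0f}x")  # ~50x with these values
```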
At the same time, although ARL looks like a rather promising solution that can reuse existing technologies, it is clearly premature to speak of a fundamental breakthrough: the same transistors are still used, with all the drawbacks that appear as they shrink (for example, ARL offers no advantage against the key problem of passive quantum-mechanical leakage).
Can we do without transistors altogether? Let us consider one option: the concept of quantum-dot cellular automata (QCA).
A cellular automaton is a computing device made up of a set of identical "cells", rather like bricks in a Lego construction set, from which quite complex devices can be assembled. At any given moment each cell is in one of two states, and changes in a cell's state over time are logically tied to its previous state and to the states of its neighbors (the cell's "neighborhood"). In 1993, Wolfgang Porod and Craig Lent of the University of Notre Dame (USA) proposed a physical implementation of such an automaton based on electrostatically coupled quantum dots.

An illustration of the two states of a QCA cell and of a full binary adder assembled from such cells. Red circles are quantum dots occupied by electrons; the electrons can tunnel between them and the empty (white) dots. Because of Coulomb repulsion the electrons settle into one of two energetically equivalent diagonal configurations, which encode binary zero and one. (University of Notre Dame, USA)
The minimal cell in the QCA architecture consists of four quantum dots located at the corners of a square. Each such cell contains two electrons. Coulomb repulsion pushes them into mutually opposite corners of the square, so the two possible "diagonal" arrangements correspond to two states of equal energy, which are interpreted as the one and the zero of binary code.
How it works. (University of Notre Dame, USA)
QCA architectures are built on simple rules of interaction between cells placed on the chip surface, combining the idea of a cellular automaton with quantum mechanics. This architecture makes it possible to build nanodevices that unite substantial computing power with extremely low power consumption. Just as important, QCA can serve as the basis not only for high-performance, energy-efficient processors with conventional deterministic binary logic, but also for architecturally similar quantum computers.
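To make the "rules of interaction" concrete, here is a deliberately simplified toy model (not the authors' formalism, only an illustration of the majority-vote behavior of a QCA cell) showing how AND and OR gates fall out of a three-input majority cell:

```python
# Toy model of QCA logic: each cell carries a binary polarization (+1 or -1),
# and a "device" cell settles into whichever polarization matches the majority
# of its neighbours, since that minimizes its electrostatic energy. This is a
# heavy simplification of the real physics, included only for illustration.
def majority(a: int, b: int, c: int) -> int:
    """Three-input QCA majority cell: the output follows the majority polarization."""
    return 1 if (a + b + c) > 0 else -1

def qca_and(a: int, b: int) -> int:
    # Fixing one input of a majority gate to -1 (logic 0) yields AND.
    return majority(a, b, -1)

def qca_or(a: int, b: int) -> int:
    # Fixing one input to +1 (logic 1) yields OR.
    return majority(a, b, 1)

for a in (-1, 1):
    for b in (-1, 1):
        print(a, b, "AND:", qca_and(a, b), "OR:", qca_or(a, b))
```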

Illustration of a single-electron transistor with metal electrodes and oxide tunnel junctions with an area of about 4,000 square nanometers. (University of Notre Dame, USA)
The readout element for a QCA cellular automaton can be the so-called single-electron transistor, a nanoelectronic device that makes it possible to detect the switching of a single electron in a cell. A working prototype of such a device, using metal "quantum dots" at low temperatures (~100 mK), was first demonstrated back in 1997. The role of the power supply for such a processor can be played by an on-chip multiphase clock generator built around a resonator and capable of both delivering and absorbing energy. Incidentally, a single-electron logic circuit with a clock generator (the "single-electron parametron"), which found application in the QCA architecture, was proposed by A. Korotkov and K. Likharev.
One of the cornerstones is that the characteristic energy scale of the barrier separating binary 0 and 1 in electronic QCA is set by the electrical capacitance C of the system: E = e²/C. Accordingly, for such a QCA scheme to work at room temperature, the cell size must not exceed about 5 nanometers. The creation of a single working cell of this kind at room temperature was first demonstrated in 2009 by a group led by Robert Wolkow. Unfortunately, these results have never been carried through to commercial solutions.
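A crude estimate of why such small cells are needed can be made by modeling a dot as a tiny sphere embedded in silicon (both the sphere model and the permittivity value are assumptions made here for illustration) and comparing the charging energy e²/C with k_B·T:

```python
# Rough estimate of why room-temperature electronic QCA needs cells of only a
# few nanometres. Model a dot as a small sphere of radius r with
# self-capacitance C = 4*pi*eps0*eps_r*r, so the charging energy is E = e^2/C.
import math

e     = 1.602176634e-19   # elementary charge, C
eps0  = 8.8541878128e-12  # vacuum permittivity, F/m
k_B   = 1.380649e-23      # J/K
T     = 300.0             # K
eps_r = 11.7              # assumed relative permittivity (silicon host)

for r_nm in (100, 20, 5, 1):
    r = r_nm * 1e-9
    C = 4 * math.pi * eps0 * eps_r * r
    E = e**2 / C
    print(f"r = {r_nm:>3} nm: E = {E:.2e} J, E / (k_B*T) = {E / (k_B * T):.2f}")
# Only at the few-nanometre scale does e^2/C become comparable to, and then
# exceed, k_B*T, consistent with the ~5 nm cell-size requirement quoted above.
```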

Illustration of a full binary adder built from nanomagnets measuring roughly 80 × 60 nm: magnetic force microscopy image on the right, electron micrograph on the left. The magnetic poles that form appear as bright and dark regions. (University of Notre Dame, USA)
The principles underlying the QCA concept make it possible to build not only nanoelectronic but also nanomagnetic processors, in which the role of the Lego bricks is played by nanomagnets less than 100 nm in size with two fixed directions of magnetization. Logic nodes of such devices, made of permalloy, were demonstrated back in 2006 and successfully combined the functions of memory and logic.
In the future, the operation of such cellular automata is expected to rely on minimal switching power and adiabatic reversible circuitry, which would reduce useless energy dissipation to a minimum. Such automata could be assembled from specially designed molecules, each containing single-bit cell elements. Solutions of this kind would allow a record density of elements on a single chip (up to 10¹²/cm²).
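For scale, a one-line estimate of what that density implies for spacing:

```python
# Quick check of what an element density of 1e12 per cm^2 implies:
# the average pitch is simply 1/sqrt(density).
import math

density_per_cm2 = 1e12
pitch_cm = 1.0 / math.sqrt(density_per_cm2)
print(f"Average cell pitch: {pitch_cm * 1e7:.0f} nm")  # 10 nm
# i.e. roughly one single-bit molecular cell every 10 nm in each direction.
```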

Illustration of a QCA cell formed on a molecular complex (University of Notre Dame, USA)
At the same time, even though the chemical components for such molecules are available, the molecules themselves can be synthesized, and their behavior can be calculated in detail, no one has yet managed to assemble anything functional from them in a controlled way. This will require fundamentally new methods for controlling the assembly of functional devices at the molecular level, and for now the question remains open.
Quantum computing on CMOS transistors: a figment of the imagination or a real prospect?
Is it possible to implement a quantum bit (qubit), the fundamental cell of a quantum computing system, on the basis of a miniature analogue of a conventional CMOS transistor?
This is precisely the question posed by a joint group of scientists from Hitachi's Cambridge laboratory (Great Britain) together with specialists from Japan, France and Ukraine working on the European project TOLOP (TOwards LOw Power information and communication technologies).
The scientists were able to show that transistors manufactured with CMOS technology can be shrunk to a size at which they can take on the tasks performed by qubits: in other words, they can assume one of two quantum states or remain in a quantum superposition of the two.
"We wanted to show that the same technology that is used for our computers can be used for quantum computing experiments," says Fernando Gonzalez-Zalba, who led the research group.
In the experiment, the scientists managed to set, store, and read out the quantum state of the CMOS qubit through the gate of the field-effect transistor.
To obtain a "CMOS qubit", the scientists built field-effect transistors whose gate wraps around the channel at two right angles and surrounds it on three sides. The channel itself, lying horizontally on a silicon substrate, is an elongated nanowire; at its center sits the gate structure, which acts as the control electrode.

The electric field is concentrated around the periphery of the transistor channel and reaches its maximum at the edges (corners) of the nanowire. Using quantum tunneling at temperatures below 20 K, it is possible to isolate a single electron and move it between the quantum dots that form at these corners. The distribution of the electron between the opposite edges of the nanowire sets the desired quantum state of the transistor qubit. Under certain conditions the electron can, in effect, occupy both positions at once, which corresponds to a state of quantum superposition.
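A minimal two-level ("charge qubit") sketch, with an assumed tunnel coupling rather than measured device parameters, gives a feel for how fast such a superposition evolves:

```python
# Minimal two-level sketch of the physics described above: an electron that can
# sit on either of two corner quantum dots, with tunnel coupling Delta between
# them. At zero detuning the electron coherently oscillates between the dots,
# i.e. it passes through superpositions of the two charge states.
# The coupling value is an illustrative assumption, not a device parameter.
h     = 6.62607015e-34        # Planck constant, J*s
Delta = 100e-6 * 1.602e-19    # assumed tunnel coupling, 100 ueV in joules

f_osc = Delta / h             # charge oscillation frequency at zero detuning
print(f"Oscillation frequency: {f_osc / 1e9:.1f} GHz")   # ~24 GHz
print(f"Oscillation period:    {1e12 / f_osc:.0f} ps")   # ~40 ps
# With the ~100 ps coherence time reported below, only two or three such
# oscillations fit before the superposition decays.
```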

The superposition state of the transistor qubit can be set by applying an electrical pulse with suitable parameters to the gate. As the experiments showed, the qubit remains in this state for about 100 picoseconds.
The gate of the transistor qubit can also be used to read out the quantum state in real time. To do this, the scientists connected the transistor to an LC oscillator circuit operating at 350 megahertz. Depending on whether the transistor was in a superposition or in one of its two "definite" states, the capacitance of the quantum dots at the edges of the nanowire changed by a small but measurable amount. This in turn shifted the resonant frequency of the circuit, which can be measured by conventional methods.
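A rough sketch of the readout principle (the 350 MHz figure comes from the article; the inductance, capacitance, and capacitance change below are illustrative assumptions):

```python
# Sketch of gate-based readout: the transistor gate is embedded in an LC
# resonator, and a tiny change dC in the dot capacitance shifts the resonant
# frequency f = 1 / (2*pi*sqrt(L*C)).
import math

f0 = 350e6                      # resonator frequency from the article, Hz
C  = 500e-15                    # assumed total resonator capacitance, 500 fF
L  = 1.0 / ((2 * math.pi * f0)**2 * C)   # inductance that gives f0 with this C

dC = 1e-15                      # assumed capacitance change of the qubit, 1 fF
f1 = 1.0 / (2 * math.pi * math.sqrt(L * (C + dC)))

print(f"L  = {L * 1e9:.0f} nH")
print(f"f0 = {f0 / 1e6:.2f} MHz, f1 = {f1 / 1e6:.2f} MHz, shift = {(f0 - f1) / 1e3:.0f} kHz")
# A ~1 fF change moves the resonance by a few hundred kHz, easy to detect
# with standard radio-frequency techniques.
```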
The laboratory experiment carried out by the joint team brought the time during which the CMOS transistor qubit can store quantum information up to 100 picoseconds. In the near future the scientists intend to raise this figure to 1 nanosecond, long enough to perform the basic operations used in quantum information processing.
Another question the experts managed to answer concerns the possibility of entangling two or more such qubits with one another. For CMOS transistors this is possible if they are placed at a minimal distance from each other, or on the same nanowire, so that electrostatic coupling arises between the electrons in neighboring transistors.
"If you perform an operation on the electrons in one transistor, it inevitably affects the quantum state of the second transistor, and vice versa," the authors of the experimental model explain. "Two qubits interacting in this way form the basis of the set of elements needed to build a functional quantum computer."
More details on the scientific concept and the experimental results can be found in the ACS journal Nano Letters.
Summing up, we can say that the Cambridge researchers have experimentally shown that quantum tunneling and coherence in a circuit built around a CMOS transistor can be put to good use if one looks at them from the standpoint of a quantum computing system. And if the experimental results can be consolidated into final practical solutions, the ceremonial send-off of silicon and its related technologies into well-earned retirement may once again be postponed indefinitely.
Source 1
Source 2
Dear readers, we are always glad to see you on the pages of our blog. We will continue to share the latest news, reviews, and other publications with you, and will do our best to make the time you spend with us worthwhile. And, of course, don't forget to subscribe to our columns.

