# Characteristics of quantum computers

*The power of a quantum computer is measured in qubits, the basic unit of measurement in a quantum computer.* (source)

I facepalm every time I read a phrase like this. It does me no good: my eyesight is starting to fail, and soon I will have to pay Meklon a visit.

I think it is time to somewhat systematize the basic parameters of a quantum computer. There are several:

- Number of qubits
- Coherence retention time (decoherence time)
- Error rate
- Processor architecture
- Price, availability, conditions, depreciation time, programming tools, etc.

### Number of qubits

Everything is obvious here: the more, the better. The catch is that qubits cost money, so ideally you buy exactly as many as your task requires. A developer of exclusive slot machines needs one qubit per machine (for generating a random bit). To brute-force RSA-2048, you need at least 2048 qubits.
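To make the one-qubit slot machine concrete, here is a minimal state-vector sketch (my own illustration, not from the article; numpy assumed available): a Hadamard gate turns |0⟩ into an equal superposition, and measuring that state yields a fair random bit.

```python
import numpy as np

# Hadamard gate: takes |0> to an equal superposition of |0> and |1>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

zero = np.array([1.0, 0.0])   # the qubit starts in |0>
state = H @ zero              # apply H
probs = np.abs(state) ** 2    # Born rule: measurement probabilities

print(probs)  # [0.5 0.5] -- a fair coin for the slot machine
```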

The best-known quantum algorithms are named after Grover and Shor. Grover's algorithm lets you "crack" hashes. To break Bitcoin, you need a computer with at least 256 qubits on board (you can quibble over the details of Bitcoin's hashing, but let's settle on this round number). Shor's algorithm factorizes numbers. To factorize a number n binary digits long, at least n qubits are needed.
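A back-of-the-envelope sketch of these resource counts (my own illustrative numbers, not from the article): Grover's search over an n-bit space takes roughly (π/4)·2^(n/2) iterations, so even with 256 qubits the speedup is quadratic, not exponential; the Shor figure below is just the article's "at least n" lower bound.

```python
import math

def grover_iterations(n_bits):
    """Optimal Grover iteration count for searching a space of 2**n_bits items."""
    return math.floor(math.pi / 4 * math.sqrt(2 ** n_bits))

def shor_min_qubits(n_bits):
    """The article's lower bound: at least n qubits to factor an n-bit number."""
    return n_bits

print(grover_iterations(256) > 2 ** 127)  # True: still ~2^128 iterations of work
print(shor_min_qubits(2048))              # 2048
```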

Current maximum: 50 qubits (already 72?). And in fact, 50 qubits is a limit of sorts: the limit of simulating a quantum computer. In theory, we can simulate any number of qubits on classical machines. In practice, adding one qubit to a simulation requires doubling the classical hardware. Add the rumors about qubit counts doubling every year, and ask yourself: how do we debug algorithms for 256 / 512 / 1024 / 2048 qubits? There is no simulator for that, and you cannot set a breakpoint on a quantum processor.
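The "doubling" claim is easy to make concrete. A full state-vector simulation stores 2^n complex amplitudes; at 16 bytes each (complex128), every extra qubit doubles the memory. A sketch:

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a full state vector: 2**n amplitudes, complex128 = 16 bytes each."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 50):
    print(f"{n} qubits: {statevector_bytes(n) / 2**30:.0f} GiB")
# 30 qubits fit in ordinary RAM; 50 qubits already need ~16 PiB
```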

### Coherence retention time (decoherence time)

Coherence and decoherence are not the same thing. I like to compare coherence with memory regeneration (refresh). A RAM stick has billions of cells; each holds a charge, a zero or a one. This charge has a very interesting property: it leaks away. An initially "one" cell becomes a cell at 0.99, then 0.98, and so on. Correspondingly, the zeros accumulate 0.01, 0.02, 0.03... The charge has to be refreshed, "regenerated": everything below one half is reset to zero, everything else is topped back up to one.
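The refresh analogy in code, as a toy model and nothing more: every cell below one half snaps back to zero, the rest are restored to one.

```python
def regenerate(cells):
    """DRAM-style refresh: below 0.5 resets to 0, everything else is restored to 1."""
    return [0.0 if c < 0.5 else 1.0 for c in cells]

leaked = [0.98, 0.03, 0.51, 0.49]
print(regenerate(leaked))  # [1.0, 0.0, 1.0, 0.0]
```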

Quantum processors cannot be regenerated. Consequently, all calculations get a single run, lasting until the first "leaked" qubit. The time before the first "leak" is called the decoherence time. Coherence is the state in which the qubits have not yet "leaked". A somewhat more grown-up explanation can be found here.

Decoherence is related to the number of qubits: the more qubits, the harder it is to maintain coherence. On the other hand, with a large supply of qubits, some of them can be used to correct the errors caused by decoherence. It follows that the qubit count alone decides nothing. You can double the number of qubits and then spend 90% of them on fighting decoherence.

This is roughly where the concept of a logical qubit arises. Roughly speaking, if you have a 100-qubit processor but 40 of its qubits are devoted to fixing decoherence, you are left with 60 logical qubits: the ones your algorithm actually runs on. The concept of logical qubits is still rather theoretical; I personally have not heard of practical implementations.

### Errors and their correction

Another scourge of quantum processors. Invert a qubit, and with a probability of 2% the operation ends in an error. Entangle 2 qubits, and the error probability reaches 8%. Now take a 256-bit number, hash it with SHA-256, count the number of operations, and calculate the probability of ALL of those operations completing correctly.
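The arithmetic the paragraph asks for, assuming independent errors per gate (an idealization, and the gate count of 1000 is my own round number): the chance that all N operations succeed is (1 − p)^N, and it collapses fast.

```python
def success_probability(p_error, n_ops):
    """P(all n_ops gates succeed), assuming independent errors per gate."""
    return (1 - p_error) ** n_ops

# At 2% per gate, a circuit of 1000 gates almost never runs cleanly:
print(success_probability(0.02, 1000))  # ~1.7e-9
```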

Mathematicians offer a solution: error correction. There are algorithms for it. Implementing a single entanglement of 2 logical qubits requires 100,000 physical qubits. Bitcoin's doom is coming any day now.
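Taking that 100,000-physical-per-logical figure at face value (it is a ballpark, not a spec), the scale of a fault-tolerant attack machine is easy to estimate:

```python
def physical_qubits(logical, overhead=100_000):
    """Physical qubits needed at ~100,000 physical per logical qubit."""
    return logical * overhead

print(physical_qubits(2048))  # 204800000 -- over two hundred million, just for RSA-2048
```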

### Processor architecture

Strictly speaking, there are no quantum computers. There are only quantum processors. What do I need RAM for when the runtime is limited to milliseconds? I program in Q#, but it is a high-level language: allocate yourself 15 qubits and do what you want with them. Feel like it? Entangle the first qubit with the tenth. Feel like it? Entangle the first six.

There is no such freedom on a real processor. Ask to entangle qubit 1 with qubit 15, and the compiler will generate 26 additional operations, if you are lucky. If you are not, it will generate a hundred. The point is that a qubit can only be entangled with its neighbors, and I have never seen more than 6 neighbors per qubit. In principle, compilers that optimize quantum programs do exist, but so far they are rather theoretical.
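One plausible way to arrive at that "26 additional operations", assuming a linear chain of qubits (my assumption; real topologies vary): to entangle qubit 1 with qubit 15 you SWAP one of them 13 positions over, apply the gate, and SWAP it 13 positions back.

```python
def swap_overhead(a, b):
    """Extra SWAPs on a linear chain: walk one qubit next to the other, then back."""
    hops = abs(b - a) - 1
    return 2 * hops

print(swap_overhead(0, 14))  # 26
print(swap_overhead(0, 1))   # 0 -- neighbors entangle directly
```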

Each processor has its own instruction set, and the connections between qubits differ. In an ideal world we have arbitrary Rx, Ry, Rz and their combinations, plus free entanglement of even ten qubits at once, plus Swap: look at the operators in Quirk. In real life we have several pairs of qubits, where the entanglement CNOT(q[0], q[1]) costs one operation while CNOT(q[1], q[0]) already costs 7. And meanwhile the coherence is melting away...
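The direction asymmetry has a textbook explanation: a reversed CNOT equals the original CNOT conjugated by Hadamards on both qubits, so reversing costs extra one-qubit gates (the exact count of 7 will depend on the hardware's native gate set). A numpy check of the identity:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)

# CNOT, control = first qubit, target = second (basis |00>, |01>, |10>, |11>)
CNOT_01 = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
# CNOT with control and target swapped
CNOT_10 = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])

# Reversing a CNOT = sandwiching it between Hadamards on both qubits:
print(np.allclose(HH @ CNOT_01 @ HH, CNOT_10))  # True
```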

### Price, availability, conditions, depreciation time, programming tools ...

Prices are not advertised, availability to the average person is near zero, depreciation time has not been measured in practice, and programming tools are only just emerging. The documentation lives on arxiv.org.

### So what information should experts demand when a new quantum computer is released?

Besides the list above, I like the suggestions from PerlPower and Alter2:

If only every article about a new quantum computer began with two characteristics: the number of simultaneously entangled qubits, and the retention time of the qubits.

Or better still: the runtime of the simplest benchmark, for example finding the prime factors of the number 91.