Scientists have made a major breakthrough in the development of large-scale quantum computers.
“Noise” remains the biggest problem for the development of quantum computers, and must be solved before they can be used widely and in the revolutionary ways that have been proposed. The new paper suggests a way of diagnosing such noise, in turn potentially opening up a way to control it and develop much better quantum computing systems.
Quantum computers could potentially change the way we use technology, by allowing for the solving of problems that are impossible using today’s computers. But, to do so, the noise within them must be weak enough for them to operate reliably.
The problem of noise remains central to creating working and useful quantum computers. In short, noise is the result of errors introduced as scientists manipulate the “qubits” that power a quantum computer, and it must be all but eliminated before any system can be reliably used.
The noise becomes more of a problem the more qubits there are, and the larger the system, meaning it is a particular barrier to building the kinds of big quantum computers that promise revolutionary new technology in the future.
To overcome that barrier, scientists need to understand how noise behaves across a quantum system. Until now, they have only been able to do so using very small devices.
But new research, published in Nature Physics, includes new algorithms that are able to work in much larger-scale quantum computing devices.
And it has already been successfully used on the IBM Quantum Experience, an online platform that allows researchers to make use of the company’s quantum computing systems.
The researchers found that the algorithm was able to successfully diagnose the noise in the system – finding issues that had not previously been detected.
If quantum computers are to be successful, they will need to be precisely calibrated to avoid noise, or errors. But they will also need to be able to correct those errors if they are to be relied on for important calculations.
To be able to do that, quantum scientists will need to know where errors are likely to be introduced. Knowing that will allow them to optimise their error correction for the specific problems at hand, rather than doing so in a generic way.
The new algorithm allows scientists to better estimate how many of those errors are likely to occur, and where they might arise – information that could be built into future devices to help them correct errors more effectively.
“This protocol opens myriad opportunities for novel diagnostic tools and practical applications,” the researchers write in the new paper, pointing out that it could be used in a variety of ways to make quantum computers better at handling the noise they generate.
“The results are the first implementation of provably rigorous and scalable diagnostic algorithms capable of being run on current quantum devices and beyond,” said Robin Harper, from the University of Sydney, who is lead author on the new paper.
‘Efficient learning of quantum noise’ is published today in Nature Physics.