Peter Drummond, Margaret Reid, Alex Dellios, Bogdan Opanchuk, Simon Kiesewetter, and Run-Yan Teh
Thursday 27 April 1pm AEST
Click here to watch the recording on the AIP YouTube channel.
Abstract: There have been experimental claims of computational advantage on quantum computers. These raise theoretical questions of validation for the random-number generation tasks being solved. How does one verify the output? Are the answers obtained even correct, and how can one test this in practice?
Brute-force computational verification is not possible: no classical computer is large or fast enough for this, short of taking billions of years. Even computing the distributions is exponentially hard, not just in time and memory, but also because standard floating-point arithmetic lacks the required precision.
For Gaussian boson sampling tasks, we show that simulations in quantum phase space can solve this, by generating any diagnostic that is measurable. This uses an FFT binning algorithm to obtain computable statistics, with up to 16,000 qubits in large test cases, far larger than in any experiment.
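The general idea of FFT binning can be illustrated with a much simpler classical analogue, not the phase-space algorithm of the talk itself: for independent detectors with per-mode click probabilities p_m (values chosen arbitrarily here for illustration), the distribution of the total click count is a long convolution, but it can be obtained in one step by multiplying per-mode characteristic functions on a phase grid and applying an inverse FFT.

```python
import numpy as np

# Illustrative sketch only: grouped-count distribution for M independent
# detectors, each clicking with probability p[m]. The characteristic
# function of the total count N is the product of per-mode factors;
# an inverse FFT over the phase grid recovers P(N) for all N at once.
rng = np.random.default_rng(1)
M = 100
p = rng.uniform(0.05, 0.3, size=M)            # hypothetical click probabilities

theta = 2 * np.pi * np.arange(M + 1) / (M + 1)  # phase grid for N = 0..M
char = np.prod(1 - p[:, None] + p[:, None] * np.exp(-1j * theta), axis=0)
probs = np.fft.ifft(char).real                 # P(N), N = 0..M

# Sanity checks: normalization and mean count equal to sum of p[m].
assert np.isclose(probs.sum(), 1.0)
assert np.isclose(probs @ np.arange(M + 1), p.sum())
```

The same binning trick, applied to correlated quantum phase-space samples rather than independent detectors, is what makes grouped count probabilities computable at scale.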
The result is that recent experimental data from China and the USA differs significantly from theory, with discrepancies of over 100 standard deviations for some measured output statistics. Possible explanations are explored, but this is a nontrivial physics problem, and we do not yet have a complete explanation.
This does not disprove the computational advantage claims: these are very hard tasks, and we do not directly generate the required numbers. However, the outputs do not survive the chi-squared tests that one would normally use to test the validity of random numbers, as employed in numerous cryptography applications.
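A chi-squared goodness-of-fit test of this kind can be sketched in a few lines. This is a generic illustration with made-up bin probabilities, not the specific statistics or data of the talk: observed bin counts are compared with model expectations, and a statistic far above the critical value for the given degrees of freedom rejects the model.

```python
import numpy as np

# Hypothetical model: four outcome bins with assumed probabilities.
model_p = np.array([0.5, 0.3, 0.15, 0.05])
rng = np.random.default_rng(7)

# Draw samples from the model itself, so the test should pass.
samples = rng.choice(4, size=10_000, p=model_p)
observed = np.bincount(samples, minlength=4)
expected = model_p * samples.size

# Pearson chi-squared statistic: sum of (O - E)^2 / E over bins.
chi2 = np.sum((observed - expected) ** 2 / expected)

# With 3 degrees of freedom, the 5% critical value is about 7.8;
# a statistic vastly above that would reject the model distribution.
print(chi2)
```

Output statistics hundreds of standard deviations from theory fail such a test by an enormous margin, which is the sense in which the sampled numbers do not validate.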
Finally, we point out how similar techniques may be useful in future for testing other quantum network and computer designs. The principle is to use scalable methods that generate probabilities, rather than naive algorithms on classical machines, which are now totally impractical.