Abstract

Could quantum machine learning someday run faster than classical machine learning? Over the past decade, the field of quantum machine learning (QML) has produced many proposals for attaining large quantum speedups for computationally intensive tasks in machine learning and data analysis. However, it was unclear whether these speedups could be realized in end-to-end applications, since there had been no rigorous way to analyze them. We remedy this issue by presenting a framework of classical computation that serves as an analogue to quantum linear algebra: in particular, we give a classical version of the quantum singular value transformation (QSVT) framework of Gilyén, Su, Low, and Wiebe. Within this framework, we observe that the space of QML algorithms splits into two classes, according to whether the input data is sparse or is given in a quantum-accessible data structure (which implicitly requires the input matrices to have low rank). The former class is BQP-complete, meaning that if it does not give exponential speedups, then exponential quantum speedups do not exist at all. The latter class, on the other hand, can be “dequantized”: our classical framework produces algorithms that perform the computations in this class at most polynomially slower than QSVT.
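The quantum-accessible data structure referred to above is commonly instantiated as a binary tree storing partial sums of squared entries of the input. The same tree that lets a quantum computer prepare the state |v⟩ efficiently also lets a classical computer draw an index i with probability vᵢ²/‖v‖², i.e., simulate a computational-basis measurement. A minimal sketch of this idea (the class name `SampleTree` and the power-of-two padding are our illustrative choices, not the thesis's):

```python
import random

class SampleTree:
    """Binary tree over the squared entries of a vector v.

    Each internal node stores the total squared mass of the leaves
    below it, so drawing an index i with probability v_i^2 / ||v||^2
    takes O(log n) time -- the classical analogue of measuring the
    state |v> in the computational basis.
    """

    def __init__(self, v):
        n = 1
        while n < len(v):  # pad to a power of two for a complete tree
            n *= 2
        self.n = n
        self.tree = [0.0] * (2 * n)
        for i, x in enumerate(v):
            self.tree[n + i] = x * x           # leaves: squared entries
        for i in range(n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def sample(self):
        """Walk from the root, branching in proportion to subtree mass."""
        i = 1
        while i < self.n:
            left = self.tree[2 * i]
            if random.random() * self.tree[i] < left:
                i = 2 * i
            else:
                i = 2 * i + 1
        return i - self.n
```

For v = (3, 4), the root stores ‖v‖² = 25 and sampling returns index 1 with probability 16/25; updating one entry likewise touches only O(log n) nodes.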

We give two forms of evidence for this claim. First, we prove that our framework has extensibility properties, showing that it can compute the same types of matrix arithmetic expressions that QSVT can. Second, with our framework, we dequantize eight QML algorithms appearing in the literature, including recommendation systems and low-rank semidefinite programming, which were previously believed to be among the best candidates for exponential quantum speedup. We conclude that these candidates do not give exponential speedups when run on classical data, radically limiting the space of settings in which we could hope for exponential speedups from QML.

The classical algorithms presented here center on one key idea: data structures that support efficient quantum algorithms for preparing an input quantum state also admit efficient classical algorithms for simulating measurements of that quantum state in the computational basis. As observed in the classical sketching literature, these measurements (ℓ₂² importance samples) can be used to approximate a product of matrices by a product of “sketched” matrices of far lower dimension. This simple idea turns out to be extremely extensible when the input matrices are sufficiently low-rank. Our work forms the beginning of a theory of quantum-inspired linear algebra, demonstrating that we can compute a large class of linear algebraic expressions in time independent of input dimension, provided that weak sampling assumptions on the input are satisfied.
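The sketching step described above can be illustrated concretely: to approximate A·B, draw column indices of A with probability proportional to their squared ℓ₂ norms (ℓ₂² importance sampling) and average the rescaled rank-one terms. The estimator is unbiased, and its error depends on the Frobenius norms rather than on the inner dimension. A minimal sketch (the function name `approx_matmul` and the sample count are our illustrative choices):

```python
import numpy as np

def approx_matmul(A, B, s, rng):
    """Approximate A @ B by ell_2^2 importance sampling.

    Pick column i of A (and matching row i of B) with probability
    p_i proportional to ||A[:, i]||^2, and average the rescaled
    rank-one outer products. The result is an unbiased estimator of
    A @ B whose expected squared Frobenius error is at most
    ||A||_F^2 ||B||_F^2 / s, independent of the inner dimension.
    """
    col_norms = np.sum(A * A, axis=0)
    p = col_norms / col_norms.sum()          # ell_2^2 sampling distribution
    idx = rng.choice(A.shape[1], size=s, p=p)
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in idx:
        # Rescaling by 1/(s * p_i) makes each term unbiased for A @ B.
        C += np.outer(A[:, i], B[i, :]) / (s * p[i])
    return C
```

With s well below the inner dimension's square, this already tracks A·B closely in Frobenius norm; the thesis's algorithms compose such sketched products to evaluate much richer matrix expressions.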

Details

Title
Quantum Machine Learning Without Any Quantum
Author
Tang, Ewin
Publication year
2023
Publisher
ProQuest Dissertations & Theses
ISBN
9798380327909
Source type
Dissertation or Thesis
Language of publication
English
ProQuest document ID
2864097401
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.