Quantum-enhanced machine learning refers to
quantum algorithms that solve tasks in machine learning, thereby improving and often expediting classical machine learning techniques. Such algorithms typically require one to encode the given classical data set into a quantum computer to make it accessible for quantum information processing. Subsequently, quantum information processing routines are applied and the result of the quantum computation is read out by measuring the quantum system. For example, the outcome of the measurement of a qubit reveals the result of a binary classification task. While many proposals of QML algorithms are still purely theoretical and require a full-scale universal
quantum computer to be tested, others have been implemented on small-scale or special purpose quantum devices.
=== Quantum associative memories and quantum pattern recognition ===
Early work on quantum associative memories was done by Dan Ventura and Tony Martinez and by Carlo A. Trugenberger in the late 1990s and early 2000s. Quantum associative memories (in their simplest realization) store patterns in a
unitary matrix U acting on the
Hilbert space of n qubits. Retrieval is realized by the
unitary evolution of a fixed initial state to a
quantum superposition of the desired patterns with a probability distribution peaked on the stored pattern most similar to the input. By its very quantum nature, the retrieval process is thus probabilistic. Because quantum associative memories are free from cross-talk, however, spurious memories are never generated. Correspondingly, they have a capacity superior to that of classical associative memories. The number of parameters in the unitary matrix U is O(pn), where p is the number of patterns and n the number of qubits. One can thus have efficient, spurious-memory-free quantum associative memories for any polynomial number of patterns. If the matrix U is encoded as a single operator (as opposed to a sequence of gates as in the circuit model), e.g. by an optical interferometer, the retrieval becomes efficient even for an exponential number of patterns.
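As a rough illustration of this probabilistic retrieval, the following toy simulation (plain NumPy, not a quantum device) weights stored patterns by their Hamming distance to an input. The cosine-based amplitude weighting is an illustrative assumption, similar in spirit to Trugenberger's probabilistic memories, rather than a faithful reproduction of any specific published scheme.

<syntaxhighlight lang="python">
import numpy as np

# Toy classical simulation of probabilistic associative recall.
# Stored patterns play the role of basis states in a superposition;
# the cos^2 Hamming-distance weighting is an illustrative assumption.

patterns = np.array([[0, 1, 1, 0],
                     [1, 1, 0, 0],
                     [1, 0, 1, 1]])
query = np.array([1, 1, 1, 0])

n = patterns.shape[1]
hamming = (patterns != query).sum(axis=1)   # distance to each stored pattern
amps = np.cos(np.pi * hamming / (2 * n))    # amplitude peaked at similar patterns
probs = amps**2 / np.sum(amps**2)           # Born-rule retrieval distribution

for p, pr in zip(patterns, probs):
    print(p, f"retrieval probability = {pr:.2f}")
</syntaxhighlight>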
=== Linear algebra simulation with quantum amplitudes ===
A number of quantum algorithms for machine learning are based on the idea of amplitude encoding, that is, associating the
amplitudes of a quantum state with the inputs and outputs of computations. Since a state of n qubits is described by 2^n complex amplitudes, this information encoding can allow for an exponentially compact representation. Intuitively, this corresponds to associating a discrete probability distribution over binary random variables with a classical vector. The goal of algorithms based on amplitude encoding is to formulate quantum algorithms whose
resources grow polynomially in the number of qubits n, which amounts to a logarithmic
time complexity in the number of amplitudes and thereby the dimension of the input. Many QML algorithms in this category are based on variations of the
quantum algorithm for linear systems of equations (colloquially called HHL, after the paper's authors) which, under specific conditions, performs a matrix inversion using an amount of physical resources growing only logarithmically in the dimensions of the matrix. One of these conditions is that a
Hamiltonian which entry-wise corresponds to the matrix can be simulated efficiently, which is known to be possible if the matrix is sparse or low rank. For reference, any known classical algorithm for
matrix inversion requires a number of operations that grows
more than quadratically in the dimension of the matrix (e.g., O(n^2.373)); classical algorithms are not, however, restricted to sparse matrices. Quantum matrix inversion can be applied to machine learning methods in which the training reduces to solving a
linear system of equations, for example in least-squares linear regression.

A crucial bottleneck of methods that simulate linear algebra computations with the amplitudes of quantum states is state preparation, which often requires one to initialize a quantum system in a state whose amplitudes reflect the features of the entire dataset. Although efficient methods for state preparation are known for specific cases, this step easily hides the complexity of the task.
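A minimal sketch of amplitude encoding, simulated with NumPy: a classical vector of length 2^n is normalized and interpreted as the amplitude vector of an n-qubit state. The example values are arbitrary.

<syntaxhighlight lang="python">
import numpy as np

# Amplitude encoding: a classical vector of length 2^n becomes the
# amplitudes of an n-qubit state. Here the state vector is simulated.

x = np.array([3.0, 1.0, 0.0, 2.0])   # classical input of length 2^2
state = x / np.linalg.norm(x)        # amplitudes of a 2-qubit state

# Measuring in the computational basis returns index i with probability
# |state[i]|^2, so reading out all amplitudes needs many repetitions;
# this is one reason readout and state preparation can hide the cost.
probs = np.abs(state)**2
print(probs)   # approximately [0.643, 0.071, 0.0, 0.286]
</syntaxhighlight>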
=== Variational quantum algorithms (VQAs) ===
In a variational quantum algorithm, a classical computer optimizes the parameters used to prepare a quantum state, while a quantum computer is used to do the actual state preparation and measurement. VQAs are considered promising candidates for
noisy intermediate-scale quantum computers. Variational quantum circuits (or parameterized quantum circuits) are a popular class of VQAs where the parameters are those used in a fixed
quantum circuit. Researchers have studied VQCs to solve optimization problems and to find the ground-state energy of complex quantum systems, problems that are difficult to solve on classical computers.
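The division of labor in a VQA can be sketched in a few lines: a simulated single-qubit circuit plays the role of the quantum device, and a classical gradient-descent loop updates the circuit parameter using the parameter-shift rule. The one-parameter Ry ansatz and Z-expectation cost are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

# Minimal VQA feedback loop. The "circuit" is a single-qubit Ry rotation
# and the cost is the expectation value <psi(theta)|Z|psi(theta)> = cos(theta).

def expectation(theta):
    # Stand-in for preparing the state and measuring on quantum hardware.
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state[0]**2 - state[1]**2   # <Z>

theta = 0.1                            # initial parameter
for step in range(100):
    # Parameter-shift rule: exact gradient from two circuit evaluations.
    grad = 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))
    theta -= 0.2 * grad                # classical gradient-descent update

print(theta, expectation(theta))       # converges toward theta = pi, <Z> = -1
</syntaxhighlight>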
=== Quantum binary classifier ===
Pattern recognition is one of the important tasks of machine learning, and binary classification is one of the basic tools for finding patterns. Binary classification is used in both supervised learning and unsupervised learning. In QML, classical bits are converted to qubits and mapped into a Hilbert space; complex-valued data are used in a quantum binary classifier to take advantage of the structure of the Hilbert space. By exploiting quantum-mechanical properties such as superposition, entanglement, and interference, a quantum binary classifier can produce accurate results in a short period of time.
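A minimal sketch of the idea, simulated classically: each input is amplitude-encoded into a single-qubit state and assigned to the class whose prototype state it overlaps with most, as a swap test would estimate on hardware. The prototypes and the encoding are illustrative assumptions, not a specific published classifier.

<syntaxhighlight lang="python">
import numpy as np

# Fidelity-based binary classifier (classically simulated).
# Inputs and prototypes live in the Hilbert space of one qubit.

def encode(x):
    return x / np.linalg.norm(x)       # amplitude encoding on one qubit

proto0 = encode(np.array([1.0, 0.2]))  # class-0 prototype (assumed known)
proto1 = encode(np.array([0.3, 1.0]))  # class-1 prototype

def classify(x):
    psi = encode(x)
    f0 = abs(np.dot(proto0, psi))**2   # fidelity with class-0 prototype
    f1 = abs(np.dot(proto1, psi))**2   # fidelity with class-1 prototype
    return 0 if f0 >= f1 else 1

print(classify(np.array([0.9, 0.1])))  # -> 0
print(classify(np.array([0.2, 1.1])))  # -> 1
</syntaxhighlight>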
=== Quantum machine learning algorithms based on Grover search ===
Another approach to improving classical machine learning with quantum information processing uses
amplitude amplification methods based on
Grover's search algorithm, which has been shown to solve unstructured search problems with a quadratic speedup compared to classical algorithms. These quantum routines can be employed for learning algorithms that translate into an unstructured search task, as can be done, for instance, in the case of the
k-medians and the
k-nearest neighbors algorithms. An example of amplitude amplification being used in a machine learning algorithm is minimization based on Grover's search, in which a subroutine uses Grover's search algorithm to find an element smaller than some previously defined element. This can be done with an oracle that determines whether or not a state with a corresponding element is less than the predefined one. Grover's algorithm can then find an element such that this condition is met. The minimization is initialized with a random element of the data set, and iteratively applies this subroutine to find the minimum element in the data set. This minimization is notably used in quantum k-medians, and it has a speedup of at least O(√(n/k)) compared to classical versions of k-medians, where n is the number of data points and k is the number of clusters. Amplitude amplification has also been used to speed up the performance of reinforcement learning agents in the projective simulation framework.
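The structure of this minimization loop can be sketched as follows. The call to Grover's search is replaced here by a classical stand-in that returns a random element below the current threshold; on a quantum computer this step is exactly where the quadratic speedup would come from.

<syntaxhighlight lang="python">
import random

# Sketch of the Grover-based minimization loop described above.

def grover_search_below(data, threshold):
    # Oracle marks elements x with x < threshold; Grover's search would
    # return one of them with quadratically fewer oracle calls.
    candidates = [x for x in data if x < threshold]
    return random.choice(candidates) if candidates else None

def quantum_minimum(data):
    best = random.choice(data)         # initialize with a random element
    while True:
        smaller = grover_search_below(data, best)
        if smaller is None:            # no marked element left: best is the minimum
            return best
        best = smaller

print(quantum_minimum([7, 3, 9, 1, 5]))   # -> 1
</syntaxhighlight>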
=== Quantum-enhanced reinforcement learning ===
In quantum-enhanced
reinforcement learning, a quantum agent interacts with a classical or quantum environment and occasionally receives rewards for its actions. In some situations, either because of the quantum processing capability of the agent or because of the possibility to probe the environment in superpositions, a quantum speedup may be achieved. Implementations of these kinds of protocols have been proposed for systems of trapped ions and superconducting circuits. A quantum speedup of the agent's internal decision-making time has been experimentally demonstrated in trapped ions, while a quantum speedup of the learning time in a fully coherent ('quantum') interaction between agent and environment has been experimentally realized in a photonic setup.
=== Quantum annealing ===
Quantum annealing is an optimization technique used to determine the local minima and maxima of a function over a given set of candidate solutions. It is a method of discretizing a function with many local minima or maxima in order to determine the observables of the function. The process can be distinguished from
Simulated annealing by the
Quantum tunneling process, by which particles tunnel through kinetic or potential barriers from a high state to a low state. Quantum annealing starts from a superposition of all possible states of a system, weighted equally. Then the time-dependent
Schrödinger equation guides the time evolution of the system, serving to affect the amplitude of each state as time increases. Eventually, the system can reach the ground state of the final Hamiltonian, which encodes the solution to the optimization problem.
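The kind of problem a quantum annealer is asked to solve can be written down concretely as an Ising energy minimization. The sketch below brute-forces a toy four-spin instance (with arbitrary couplings) that an annealer would instead solve by adiabatic evolution and quantum tunneling.

<syntaxhighlight lang="python">
import itertools

# Ising energy E(s) = sum_ij J_ij s_i s_j + sum_i h_i s_i over spins s_i = +/-1.
# The coupling and field values are arbitrary toy numbers.

J = {(0, 1): 1.0, (1, 2): -0.5, (2, 3): 1.0, (0, 3): 0.25}  # pairwise couplings
h = [0.1, -0.2, 0.0, 0.3]                                   # local fields

def energy(s):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()) + \
           sum(hi * si for hi, si in zip(h, s))

# Classical brute force over all 2^4 spin configurations; the annealer's
# job is to find this ground state for problems far too large to enumerate.
ground = min(itertools.product([-1, 1], repeat=4), key=energy)
print(ground, energy(ground))
</syntaxhighlight>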
=== Quantum sampling techniques ===
Sampling from high-dimensional probability distributions is at the core of a wide spectrum of computational techniques with important applications across science, engineering, and society. Examples include
deep learning,
probabilistic programming, and other machine learning and artificial intelligence applications. A computationally hard problem, which is key for some relevant machine learning tasks, is the estimation of averages over probabilistic models defined in terms of a
Boltzmann distribution. Sampling from generic probabilistic models is hard: algorithms relying heavily on sampling are expected to remain intractable no matter how large and powerful classical computing resources become. Even though quantum annealers, like those produced by D-Wave Systems, were designed for challenging combinatorial optimization problems, they have recently been recognized as potential candidates to speed up computations that rely on sampling by exploiting quantum effects. Some research groups have recently explored the use of quantum annealing hardware for training
Boltzmann machines and
deep neural networks. The standard approach to training Boltzmann machines relies on the computation of certain averages that can be estimated by standard
sampling techniques, such as
Markov chain Monte Carlo algorithms. Another possibility is to rely on a physical process, like quantum annealing, that naturally generates samples from a Boltzmann distribution. The objective is to find the optimal control parameters that best represent the empirical distribution of a given dataset. The D-Wave 2X system hosted at NASA Ames Research Center has recently been used for the learning of a special class of restricted Boltzmann machines that can serve as a building block for deep learning architectures.

Inspired by the success of Boltzmann machines based on the classical Boltzmann distribution, a machine learning approach based on the quantum Boltzmann distribution of a transverse-field Ising Hamiltonian was recently proposed. Due to the non-commutative nature of quantum mechanics, the training process of the quantum Boltzmann machine can become nontrivial. This problem was, to some extent, circumvented by introducing bounds on the quantum probabilities, allowing the authors to train the model efficiently by sampling. A specific type of quantum Boltzmann machine has been trained on the D-Wave 2X by using a learning rule analogous to that of classical Boltzmann machines.

Quantum annealing is not the only technology for sampling. In a prepare-and-measure scenario, a universal quantum computer prepares a thermal state, which is then sampled by measurements. This can reduce the time required to train a deep restricted Boltzmann machine, and provide a richer and more comprehensive framework for deep learning than classical computing. The same quantum methods also permit efficient training of full Boltzmann machines and multi-layer, fully connected models and do not have well-known classical counterparts. Relying on an efficient thermal state preparation protocol starting from an arbitrary state, quantum-enhanced Markov logic networks exploit the symmetries and the locality structure of the probabilistic graphical model generated by a first-order logic template.
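The role of sampling in Boltzmann machine training can be made concrete: the weight gradient is a difference between a data average and a model average, and the model average requires samples from the Boltzmann distribution. In the sketch below, those samples come from one classical Gibbs step (contrastive divergence); a quantum annealer would instead supply approximate Boltzmann samples directly. The tiny RBM and hyperparameters are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

# Training averages for a restricted Boltzmann machine:
#   dW = <v h>_data - <v h>_model
# The model term is the one that needs Boltzmann samples.

rng = np.random.default_rng(0)
n_v, n_h = 4, 3
W = 0.1 * rng.standard_normal((n_v, n_h))   # RBM weights (biases omitted)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

data = rng.integers(0, 2, size=(20, n_v)).astype(float)   # toy binary dataset

for epoch in range(100):
    # Positive phase: hidden activations clamped to the data.
    ph = sigmoid(data @ W)
    pos = data.T @ ph
    # Negative phase: one Gibbs step stands in for Boltzmann sampling;
    # this is the step a quantum annealer could replace.
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T)
    v = (rng.random(pv.shape) < pv).astype(float)
    neg = v.T @ sigmoid(v @ W)
    W += 0.01 * (pos - neg) / len(data)     # gradient ascent on the likelihood
</syntaxhighlight>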
=== Quantum convolutional neural networks ===
The quantum convolutional neural network (QCNN) was inspired by the advantages of CNNs and the power of QML. It is made using a combination of a variational quantum circuit (VQC) and a
deep neural network (DNN), fully utilizing the power of extremely parallel processing on a superposition of a quantum state with a finite number of qubits.
NISQ devices, without the negative impact of noise, which is possibly incorporated into the circuit parameters, and without the need for quantum error correction. The quantum circuit must effectively handle spatial information in order for the QCNN to function as a CNN. The convolution filter is the most basic technique for making use of spatial information. One or more quantum convolutional filters make up a QCNN, and each of these filters transforms input data using a quantum circuit that can be designed in an organized or randomized way. The three parts that make up a quantum convolutional filter are the encoder, the parameterized quantum circuit (PQC), and the measurement. The quantum convolutional filter can be seen as an extension of the filter in the traditional CNN because it is designed with trainable parameters. QCNNs take advantage of hierarchical structures: for each subsequent layer, the number of qubits from the preceding layer is decreased by a factor of two. For n input qubits, these structures have O(log(n)) layers, allowing for shallow circuit depth. Additionally, they are able to avoid the "barren plateau" problem, one of the most significant issues with PQC-based algorithms, ensuring trainability. Although the QCNN model does not include the corresponding quantum operation, the fundamental idea of the
pooling layer is also employed to ensure validity. In the QCNN architecture, the pooling layer is typically placed between succeeding convolutional layers. Its function is to shrink the representation's spatial size while preserving crucial features, which allows it to reduce the number of parameters, streamline network computation, and control overfitting. Such a process can be accomplished by applying full tomography to the state to reduce it all the way down to one qubit, which is then processed. The most frequently used unit type in the
pooling layer is max pooling, although there are other types as well. Similar to
conventional feed-forward neural networks, the last module is a fully connected layer with full connections to all activations in the preceding layer. Translational invariance, which requires identical blocks of parameterized quantum gates within a layer, is a distinctive feature of the QCNN architecture.
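The hierarchical qubit-halving structure described above can be sketched as a layer schedule. The parameter count per two-qubit block is an illustrative assumption, not a specific published ansatz.

<syntaxhighlight lang="python">
import math

# Structural sketch of a QCNN layer schedule: each convolution+pooling
# layer halves the number of active qubits, giving O(log n) layers.

def qcnn_schedule(n_qubits, params_per_block=3):
    layers = []
    while n_qubits > 1:
        n_blocks = n_qubits // 2                 # two-qubit convolution blocks
        layers.append({"qubits": n_qubits,
                       "parameters": n_blocks * params_per_block})
        n_qubits //= 2                           # pooling discards half the qubits
    return layers

schedule = qcnn_schedule(16)
print(len(schedule), "layers for 16 qubits; log2(16) =", int(math.log2(16)))
for i, layer in enumerate(schedule):
    print(f"layer {i}: {layer['qubits']} qubits, {layer['parameters']} parameters")
</syntaxhighlight>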
=== Fully quantum machine learning ===
In the most general case of QML, both the learning device and the system under study, as well as their interaction, are fully quantum. This section gives a few examples of results on this topic. One class of problem that can benefit from the fully quantum approach is that of 'learning' unknown quantum states, processes or measurements, in the sense that one can subsequently reproduce them on another quantum system. For example, one may wish to learn a measurement that discriminates between two coherent states, given not a classical description of the states to be discriminated, but instead a set of example quantum systems prepared in these states. The naive approach would be to first extract a classical description of the states and then implement an ideal discriminating measurement based on this information. This would only require classical learning. However, one can show that a fully quantum approach is strictly superior in this case. (This also relates to work on quantum pattern matching.) The problem of learning unitary transformations can be approached in a similar way. Going beyond the specific problem of learning states and transformations, the task of
clustering also admits a fully quantum version, wherein both the oracle which returns the distance between data points and the information processing device which runs the algorithm are quantum. Finally, a general framework spanning supervised, unsupervised and reinforcement learning in the fully quantum setting was introduced in.

=== Explainable quantum machine learning ===
The need for models that can be understood by humans arises in quantum machine learning in analogy to classical machine learning, and drives the research field of explainable quantum machine learning (XQML, in analogy to XAI/XML). These efforts are often also referred to as interpretable machine learning (IML, and by extension IQML). XQML/IQML can be considered as an alternative research direction to the search for a quantum advantage. For example, XQML has been used in the context of mobile malware detection and classification. Quantum
Shapley values have also been proposed to interpret gates within a circuit based on a game-theoretic approach. A quantum analogue of the classical explanation technique LIME has also been proposed, known as Q-LIME.
=== Quantum kernel methods and generative models ===
Quantum kernel methods have emerged as particularly promising approaches for near-term applications. Large-scale benchmarking studies encompassing over 20,000
trained models have provided comprehensive insights into the effectiveness of fidelity quantum kernels (FQKs) and projected quantum kernels (PQKs) across diverse classification and regression tasks. These studies have revealed universal patterns that guide effective quantum kernel method design.
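A minimal, classically simulated example of a fidelity quantum kernel: each feature is angle-encoded into its own qubit, so the state overlap factorizes across qubits and the kernel can be computed in closed form and passed to a classical SVM. The single-qubit Ry encoding and the toy dataset are illustrative assumptions; on hardware the overlaps would be estimated with fidelity or swap-test circuits.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.svm import SVC

# Fidelity quantum kernel k(x, y) = |<phi(x)|phi(y)>|^2 for the product
# feature map phi(x) = (x) Ry(x_i)|0> over the features x_i.

def fidelity_kernel(X, Y):
    # For product states the overlap factorizes over qubits:
    # <phi(x)|phi(y)> = prod_i cos((x_i - y_i) / 2).
    diff = X[:, None, :] - Y[None, :, :]
    return np.prod(np.cos(diff / 2), axis=2) ** 2

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # toy quadrant labels

clf = SVC(kernel="precomputed").fit(fidelity_kernel(X, X), y)
print(clf.score(fidelity_kernel(X, X), y))   # training accuracy
</syntaxhighlight>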
In the generative modeling domain, quantum generative adversarial networks and quantum circuit Born machines have shown particular promise for tabular data synthesis. Novel quantum generative models for tabular data have demonstrated performance improvements of 8.5% over leading classical models while using only 0.072% of the parameters, indicating significant potential for parameter-efficient learning.

== Classical learning applied to quantum problems ==