
Grover's algorithm

In quantum computing, Grover's algorithm, also known as the quantum search algorithm, is a quantum algorithm for unstructured search that finds with high probability the unique input to a black box function that produces a particular output value, using just O(\sqrt{N}) evaluations of the function, where N is the size of the function's domain. It was devised by Lov Grover in 1996.

Applications and limitations
Grover's algorithm, along with variants like amplitude amplification, can be used to speed up a broad range of algorithms. In particular, algorithms for NP-complete problems that contain exhaustive search as a subroutine can be sped up by Grover's algorithm. These algorithms do not require that the input be given in the form of an oracle, since Grover's algorithm is applied with an explicit function, e.g. the function checking that a set of bits satisfies a 3SAT instance. However, it is unclear whether Grover's algorithm could speed up the best practical algorithms for these problems. Grover's algorithm can also give provable speedups for black-box problems in quantum query complexity, including element distinctness and the collision problem (solved with the Brassard–Høyer–Tapp algorithm). In these types of problems, one treats the oracle function f as a database, and the goal is to use the quantum query to this function as few times as possible.

Cryptography

Grover's algorithm essentially solves the task of function inversion. Roughly speaking, if we have a function y = f(x) that can be evaluated on a quantum computer, Grover's algorithm allows us to calculate x when given y. Consequently, Grover's algorithm gives broad asymptotic speed-ups to many kinds of brute-force attacks on symmetric-key cryptography, including collision attacks and pre-image attacks. However, it may not be the most efficient algorithm for a given task; for example, Pollard's rho algorithm is able to find a collision in SHA-2 more efficiently than Grover's algorithm.

Limitations

Grover's original paper described the algorithm as a database search algorithm, and this description is still common. The database in this analogy is a table of all of the function's outputs, indexed by the corresponding input. However, this database is not represented explicitly. Instead, an oracle is invoked to evaluate an item by its index. Reading a full database item by item and converting it into such a representation may take a lot longer than Grover's search. To account for such effects, Grover's algorithm can be viewed as solving an equation or satisfying a constraint. In such applications, the oracle is a way to check the constraint and is not related to the search algorithm. This separation usually prevents algorithmic optimizations, whereas conventional search algorithms often rely on such optimizations and avoid exhaustive search. Fortunately, fast Grover's-oracle implementations are possible for many constraint satisfaction and optimization problems. The major barrier to instantiating a speedup from Grover's algorithm is that the quadratic speedup achieved is too modest to overcome the large overhead of near-term quantum computers. However, later generations of fault-tolerant quantum computers with better hardware performance may be able to realize these speedups for practical instances of data.
Problem description
As input for Grover's algorithm, suppose we have a function f\colon \{0,1,\ldots,N-1\} \to \{0,1\}. In the "unstructured database" analogy, the domain represents indices to a database, and f(x)=1 if and only if the data that x points to satisfies the search criterion. We additionally assume that only one index satisfies f(x)=1, and we call this index \omega. Our goal is to identify \omega.

We can access f with a subroutine (sometimes called an oracle) in the form of a unitary operator U_\omega that acts as follows:

\begin{cases} U_\omega |x\rang = -|x\rang & \text{for } x = \omega \text{, that is, } f(x) = 1, \\ U_\omega |x\rang = |x\rang & \text{for } x \ne \omega \text{, that is, } f(x) = 0. \end{cases}

This uses the N-dimensional state space \mathcal{H}, which is supplied by a register with n = \lceil \log_{2} N \rceil qubits. This is often written as U_\omega|x\rang = (-1)^{f(x)}|x\rang.

Grover's algorithm outputs \omega with probability at least 1/2 using O(\sqrt{N}) applications of U_\omega. This probability can be made arbitrarily large by running Grover's algorithm multiple times. If one runs Grover's algorithm until \omega is found, the expected number of applications is still O(\sqrt{N}), since it will only be run twice on average.

Alternative oracle definition

This section compares the above oracle U_\omega with an oracle U_f. U_\omega is different from the standard quantum oracle for a function f. This standard oracle, denoted here as U_f, uses an ancillary qubit system: it applies an inversion (NOT gate) to the ancillary qubit, conditioned on the value of f(x) computed from the main system:

\begin{cases} U_f |x\rang |y\rang = |x\rang |\neg y\rang & \text{for } x = \omega \text{, that is, } f(x) = 1, \\ U_f |x\rang |y\rang = |x\rang |y\rang & \text{for } x \ne \omega \text{, that is, } f(x) = 0, \end{cases}

or briefly, U_f |x\rang |y\rang = |x\rang |y \oplus f(x)\rang. These oracles are typically realized using uncomputation.
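The action of U_\omega is easy to simulate classically for small N. The sketch below is illustrative only (the state is stored as a plain list of real amplitudes, and the helper name is an assumption, not part of any standard library): it applies the phase flip to a uniform superposition.

```python
import math

def apply_oracle(state, omega):
    """Phase oracle U_omega: negate the amplitude of the marked index omega."""
    return [-a if x == omega else a for x, a in enumerate(state)]

N, omega = 8, 5
s = [1 / math.sqrt(N)] * N          # uniform superposition |s>
after = apply_oracle(s, omega)      # equals (-1)^{f(x)} |x> componentwise
```

Note that the oracle only changes a sign; on its own it does not change any measurement probability, which is why the diffusion step below is needed.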
If we are given U_f as our oracle, then we can also implement U_\omega, since U_\omega is U_f when the ancillary qubit is in the state |-\rang = \frac1{\sqrt2}\big(|0\rang - |1\rang\big) = H|1\rang: \begin{align} U_f \big( |x\rang \otimes |-\rang \big) &= \frac1{\sqrt2} \left( U_f |x\rang |0\rang - U_f |x\rang |1\rang \right)\\ &= \frac1{\sqrt2} \left(|x\rang |0 \oplus f(x)\rang - |x\rang |1 \oplus f(x)\rang \right)\\ &= \begin{cases} \frac1{\sqrt2} \left(-|x\rang |0\rang + |x\rang |1\rang\right) & \text{if } f(x) = 1, \\ \frac1{\sqrt2} \left( |x\rang |0\rang - |x\rang |1\rang \right) & \text{if } f(x) = 0 \end{cases} \\ &= (U_\omega |x\rang) \otimes |-\rang \end{align} So, Grover's algorithm can be run regardless of which oracle is given. If U_f is given, then we must maintain an additional qubit in the state |-\rang and apply U_f in place of U_\omega.
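This identity can be checked numerically. In the hedged sketch below (all names are illustrative), a two-register state is stored as a list of amplitude pairs state[x][y], and applying U_f with the ancilla in |-\rang reproduces the phase flip of U_\omega on the main register.

```python
import math

def apply_Uf(state, f):
    """Standard oracle U_f: |x>|y> -> |x>|y XOR f(x)>. state[x][y] holds amplitudes."""
    return [[state[x][y ^ f(x)] for y in (0, 1)] for x in range(len(state))]

N, omega = 4, 2
f = lambda x: 1 if x == omega else 0
minus = [1 / math.sqrt(2), -1 / math.sqrt(2)]   # |-> = (|0> - |1>)/sqrt(2)

# Input |omega> tensor |->, written as amplitudes state[x][y].
state = [[minus[y] if x == omega else 0.0 for y in (0, 1)] for x in range(N)]
out = apply_Uf(state, f)
# Expected output: (U_omega |omega>) tensor |-> = (-|omega>) tensor |->,
# i.e. the ancilla is unchanged and the main register picks up a minus sign.
```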
Algorithm
[Figure: quantum circuit representation of Grover's algorithm]

The steps of Grover's algorithm are given as follows:

• Initialize the system to the uniform superposition over all states, |s\rangle = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} |x\rangle.
• Perform the following "Grover iteration" r(N) times:
  • Apply the operator U_\omega.
  • Apply the Grover diffusion operator U_s = 2 \left|s\right\rangle\!\! \left\langle s\right| - I.
• Measure the resulting quantum state in the computational basis.

For the correctly chosen value of r, the output will be |\omega\rang with probability approaching 1 for N ≫ 1. Analysis shows that this eventual value for r(N) satisfies r(N) \leq \Big\lceil\frac{\pi}{4}\sqrt{N}\Big\rceil.

Implementing the steps for this algorithm can be done using a number of gates linear in the number of qubits. Thus, the gate complexity of this algorithm is O(\log(N)r(N)), or O(\log(N)) per iteration.
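The steps above can be sketched as a classical simulation of the state vector (an illustration, not a gate-level implementation; the diffusion step uses the "inversion about the mean" form of U_s, and the function name is an assumption):

```python
import math

def grover(N, omega):
    state = [1 / math.sqrt(N)] * N                # uniform superposition |s>
    r = math.floor(math.pi / 4 * math.sqrt(N))    # number of Grover iterations
    for _ in range(r):
        state[omega] = -state[omega]              # oracle U_omega: phase flip
        mean = sum(state) / N                     # diffusion U_s = 2|s><s| - I
        state = [2 * mean - a for a in state]     # reflects amplitudes about the mean
    return [a * a for a in state]                 # measurement probabilities

probs = grover(1024, omega=42)                    # probs[42] ends up close to 1
```

Measuring the final state then returns the marked index with probability close to 1, as the analysis in the following sections shows.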
Geometric proof
There is a geometric interpretation of Grover's algorithm, following from the observation that the quantum state of Grover's algorithm stays in a two-dimensional subspace after each step. Consider the plane spanned by |s\rang and |\omega\rang; equivalently, the plane spanned by |\omega\rang and the perpendicular ket \textstyle |s'\rang = \frac{1}{\sqrt{N - 1}}\sum_{x \neq \omega} |x\rang. Grover's algorithm begins with the initial ket |s\rang, which lies in the subspace. The operator U_{\omega} is a reflection at the hyperplane orthogonal to |\omega\rang for vectors in the plane spanned by |s'\rang and |\omega\rang, i.e. it acts as a reflection across |s'\rang. This can be seen by writing U_\omega in the form of a Householder reflection: U_\omega = I - 2|\omega\rangle\langle \omega|. The operator U_s = 2 |s\rangle \langle s| - I is a reflection through |s\rang. Both operators U_s and U_{\omega} take states in the plane spanned by |s'\rang and |\omega\rang to states in the plane. Therefore, Grover's algorithm stays in this plane for the entire algorithm. It is straightforward to check that the operator U_s U_{\omega} of each Grover iteration step rotates the state vector by an angle of \theta = 2\arcsin\tfrac{1}{\sqrt{N}} . So, with enough iterations, one can rotate from the initial state |s\rang to the desired output state |\omega\rang. The initial ket is close to the state orthogonal to |\omega\rang: \lang s'|s\rang = \sqrt{\frac{N-1}{N}}. In geometric terms, the angle \theta/2 between |s\rang and |s'\rang is given by \sin \frac{\theta}{2} = \frac{1}{\sqrt{N}}. We need to stop when the state vector passes close to |\omega\rang; after this, subsequent iterations rotate the state vector away from |\omega\rang, reducing the probability of obtaining the correct answer. The exact probability of measuring the correct answer is \sin^2\left( \Big( r + \frac{1}{2} \Big)\theta\right), where r is the (integer) number of Grover iterations. 
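These closed-form expressions can be checked numerically. The quick sanity check below (variable names are illustrative) picks r as the integer just below \pi/(4\arcsin(1/\sqrt{N})) and also shows that overshooting the optimal stopping point lowers the success probability:

```python
import math

N = 1 << 20
theta = 2 * math.asin(1 / math.sqrt(N))       # rotation angle per Grover iteration
r = math.floor(math.pi / (2 * theta))         # stop just before passing |omega>
p = math.sin((r + 0.5) * theta) ** 2          # probability of measuring omega

# 200 iterations past the optimum: the state rotates away from |omega> again.
p_late = math.sin((r + 200 + 0.5) * theta) ** 2
```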
The earliest time that we get a near-optimal measurement is therefore r \approx \pi \sqrt{N} / 4.
Algebraic proof
To complete the algebraic analysis, we need to find out what happens when we repeatedly apply U_s U_\omega. A natural way to do this is by eigenvalue analysis of a matrix. Notice that during the entire computation, the state of the algorithm is a linear combination of s and \omega. We can write the action of U_s and U_\omega in the space spanned by \{|s\rang, |\omega\rang\} as: \begin{align} U_s : a |\omega \rang + b |s \rang &\mapsto [|\omega \rang \, | s \rang] \begin{bmatrix} -1 & 0 \\ 2/\sqrt{N} & 1 \end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}. \\ U_\omega : a |\omega \rang + b |s \rang &\mapsto [|\omega \rang \, | s \rang] \begin{bmatrix} -1 & -2/\sqrt{N} \\ 0 & 1 \end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}. \end{align} So in the basis \{ |\omega\rang, |s\rang \} (which is neither orthogonal nor a basis of the whole space) the action U_sU_\omega of applying U_\omega followed by U_s is given by the matrix U_sU_\omega = \begin{bmatrix} -1 & 0 \\ 2/\sqrt{N} & 1 \end{bmatrix} \begin{bmatrix} -1 & -2/\sqrt{N} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2/\sqrt{N} \\ -2/\sqrt{N} & 1-4/N \end{bmatrix}. This matrix happens to have a very convenient Jordan form. If we define t = \arcsin(1/\sqrt{N}), it is U_sU_\omega = M \begin{bmatrix} e^{2it} & 0 \\ 0 & e^{-2it}\end{bmatrix} M^{-1} where M = \begin{bmatrix}-i & i \\ e^{it} & e^{-it} \end{bmatrix}. It follows that the r-th power of the matrix (corresponding to r iterations) is (U_sU_\omega)^r = M \begin{bmatrix} e^{2rit} & 0 \\ 0 & e^{-2rit}\end{bmatrix} M^{-1}. Using this form, we can use trigonometric identities to compute the probability, mentioned in the previous section, of observing ω after r iterations: \left|\begin{bmatrix}\lang\omega|\omega\rang & \lang\omega|s\rang\end{bmatrix}(U_sU_\omega)^r \begin{bmatrix}0 \\ 1\end{bmatrix} \right|^2 = \sin^2\left( (2r+1)t\right).
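The closed form above can be verified by multiplying out the 2×2 matrix directly (a numerical check under the stated basis convention; plain Python lists stand in for the matrix):

```python
import math

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

N = 64
t = math.asin(1 / math.sqrt(N))
# U_s U_omega in the (non-orthogonal) basis {|omega>, |s>}:
G = [[1, 2 / math.sqrt(N)], [-2 / math.sqrt(N), 1 - 4 / N]]

r = 6
Gr = [[1, 0], [0, 1]]
for _ in range(r):
    Gr = matmul(G, Gr)                            # Gr = (U_s U_omega)^r

# Amplitude of omega after r iterations starting from |s> (column [0, 1]^T),
# using <omega|omega> = 1 and <omega|s> = 1/sqrt(N):
amp = Gr[0][1] + Gr[1][1] / math.sqrt(N)
predicted = math.sin((2 * r + 1) * t)             # closed form from the text
```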
Alternatively, one might reasonably imagine that a near-optimal time to distinguish would be when the angles 2rt and −2rt are as far apart as possible, which corresponds to 2rt \approx \pi/2, or r = \pi/4t = \pi/4\arcsin(1/\sqrt{N}) \approx \pi\sqrt{N}/4. Then the system is in state [|\omega \rang \, | s \rang] (U_sU_\omega)^r \begin{bmatrix}0\\1\end{bmatrix} \approx [|\omega \rang \, | s \rang] M \begin{bmatrix} i & 0 \\ 0 & -i\end{bmatrix} M^{-1} \begin{bmatrix}0\\1\end{bmatrix} = | \omega \rang \frac{1}{\cos(t)} - |s \rang \frac{\sin(t)}{\cos(t)}. A short calculation now shows that the observation yields the correct answer ω with error O\left (\frac{1}{N} \right).
Extensions and variants
Multiple matching entries

If, instead of 1 matching entry, there are k matching entries, the same algorithm works, but the number of iterations must be \frac{\pi}{4} \sqrt{\frac{N}{k}} instead of \frac{\pi}{4} \sqrt{N}.

There are several ways to handle the case in which k is unknown. A simple solution performs optimally up to a constant factor: run Grover's algorithm repeatedly with increasingly small guesses for the number of matches, taking the guess to be N/2^t on attempt t (i.e., N, N/2, N/4, and so on) until a matching entry is found. With sufficiently high probability, a marked entry will be found by iteration t = \log_2(N/k) + c for some constant c. Thus, the total number of iterations taken is at most \frac{\pi}{4} \Big(1 + \sqrt{2} + \sqrt{4} + \cdots + \sqrt{\frac{N}{k2^c}}\Big) = O\big(\sqrt{N/k}\big). Another approach when k is unknown is to determine it beforehand via the quantum counting algorithm.

If k = N/2 (or if the single-marked-entry algorithm is run with N = 2), the algorithm provides no amplification. If k > N/2, increasing k begins to increase the number of iterations necessary to obtain a solution. On the other hand, if k \geq N/2, a classical run of the checking oracle on a single random choice of input will more likely than not give a correct solution. A version of this algorithm is used to solve the collision problem.

Quantum partial search

A modification of Grover's algorithm called quantum partial search was described by Grover and Radhakrishnan in 2004. In partial search, one is not interested in finding the exact address of the target item, only the first few digits of the address. Equivalently, we can think of "chunking" the search space into blocks, and then asking "in which block is the target item?". In many applications, such a search yields enough information if the target address contains the information wanted. For instance, to use the example given by L. K.
Grover, if one has a list of students organized by class rank, we may only be interested in whether a student is in the lower 25%, the 25–50%, the 50–75%, or the 75–100% quartile.

To describe partial search, we consider a database separated into K blocks, each of size b = N/K. The partial search problem is easier. Consider the approach we would take classically: we pick one block at random, and then perform a normal search through the rest of the blocks (in set-theory language, the complement). If we do not find the target, then we know it is in the block we did not search. The average number of iterations drops from N/2 to (N-b)/2. Grover's algorithm requires \frac{\pi}{4}\sqrt{N} iterations. Partial search will be faster by a numerical factor that depends on the number of blocks K.

Partial search uses j_1 global iterations and j_2 local iterations. The global Grover operator is designated G_1 and the local Grover operator is designated G_2. The global Grover operator acts on the blocks. Essentially, it is given as follows:

• Perform j_1 standard Grover iterations on the entire database.
• Perform j_2 local Grover iterations. A local Grover iteration is a direct sum of Grover iterations over each block.
• Perform one standard Grover iteration.

The optimal values of j_1 and j_2 are discussed in the paper by Grover and Radhakrishnan. One might also wonder what happens if one applies successive partial searches at different levels of "resolution". This idea was studied in detail by Vladimir Korepin and Xu, who called it binary quantum search. They proved that it is not in fact any faster than performing a single partial search.
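The adjusted iteration count \frac{\pi}{4}\sqrt{N/k} for k matching entries can likewise be checked by simulation (an illustrative sketch; the marked indices and function name are arbitrary assumptions):

```python
import math

def grover_multi(N, marked):
    """Grover search with k = len(marked) marked entries, simulated classically."""
    k = len(marked)
    marked = set(marked)
    state = [1 / math.sqrt(N)] * N                    # uniform superposition
    r = math.floor(math.pi / 4 * math.sqrt(N / k))    # pi/4 * sqrt(N/k) iterations
    for _ in range(r):
        state = [-a if x in marked else a             # oracle: flip marked signs
                 for x, a in enumerate(state)]
        mean = sum(state) / N
        state = [2 * mean - a for a in state]         # diffusion about the mean
    return sum(state[x] ** 2 for x in marked)         # total success probability

p = grover_multi(4096, marked=[7, 99, 1000, 2048])    # k = 4 marked entries
```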
Optimality
Grover's algorithm is optimal up to sub-constant factors. That is, any algorithm that accesses the database only by using the operator U_\omega must apply U_\omega at least a 1 - o(1) fraction as many times as Grover's algorithm. The extension of Grover's algorithm to k matching entries, which uses \frac{\pi}{4}\sqrt{N/k} iterations, is also optimal.