== Searching Tables ==
This upper bound of Yao's, like the lower bound described above, also requires m to be sufficiently large as a function of n. Remarkably, the upper bound uses only one additional table entry compared with the setting for which the lower bound applies.
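The table-searching setting can be illustrated with the classical sorted-table baseline, in which a membership query reads cells by binary search and each cell read is charged as one probe. The sketch below is a generic illustration of that baseline and of probe counting in the cell-probe model; it is not Yao's improved construction, and the function names are chosen here for illustration:

```python
# Illustrative sketch: n keys from a universe {0, ..., m-1} are stored in a
# table of n cells, and a membership query is charged one "probe" per cell read.

def build_table(keys):
    """Store the key set as a sorted table (the classical baseline scheme)."""
    return sorted(keys)

def member(table, x):
    """Binary-search membership query; returns (found, probes used).

    On a sorted table of n cells, every query finishes within
    floor(log2 n) + 1 probes.
    """
    probes = 0
    lo, hi = 0, len(table) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1          # reading table[mid] costs one cell probe
        if table[mid] == x:
            return True, probes
        elif table[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, probes
```

In the cell-probe model only the probe count is charged, so this baseline answers any membership query in O(log n) probes; the result discussed above concerns how far this can be improved when the table is granted one extra cell and m is large relative to n.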
== Dynamic Partial Sums ==
The dynamic partial sums problem defines two operations: Update(k, v), which sets the value in an array A at index k to v, and Sum(k), which returns the sum of the values in A at indices 0 through k. A naïve implementation, storing A directly, takes O(1) time for Update and O(n) time for Sum. Instead, the values can be stored as the leaves of a balanced binary tree whose inner nodes each store the sum over the subtree rooted at that node. In this structure, Update requires O(\log n) time to update each node on the leaf-to-root path, and Sum similarly requires O(\log n) time to traverse the tree from the query leaf to the root, summing the values of all subtrees to the left of the query index. Improving on a result of Fredman and Saks,
Mihai Pătraşcu used the cell-probe model and an information transfer argument to show that the partial sums problem requires \Omega\left(\log n\right) time per operation in the worst case (that is, the worse of the query and update times must be at least this large), assuming b=\Omega(\log n) bits per word. Chakrabarti and Regev proved that the approximate nearest neighbor search problem on the
Hamming cube using polynomial storage and d^{O(1)} word size requires a worst-case query time of \Omega\left(\frac{\log\log d}{\log\log\log d}\right). This proof used the cell-probe model and information-theoretic techniques from
communication complexity.

== The Cell-Probe Model versus Random Access Machines ==