A search algorithm that uses hashing consists of two parts. The first part is computing a hash function which transforms the search key into an array index; the second is a collision-resolution process for keys that hash to the same index. Ideally, no two search keys would hash to the same array index, but this cannot be guaranteed for arbitrary unseen data.
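The hashing step above can be sketched as follows; a minimal example, in which the table size of 16 and the use of Python's built-in `hash()` are illustrative assumptions rather than recommendations:

```python
# Map an arbitrary search key to an index in a fixed-size bucket array.
# Distinct keys may still map to the same index, which is a collision.

TABLE_SIZE = 16  # assumed, illustrative table size

def bucket_index(key):
    """Transform a search key into an array index in [0, TABLE_SIZE)."""
    return hash(key) % TABLE_SIZE
```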
===Separate chaining===
In separate chaining, a linked list of key–value pairs is built for each array index. Collided items are chained together in a single linked list, which can be traversed to access the item with a unique search key. Collision resolution through chaining with a linked list is a common way to implement hash tables. Let T be the hash table and x a node; the operations are as follows:

 Chained-Hash-Insert(T, x)
     insert x at the head of linked list T[h(x.key)]
 
 Chained-Hash-Search(T, k)
     search for an element with key k in linked list T[h(k)]
 
 Chained-Hash-Delete(T, x)
     delete x from the linked list T[h(x.key)]

If the elements are comparable, either numerically or lexically, and are inserted into the list in sorted order, unsuccessful searches terminate faster.
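The three chained-hash operations can be sketched as a small table class; a minimal, illustrative implementation (the class and default size of 8 are assumptions, not a specific library's API):

```python
# Separate chaining: each slot of the bucket array holds the head of a
# singly linked list of key-value nodes.

class Node:
    def __init__(self, key, value, nxt=None):
        self.key, self.value, self.next = key, value, nxt

class ChainedHashTable:
    def __init__(self, size=8):
        self.table = [None] * size          # T: array of list heads

    def _h(self, key):                      # h(k): key -> array index
        return hash(key) % len(self.table)

    def insert(self, key, value):           # Chained-Hash-Insert
        i = self._h(key)
        self.table[i] = Node(key, value, self.table[i])  # insert at head

    def search(self, key):                  # Chained-Hash-Search
        node = self.table[self._h(key)]
        while node is not None:
            if node.key == key:
                return node.value
            node = node.next
        return None

    def delete(self, key):                  # Chained-Hash-Delete
        i = self._h(key)
        node, prev = self.table[i], None
        while node is not None:
            if node.key == key:
                if prev is None:
                    self.table[i] = node.next
                else:
                    prev.next = node.next
                return True
            prev, node = node, node.next
        return False
```

Insertion at the head keeps the insert operation O(1); search and delete traverse only the chain for the key's bucket.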
====Other data structures for separate chaining====
If the keys are ordered, it can be efficient to use "self-organizing" concepts such as a self-balancing binary search tree, through which the theoretical worst case can be brought down to O(\log{n}), although this introduces additional complexity. In dynamic perfect hashing, two-level hash tables are used to reduce the look-up complexity to a guaranteed O(1) in the worst case. In this technique, a bucket of k entries is organized as a perfect hash table with k^2 slots, providing constant worst-case lookup time and low amortized insertion time. One study found array-based separate chaining to be 97% more performant than the standard linked-list method under heavy load. Techniques such as using a fusion tree for each bucket also result in constant time for all operations with high probability.
====Caching and locality of reference====
A linked-list implementation of separate chaining may not be cache-conscious because of poor spatial locality (locality of reference): the nodes of the linked list are scattered across memory, so traversing the list during insertion and search may entail CPU cache inefficiencies. In cache-conscious variants of collision resolution through separate chaining, a dynamic array, found to be more cache-friendly, is used where a linked list or self-balancing binary search tree would usually be deployed, since the contiguous allocation pattern of the array can be exploited by hardware-cache prefetchers, such as the translation lookaside buffer, resulting in reduced access time and memory consumption.
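The array-based variant can be sketched by storing each bucket as a dynamic array of key–value pairs instead of a linked list, so a probe scans contiguous memory; a minimal sketch (the class name and table size are assumptions):

```python
# Cache-conscious separate chaining: each bucket is a dynamic array
# (a Python list) of (key, value) pairs scanned sequentially.

class ArrayChainedTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _h(self, key):
        return hash(key) % len(self.buckets)

    def insert(self, key, value):
        bucket = self.buckets[self._h(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                     # overwrite an existing key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))          # append keeps entries contiguous

    def search(self, key):
        for k, v in self.buckets[self._h(key)]:  # contiguous scan
            if k == key:
                return v
        return None
```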
===Open addressing===
Open addressing is another collision resolution technique, in which every entry record is stored in the bucket array itself and hash resolution is performed through probing. When a new entry has to be inserted, the buckets are examined, starting with the hashed-to slot and proceeding in some probe sequence, until an unoccupied slot is found. When searching for an entry, the buckets are scanned in the same sequence, until either the target record is found or an unused array slot is found, which indicates an unsuccessful search. Well-known probe sequences include:
• Linear probing, in which the interval between probes is fixed (usually 1).
• Quadratic probing, in which the interval between probes is increased by adding the successive outputs of a quadratic polynomial to the value given by the original hash computation.
• Double hashing, in which the interval between probes is computed by a secondary hash function.
The performance of open addressing may be slower than that of separate chaining, since the probe sequences lengthen as the load factor \alpha approaches 1. Open addressing is well suited to fixed memory allocation. In coalesced hashing, a collision is resolved by identifying the largest-indexed empty slot in the hash table and inserting the colliding value there; the bucket to which the value originally hashed is then linked to the inserted node's slot.
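The insert-and-search discipline above can be sketched with linear probing (interval 1); a minimal illustration in which the table size is fixed and deletion is omitted, both assumptions for brevity:

```python
# Open addressing with linear probing: every record lives in the
# bucket array itself; probing starts at the hashed-to slot.

class LinearProbingTable:
    EMPTY = object()                          # sentinel for unused slots

    def __init__(self, size=8):
        self.keys = [self.EMPTY] * size
        self.values = [None] * size

    def _probe(self, key):
        """Yield slot indices in the probe sequence, starting at h(key)."""
        start = hash(key) % len(self.keys)
        for i in range(len(self.keys)):
            yield (start + i) % len(self.keys)

    def insert(self, key, value):
        for j in self._probe(key):
            if self.keys[j] is self.EMPTY or self.keys[j] == key:
                self.keys[j], self.values[j] = key, value
                return
        raise RuntimeError("table full; a real table would resize")

    def search(self, key):
        for j in self._probe(key):
            if self.keys[j] is self.EMPTY:    # unused slot: unsuccessful search
                return None
            if self.keys[j] == key:
                return self.values[j]
        return None
```

Quadratic probing or double hashing would change only the `_probe` generator, not the insert and search logic.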
===Cuckoo hashing===
Cuckoo hashing is a form of open addressing in which worst-case lookup complexity is guaranteed to be O(1) and insertions take constant amortized time. Collisions are resolved by maintaining two hash tables, each with its own hash function: on a collision, the new item replaces the occupant of its slot, and the displaced element is moved into the other hash table. The process continues until every key has its own spot in the tables; if the procedure enters an infinite loop, which is detected by a threshold loop counter, both hash tables are rehashed with new hash functions and the procedure continues.
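The displacement loop can be sketched as follows; a toy illustration in which the two hash functions, the table size of 11, and the kick threshold of 32 are all assumptions, and the rehash step is left as a stub:

```python
# Cuckoo hashing sketch: two tables, each with its own hash function.
# Insertion evicts the current occupant and pushes it to the other table.

MAX_KICKS = 32   # assumed threshold loop counter
SIZE = 11        # assumed table size

def h1(key):                       # toy hash function for table 1
    return hash(key) % SIZE

def h2(key):                       # toy hash function for table 2
    return (hash(key) // SIZE) % SIZE

class CuckooTable:
    def __init__(self):
        self.t1 = [None] * SIZE    # first table, indexed by h1
        self.t2 = [None] * SIZE    # second table, indexed by h2

    def search(self, key):
        return self.t1[h1(key)] == key or self.t2[h2(key)] == key

    def insert(self, key):
        for _ in range(MAX_KICKS):
            i = h1(key)
            key, self.t1[i] = self.t1[i], key   # place key, evict occupant
            if key is None:
                return
            j = h2(key)
            key, self.t2[j] = self.t2[j], key   # retry evicted key in t2
            if key is None:
                return
        # A full implementation would pick new hash functions and rehash.
        raise RuntimeError("cycle detected: rehash with new hash functions")
```

Lookup is worst-case O(1) because a key can live in only two places: `t1[h1(key)]` or `t2[h2(key)]`.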
===Hopscotch hashing===
Hopscotch hashing is an open addressing based algorithm which combines elements of cuckoo hashing, linear probing and chaining through the notion of a neighbourhood of buckets: the buckets that follow any given occupied bucket, also called a "virtual" bucket. The algorithm is designed to deliver better performance when the load factor of the hash table grows beyond 90%; it also provides high throughput in concurrent settings, making it well suited for implementing a resizable concurrent hash table. The neighbourhood property of hopscotch hashing guarantees that the cost of finding a desired item in any bucket within the neighbourhood is very close to the cost of finding it in the bucket itself; the algorithm attempts to place an item into its neighbourhood, with a possible cost involved in displacing other items. Each bucket within the hash table includes additional "hop-information": an H-bit bit array indicating the relative distance, within H − 1 entries, of the items that were originally hashed into the current virtual bucket. Let k be the key to be inserted and Bk the bucket to which it hashes; the insertion procedure distinguishes several cases so that the neighbourhood property of the algorithm is preserved:
• If Bk is empty, the element is inserted and the leftmost bit of the bitmap is set to 1.
• If Bk is not empty, linear probing is used to find an empty slot in the table; the bucket's bitmap is updated, followed by the insertion.
• If the empty slot is not within the range of the neighbourhood, i.e. H − 1, items are successively swapped and the hop-information bit array of each affected bucket is updated, in accordance with the neighbourhood invariant.
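The neighbourhood lookup driven by the hop-information bitmap can be sketched as follows; a lookup-only illustration, in which the neighbourhood size H = 4, the table size, and the global arrays are assumptions, and the insertion/displacement logic is omitted:

```python
# Hopscotch lookup sketch: bit i of hop_info[home] says whether slot
# (home + i) holds an item that originally hashed to bucket `home`.

H = 4                           # assumed neighbourhood size
SIZE = 16                       # assumed table size
table = [None] * SIZE           # stored (key, value) pairs
hop_info = [0] * SIZE           # H-bit hop-information bitmap per bucket

def lookup(key):
    home = hash(key) % SIZE
    bits = hop_info[home]
    for i in range(H):                       # inspect only H candidate slots
        if bits & (1 << i):
            k, v = table[(home + i) % SIZE]
            if k == key:
                return v
    return None                              # not in the neighbourhood
```

At most H slots are inspected per lookup, which is what keeps the cost of finding an item in the neighbourhood close to the cost of finding it in its home bucket.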
===Robin Hood hashing===
Robin Hood hashing is an open addressing based collision resolution algorithm; collisions are resolved by favouring the displacement of the element that is farthest from its "home location" (the bucket to which it originally hashed), i.e. the one with the longest probe sequence length (PSL). It is named after Robin Hood, a mythical heroic outlaw who stole from the rich to give to the poor. Although Robin Hood hashing does not change the theoretical search cost, it significantly reduces the variance of the distribution of items across the buckets, i.e. it counters the formation of long runs in the hash table. Each node in a hash table that uses Robin Hood hashing is augmented to store an extra PSL value. Let x be the key to be inserted, x{.}\text{psl} the (incremental) PSL of x, T the hash table and j the current index; the insertion procedure is as follows:
• If x{.}\text{psl}\ \le\ T[j]{.}\text{psl}: the iteration moves on to the next bucket without attempting an external probe.
• If x{.}\text{psl}\ >\ T[j]{.}\text{psl}: swap x with T[j], call the displaced item x', and continue the probe from the (j+1)th bucket to insert x'; repeat the procedure until every element is inserted.

==Dynamic resizing==