==Static case==
===FKS scheme===
The problem of optimal
static hashing was first solved in general by Fredman, Komlós and Szemerédi. In their 1984 paper, they detail a two-tiered hash table scheme in which each bucket of the (first-level) hash table corresponds to a separate second-level hash table. Keys are hashed twice—the first hash value maps to a certain bucket in the first-level hash table; the second hash value gives the position of that entry in that bucket's second-level hash table. The second-level table is guaranteed to be collision-free (i.e.
perfect hashing) upon construction. Consequently, the look-up cost is guaranteed to be
O(1) in the worst-case. In the static case, we are given a set with a total of n entries, each one with a unique key, ahead of time. Fredman, Komlós and Szemerédi pick a first-level hash table with s = 2(n-1) buckets. To construct, the entries are separated into buckets by the top-level hashing function. Then, for each bucket with k entries, a second-level table is allocated with k^2 slots, and its
hash function is selected at random from a
universal hash function set so that it is collision-free (i.e. a
perfect hash function) and stored alongside the hash table. If the randomly selected hash function creates a table with collisions, a new hash function is randomly selected until a collision-free table is obtained. Finally, with the collision-free hash function, the entries are hashed into the second-level table. The quadratic size (k^2 slots) of each second-level table ensures that randomly creating a table with collisions is infrequent and independent of the size of n, providing linear amortized construction time. Although each second-level table requires quadratic space, if the keys inserted into the first-level hash table are
uniformly distributed, the structure as a whole occupies expected O(n) space, since bucket sizes are small with high
probability. The first-level hash function is specifically chosen so that, for the given set of unique key values, the total space used by all the second-level hash tables is expected O(n). Fredman, Komlós and Szemerédi showed that given a
universal hashing family of hash functions, at least half of those functions have that property.
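The static construction can be sketched in Python as follows. This is an illustrative sketch, not the authors' code: the function names are invented, keys are assumed to be nonnegative integers, the universal family is the standard h(x) = ((ax + b) mod p) mod m construction, and the 3n retry threshold is one concrete linear bound under which at least half of the family succeeds.

```python
import random

_P = 2**61 - 1  # large prime modulus for the universal family (illustrative choice)

def _random_hash(m):
    """Draw h(x) = ((a*x + b) mod p) mod m at random from a universal family."""
    a = random.randrange(1, _P)
    b = random.randrange(_P)
    return lambda x: ((a * x + b) % _P) % m

def fks_build(keys):
    """Build a two-level FKS table for a static set of distinct integer keys."""
    n = len(keys)
    s = max(1, 2 * (n - 1))  # first-level size s = 2(n-1), as in the text
    while True:
        h1 = _random_hash(s)
        buckets = [[] for _ in range(s)]
        for x in keys:
            buckets[h1(x)].append(x)
        # Keep h1 only if total second-level space is linear; at least half of
        # the family satisfies this, so the expected number of retries is O(1).
        if sum(len(b) ** 2 for b in buckets) <= 3 * n:
            break
    tables = []
    for b in buckets:
        k = len(b)
        while True:  # retry until the k^2-slot subtable is collision-free
            h2 = _random_hash(max(1, k * k))
            slots = [None] * max(1, k * k)
            ok = True
            for x in b:
                i = h2(x)
                if slots[i] is not None:  # collision: discard h2 and redraw
                    ok = False
                    break
                slots[i] = x
            if ok:
                tables.append((h2, slots))
                break
    return h1, tables

def fks_lookup(table, x):
    """Two hash evaluations and one comparison: O(1) worst-case lookup."""
    h1, tables = table
    h2, slots = tables[h1(x)]
    return slots[h2(x)] == x
```

Because each bucket of size k gets k^2 slots, a random universal hash function is collision-free on the bucket with probability at least 1/2, so each inner retry loop also finishes after an expected constant number of draws.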
==Dynamic case==
Dietzfelbinger et al. present a dynamic dictionary algorithm in which, when a set of n items is incrementally added to the dictionary, membership queries always run in O(1) worst-case time, the total storage required is O(n) (linear), and insertions and deletions take O(1) expected amortized time (
amortized constant time). In the dynamic case, when a key is inserted into the hash table, if its entry in its respective subtable is occupied, then a collision is said to occur and the subtable is rebuilt based on its new total entry count and randomly selected hash function. Because the
load factor of each second-level table is kept low (at most 1/k), rebuilding is infrequent, and the
amortized expected cost of insertions is O(1). Similarly, the amortized expected cost of deletions is O(1). Additionally, the ultimate sizes of the top-level table and of the subtables are unknowable in the dynamic case. One method for maintaining expected O(n) space is to prompt a full reconstruction of the table when a sufficient number of insertions and deletions have occurred. By results due to Dietzfelbinger et al., as long as the total number of insertions or deletions exceeds the number of elements present at the time of the last construction, the amortized expected cost of insertion and deletion remains O(1) with full rehashing taken into consideration. The implementation of dynamic perfect hashing by Dietzfelbinger et al. uses these concepts, as well as
lazy deletion, and is shown in
pseudocode below.

==Pseudocode implementation==
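Before turning to the pseudocode, the insertion path described above can be sketched in Python. This is a simplified illustration with invented names, not the authors' algorithm verbatim: lazy deletion and the periodic full rebuild of the top level are omitted, and the subtable sizing rule (capacity doubling, cap^2 slots) is one concrete way to keep the load factor low.

```python
import random

_P = 2**61 - 1  # large prime modulus for the universal family (illustrative)

def _random_hash(m):
    """Draw h(x) = ((a*x + b) mod p) mod m at random from a universal family."""
    a = random.randrange(1, _P)
    b = random.randrange(_P)
    return lambda x: ((a * x + b) % _P) % m

class _Subtable:
    """Second-level table: up to cap keys hashed into cap**2 slots."""
    def __init__(self):
        self.elems = []   # live keys of this bucket
        self._rebuild()

    def _rebuild(self):
        """Resize to the current entry count and redraw hashes until perfect."""
        cap = max(2, 2 * len(self.elems))   # capacity doubles as the bucket grows
        while True:                         # expected O(1) retries
            self.h = _random_hash(cap * cap)
            self.slots = [None] * (cap * cap)
            if all(self._place(x) for x in self.elems):
                break
        self.cap = cap

    def _place(self, x):
        i = self.h(x)
        if self.slots[i] is None:
            self.slots[i] = x
            return True
        return False                        # collision

    def insert(self, x):
        if self.slots[self.h(x)] == x:
            return                          # already present
        self.elems.append(x)
        if len(self.elems) > self.cap or not self._place(x):
            self._rebuild()                 # over capacity or collision: rebuild

    def contains(self, x):
        return self.slots[self.h(x)] == x   # O(1) worst-case

class DynamicDict:
    def __init__(self, s=16):
        self.h1 = _random_hash(s)
        self.subtables = [_Subtable() for _ in range(s)]

    def insert(self, x):
        self.subtables[self.h1(x)].insert(x)

    def __contains__(self, x):
        return self.subtables[self.h1(x)].contains(x)
```

A collision triggers a rebuild of only the affected subtable, based on its new entry count and a freshly drawn hash function, which is what keeps the expected amortized insertion cost constant.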