Although stochastic computing was a historical failure, it may still remain relevant for solving certain problems. To understand when it remains relevant, it is useful to compare stochastic computing with more traditional methods of digital computing.
== Strengths ==
Suppose we wish to multiply two numbers, each with n bits of precision. Using the typical long-multiplication method, we need to perform n^2 operations. With stochastic computing, we can AND together any number of bits and the expected value will always be correct. (With a small number of samples, however, the variance renders the actual result highly inaccurate.) Moreover, the underlying operations in a digital multiplier are full adders, whereas a stochastic computer requires only an AND gate. A digital multiplier would also naively require 2n input wires, whereas a stochastic multiplier needs only two. (If the digital multiplier serialized its output, however, it too would require only two input wires.) Additionally, stochastic computing is robust against noise: if a few bits in a stream are flipped, those errors have no significant impact on the solution. Furthermore, stochastic computing elements can tolerate skew in the arrival times of their inputs; circuits work properly even when the inputs are misaligned temporally. As a result, stochastic systems can be designed to work with inexpensive locally generated clocks instead of a global clock and an expensive clock-distribution network. Finally, stochastic computing provides an estimate of the solution that grows more accurate as the bit stream is extended; in particular, it provides a rough estimate very rapidly. This property is usually referred to as
progressive precision; that is, the precision of stochastic numbers (bit streams) increases as the computation proceeds. It is as if the most significant bits of the number arrive before its least significant bits, unlike conventional arithmetic circuits, where the most significant bits usually arrive last. In some iterative systems, the partial solutions obtained through progressive precision can provide faster feedback than traditional computing methods do, leading to faster convergence.
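The strengths above can be sketched in a few lines of Python. This is a minimal software simulation under illustrative assumptions (the names `bitstream` and `decode` are ours, and real hardware would use LFSR-based generators and comparators rather than a software RNG): unipolar streams encode a value as the density of 1s, a single AND gate per bit pair performs multiplication, and longer streams give progressively sharper estimates.

```python
import random

def bitstream(p, n, rng):
    # Unipolar encoding: each bit is 1 with probability p (illustrative helper).
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(bits):
    # Recover the encoded value as the fraction of 1s in the stream.
    return sum(bits) / len(bits)

rng = random.Random(42)   # fixed seed so the run is reproducible
n = 100_000
a = bitstream(0.5, n, rng)
b = bitstream(0.25, n, rng)

# Multiplication is a single AND gate applied bitwise to the two streams.
product = [x & y for x, y in zip(a, b)]

# Progressive precision: the estimate sharpens as the stream is extended,
# converging toward 0.5 * 0.25 = 0.125.
for k in (100, 1_000, n):
    print(k, decode(product[:k]))

# Noise tolerance: flipping a handful of bits moves the estimate by at most
# (number of flips) / n, which is negligible here.
noisy = product[:]
for i in range(10):
    noisy[i] ^= 1
print(abs(decode(noisy) - decode(product)))   # at most 10/n = 0.0001
```

Note the tradeoff foreshadowed above: the AND gate itself is far cheaper than a full adder, but the accuracy of `decode` improves only as the square root of the stream length.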
== Weaknesses ==
Stochastic computing is, by its very nature, random. When we examine a random bit stream and try to reconstruct the underlying value, the effective precision can be measured by the variance of our sample. In the example above, the digital multiplier computes a number to 2n bits of accuracy, so the precision is 2^{-2n}. If we are using a random bit stream to estimate a number and want the standard deviation of our estimate to be at most 2^{-2n}, we would need O(2^{4n}) samples. This represents an exponential increase in work. In certain applications, however, the progressive-precision property of stochastic computing can be exploited to compensate for this exponential loss. Second, stochastic computing requires a method of generating random biased bit streams. In practice, these streams are generated with pseudo-random number generators. Unfortunately, generating (pseudo-)random bits is fairly costly compared with the cost of, say, a full adder, so the gate-level advantage of stochastic computing is typically lost. Third, the analysis of stochastic computing assumes that the bit streams are independent (uncorrelated). If this assumption does not hold, stochastic computing can fail dramatically. For instance, if we try to compute p^2 by multiplying a bit stream for p by itself, the process fails: since a_i \land a_i = a_i, the stochastic computation yields p rather than p^2, and these agree only when p = 0 or p = 1. In systems with feedback, the problem of
decorrelation can manifest in more complicated ways. Systems of stochastic processors are prone to latching, where feedback between different components can reach a deadlocked state. A great deal of effort must be spent decorrelating the system in an attempt to remediate latching. Fourth, although some digital functions have very simple stochastic counterparts (such as the translation between multiplication and the AND gate), many do not. Trying to express these functions stochastically may cause various pathologies. For instance, stochastic decoding requires computing the function f(p,q) \rightarrow pq/(pq + (1-p)(1-q)). No single-bit operation can compute this function; the usual solution involves producing correlated output bits, which, as we have seen above, can cause a host of problems. Other functions (such as the averaging operator f(p,q) \rightarrow (p+q)/2) require either stream decimation or inflation. Tradeoffs between precision and memory can be challenging.

== Stochastic decoding ==