=== Binomial tree algorithms ===
Regarding parallel algorithms, there are two main models of parallel computation: the parallel random access machine (PRAM), an extension of the RAM with shared memory between processing units, and the bulk synchronous parallel (BSP) computer, which additionally takes communication and synchronization into account. Since the two models have different implications for the time complexity, an algorithm is shown for each.
==== PRAM-algorithm ====
This algorithm represents a widespread method to handle inputs where p is a power of two. The reverse procedure is often used for broadcasting elements.
for k \gets 0 to \lceil\log_2 p\rceil - 1 do
    for i \gets 0 to p - 1 do in parallel
        if p_i is active then
            if bit k of i is set then
                set p_i to inactive
            else if i + 2^k < p then
                x_i \gets x_i \oplus^\star x_{i+2^k}

The binary operator for vectors is defined element-wise, such that

\begin{pmatrix} e_i^0 \\ \vdots \\ e_i^{m-1} \end{pmatrix} \oplus^\star \begin{pmatrix} e_j^0 \\ \vdots \\ e_j^{m-1} \end{pmatrix} = \begin{pmatrix} e_i^0 \oplus e_j^0 \\ \vdots \\ e_i^{m-1} \oplus e_j^{m-1} \end{pmatrix}.

The algorithm further assumes that x_i = v_i for all i at the beginning, that p is a power of two, and that the processing units p_0, p_1, \dots, p_{p-1} are used. In every iteration, half of the active processing units become inactive and do not contribute to further computations. The figure shows a visualization of the algorithm using addition as the operator. Vertical lines represent the processing units on which the computation of the elements on that line takes place. The eight input elements are located at the bottom, and every animation step corresponds to one parallel step in the execution of the algorithm. An active processor p_i evaluates the given operator on the element x_i it is currently holding and on x_j, where j is the smallest index greater than i such that p_j becomes inactive in the current step. x_i and x_j are not necessarily elements of the input set X, as the fields are overwritten and reused for previously evaluated expressions. To coordinate the roles of the processing units in each step without causing additional communication between them, the algorithm exploits the fact that the units are indexed with the numbers from 0 to p - 1. Each unit inspects bit k of its own index: if the bit is set, the unit becomes inactive; otherwise, it combines its own element with the element of the unit whose index differs from its own only in bit k. The underlying communication pattern of the algorithm is a binomial tree, hence the name of the algorithm. Only p_0 holds the result in the end, therefore it is the root processor.
For an Allreduce operation, the result has to be distributed to all units, which can be done by appending a broadcast from p_0. Furthermore, the number p of processors is restricted to be a power of two. This restriction can be lifted by padding the number of processors to the next power of two, although there are also algorithms more tailored to this use case.
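To make the control flow concrete, the following Python sketch simulates the parallel steps of the algorithm sequentially. It is only an illustration under simplifying assumptions: the list x stands in for the shared memory, the function name is made up for this example, and the inner loop runs serially instead of in parallel.

    import math

    def binomial_tree_reduce(values, op):
        # Simulated PRAM binomial-tree reduction; len(values) must be a power of two.
        x = list(values)                 # the shared memory, x_i held by unit p_i
        p = len(x)
        active = [True] * p              # activity flags of the processing units
        for k in range(math.ceil(math.log2(p))):
            for i in range(p):           # "do in parallel", simulated serially
                if active[i]:
                    if i & (1 << k):     # bit k of i is set: become inactive
                        active[i] = False
                    elif i + 2**k < p:   # combine with the partner's element
                        x[i] = op(x[i], x[i + 2**k])
        return x[0]                      # only p_0 holds the result

For instance, binomial_tree_reduce([1, 2, 3, 4, 5, 6, 7, 8], lambda a, b: a + b) returns 36 after \lceil\log_2 8\rceil = 3 parallel steps; with vectors as elements, op would apply \oplus element-wise as defined above.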
===== Runtime analysis =====
The main loop is executed \lceil\log_2 p\rceil times, and the time needed for the part done in parallel is in \mathcal{O}(m), as a processing unit either combines two vectors or becomes inactive. Thus the parallel time for the PRAM is T(p, m) = \mathcal{O}(\log(p) \cdot m). The strategy for handling read and write conflicts can be chosen as restrictively as exclusive read and exclusive write (EREW). Since the sequential algorithm applies the operator (p - 1) \cdot m times, its runtime is T_\text{seq} = \mathcal{O}(p \cdot m), and the speedup of the algorithm is S(p, m) \in \mathcal{O}\left(\frac{T_\text{seq}}{T(p, m)}\right) = \mathcal{O}\left(\frac{p}{\log(p)}\right). Therefore the efficiency is E(p, m) \in \mathcal{O}\left(\frac{S(p, m)}{p}\right) = \mathcal{O}\left(\frac{1}{\log(p)}\right). The efficiency suffers because half of the active processing units become inactive after each step, so only \frac{p}{2^i} units are active in step i.
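As a worked example with assumed numbers: for p = 1024 processing units the main loop runs \lceil\log_2 1024\rceil = 10 times, so T(1024, m) = \mathcal{O}(10 \cdot m), the speedup is in \mathcal{O}\left(\frac{1024}{10}\right), and the efficiency is only in \mathcal{O}\left(\frac{1}{10}\right); in step i = 5, for instance, just \frac{1024}{2^5} = 32 of the 1024 units are still active.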
==== Distributed memory algorithm ====
In contrast to the PRAM-algorithm, in the distributed memory model memory is not shared between processing units, so data has to be exchanged explicitly between them, as can be seen in the following algorithm.
for k \gets 0 to \lceil\log_2 p\rceil - 1 do
    for i \gets 0 to p - 1 do in parallel
        if p_i is active then
            if bit k of i is set then
                send x_i to p_{i-2^k}
                set p_i to inactive
            else if i + 2^k < p then
                receive x_{i+2^k}
                x_i \gets x_i \oplus^\star x_{i+2^k}

The only difference between the distributed algorithm and the PRAM version is the inclusion of explicit communication primitives; the operating principle stays the same.
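As an illustration of how these explicit primitives can look in practice, here is a sketch using the Python MPI bindings mpi4py. It is a minimal sketch, not a production implementation: the function name is made up for this example, p is assumed to be a power of two, and the pickle-based send/recv calls are used for brevity.

    import math
    from mpi4py import MPI

    def binomial_tree_reduce_mpi(value, op):
        # Each rank passes in its local value; rank 0 returns the combined
        # result, every other rank returns None once it has become inactive.
        comm = MPI.COMM_WORLD
        i, p = comm.Get_rank(), comm.Get_size()
        x = value
        for k in range(math.ceil(math.log2(p))):
            if i & (1 << k):                        # bit k set: send, go inactive
                comm.send(x, dest=i - 2**k)
                return None
            elif i + 2**k < p:                      # bit k unset: receive, combine
                x = op(x, comm.recv(source=i + 2**k))
        return x                                    # reached only by rank 0

Such a script would be launched with, e.g., mpiexec -n 8; in practice one would of course use the reduce operation MPI itself provides, which typically applies this or a similar scheme internally.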
===== Runtime analysis =====
The communication between units leads to some overhead. A simple analysis of the algorithm uses the BSP-model and incorporates the time T_\text{start} needed to initiate communication and the time T_\text{byte} needed to send a byte. The resulting runtime is then \Theta((T_\text{start} + n \cdot T_\text{byte}) \cdot \log(p)), as the m elements of a vector, which have size n in total, are sent in each iteration.
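To make the two cost terms concrete with assumed parameters: for T_\text{start} = 10\,\mu\text{s}, T_\text{byte} = 1\,\text{ns}, a vector of n = 10^6 bytes, and p = 1024 units, the runtime is roughly (10\,\mu\text{s} + 10^6 \cdot 1\,\text{ns}) \cdot \log_2(1024) = (10\,\mu\text{s} + 1\,\text{ms}) \cdot 10 \approx 10.1\,\text{ms}. For large vectors the n \cdot T_\text{byte} term dominates, which motivates the pipelined algorithm described next.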
=== Pipeline-algorithm ===
For distributed memory models, it can make sense to use pipelined communication. This is especially the case when T_\text{start} is small in comparison to T_\text{byte}. Usually, linear pipelines split data or tasks into smaller pieces and process them in stages. In contrast to the binomial tree algorithms, the pipelined algorithm uses the fact that the vectors are not inseparable, so the operator can be evaluated for single elements:
for k \gets 0 to p + m - 3 do
    for i \gets 0 to p - 1 do in parallel
        if i \leq k < i + m and i < p - 1 then
            send x_i^{k-i} to p_{i+1}
        if i - 1 \leq k < i - 1 + m and i > 0 then
            receive x_{i-1}^{k-i+1} from p_{i-1}
            x_i^{k-i+1} \gets x_i^{k-i+1} \oplus x_{i-1}^{k-i+1}

Here x_i^l denotes the l-th element of the vector held by p_i. It is important to note that the send and receive operations have to be executed concurrently for the algorithm to work. The result vector is stored at p_{p-1} at the end. The associated animation shows an execution of the algorithm on vectors of size four with five processing units. Two steps of the animation visualize one parallel execution step.
==== Runtime analysis ====
The number of steps in the parallel execution is p + m - 2: it takes p - 1 steps until the last processing unit receives its first element and an additional m - 1 steps until all its elements have arrived. Therefore, the runtime in the BSP-model is T(n, p, m) = \left(T_\text{start} + \frac{n}{m} \cdot T_\text{byte}\right)(p + m - 2), assuming that n is the total byte-size of a vector. Although m has a fixed value, it is possible to logically group elements of a vector together and thereby reduce m. For example, a problem instance with vectors of size four can be handled by splitting the vectors into their first two and last two elements, which are always transmitted and computed together. In this case, double the volume is sent in each step, but the number of steps is roughly halved; effectively, the parameter m is halved while the total byte-size n stays the same. The runtime of this approach therefore depends on the value of m, which can be optimized if T_\text{start} and T_\text{byte} are known. It is optimal for m = \sqrt{\frac{n \cdot (p-2) \cdot T_\text{byte}}{T_\text{start}}}, assuming that this results in a smaller m that divides the original one.
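This optimum can be recovered by treating m as a continuous variable and minimizing the runtime: expanding

T(n, p, m) = \left(T_\text{start} + \frac{n}{m} \cdot T_\text{byte}\right)(p + m - 2) = T_\text{start}(p - 2) + n \cdot T_\text{byte} + T_\text{start} \cdot m + \frac{n \cdot (p - 2) \cdot T_\text{byte}}{m}

and setting the derivative with respect to m, namely T_\text{start} - \frac{n \cdot (p - 2) \cdot T_\text{byte}}{m^2}, to zero yields exactly m = \sqrt{\frac{n \cdot (p-2) \cdot T_\text{byte}}{T_\text{start}}}.

== Applications ==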