The execution time of a program running on a parallel system can be split into two parts:

* a part that does not benefit from the increasing number of processors (serial part);
* a part that benefits from the increasing number of processors (parallel part).
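This split can be turned into a quick numerical sketch. The helper below is illustrative (the function name and inputs are not from the source); it uses the quantities <math>s</math>, <math>p</math>, and <math>N</math> exactly as they are defined in the derivation later in this section:

```python
def speedup(s: float, n: int) -> float:
    """Speedup of the parallel system over a one-processor run.

    s is the serial fraction of the parallel execution time (T = 1),
    p = 1 - s is the parallel fraction, and n is the processor count.
    On one processor the parallel part takes n * p, so T' = s + n * p,
    and the speedup is S = T' / T = s + n * p.
    """
    p = 1.0 - s
    return s + n * p

# With a 10% serial part and 8 processors:
print(speedup(0.10, 8))  # 0.1 + 8 * 0.9 = 7.3
```

Note that the serial part caps the benefit: even with many processors, the run can never be more than <math>1/s</math> times faster than the parallel system's own time of 1.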
As an example, consider a computer program that processes files from disk. One part of the program scans the directory of the disk and builds a list of the files internally in memory; another part then passes each file to a separate thread for processing. The part that scans the directory and creates the file list cannot be sped up on a parallel computer, but the part that processes the files can.

Without loss of generality, let the total execution time on the parallel system be <math>T = 1</math>. Denote the serial time by <math>s</math> and the parallel time by <math>p</math>, where <math>s + p = 1</math>, and let <math>N</math> be the number of processors. Hypothetically, when the program is run on a serial system (only one processor), the serial part still takes <math>s</math>, while the parallel part now takes <math>Np</math>. The execution time on the serial system is therefore

<math display="block">T' = s + Np.</math>

Using <math>T'</math> as the baseline, the speedup for the parallel system is

<math display="block">S = \frac{T'}{T} = \frac{s + Np}{s + p} = \frac{s + Np}{1} = s + Np.</math>

By substituting <math>p = 1 - s</math> or <math>s = 1 - p</math>, several forms given in the previous section can be derived.

==Uses==