=== Least upper bound ===
Liu and Layland (1973) proved that for a set of periodic tasks with unique periods, a feasible schedule that will always meet deadlines exists if the CPU utilization is below a specific bound (depending on the number of tasks). The schedulability test for RMS is:

:U = \sum_{i=1}^{n} U_i = \sum_{i=1}^{n} \frac{C_i}{T_i} \leq n({2}^{1/n} - 1)

where {U} is the utilization factor, {C_i} is the computation time for process {i}, {T_i} is the release period (with deadline one period later) for process {i}, and {n} is the number of processes to be scheduled. For example, the bound is 2(\sqrt{2} - 1) \approx 0.8284 for two processes. When the number of processes tends towards infinity, this expression will tend towards:

:\lim_{n \rightarrow \infty} n(\sqrt[n]{2} - 1) = \ln 2 \approx 0.693147\ldots

Therefore, a rough estimate when {n} \geq {10} is that RMS can meet all of the deadlines if total CPU utilization, {U}, is less than 70%. The other 30% of the CPU can be dedicated to lower-priority, non-real-time tasks. For smaller values of {n}, or in cases where {U} is close to this estimate, the calculated utilization bound should be used. In practice, for the {i^{th}} process, {C_i} should represent the worst-case (i.e. longest) computation time and {T_i} should represent the worst-case deadline (i.e. shortest period) in which all processing must occur.
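A minimal sketch of this sufficient test in Python (the task set shown is an illustrative assumption, not taken from the text):

```python
def rms_utilization_test(tasks):
    """Liu-Layland sufficient test; tasks is a list of (C_i, T_i) pairs.

    Returns (U, bound, schedulable). Passing the test guarantees RMS
    schedulability; failing it is inconclusive, since the test is only
    sufficient, not necessary.
    """
    n = len(tasks)
    u = sum(c / t for c, t in tasks)          # U = sum of C_i / T_i
    bound = n * (2 ** (1.0 / n) - 1)          # n(2^(1/n) - 1)
    return u, bound, u <= bound

# Hypothetical two-task set: (C_i, T_i) in the same time units.
u, bound, ok = rms_utilization_test([(1, 4), (2, 8)])
print(round(u, 4), round(bound, 4), ok)   # 0.5 0.8284 True
```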
=== Relationship to queueing theory ===
In queueing theory, {T_i} is called the interarrival time, and {C_i} is called the service time. These two parameters are often specified as rates:

:\lambda_i = { 1 \over T_i } is the arrival rate, and

:\mu_i = { 1 \over C_i } is the service rate.

The utilization for each task, denoted {\rho_i}, is then:

:\rho_i = { \lambda_i \over \mu_i } = { C_i \over T_i } = U_i

as above.
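The correspondence between the two notations can be checked numerically; this sketch uses assumed illustrative task parameters, not values from the text:

```python
# Illustrative tasks as (C_i, T_i) = (service time, interarrival time).
tasks = [(1.0, 4.0), (2.0, 8.0)]

for c, t in tasks:
    lam = 1.0 / t    # arrival rate  lambda_i = 1 / T_i
    mu = 1.0 / c     # service rate  mu_i = 1 / C_i
    rho = lam / mu   # utilization   rho_i = lambda_i / mu_i
    print(rho)       # 0.25 for both tasks, equal to U_i = C_i / T_i
```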
=== Upper bound for harmonic task sets ===
Liu and Layland noted that this bound may be relaxed to the maximum possible value of 1.0 if, for any pair of periods {T_m} and {T_i} where {T_m} {>} {T_i}, {T_m} is an integer multiple of {T_i}. That is, every task's period is a multiple not just of the shortest period, {T_1}, but of all shorter periods. This is known as a harmonic task set. An example of this would be: [{T_1},{T_2},{T_3},{T_4}] = [1, 3, 6, 12]. Liu and Layland acknowledged that it is not always feasible to have a harmonic task set, and that in practice other mitigation measures, such as buffering for tasks with soft deadlines or using a dynamic priority assignment approach, may be used instead to allow for a higher bound.
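The harmonicity condition can be sketched as a small check (the helper name is hypothetical, and periods are assumed to be integers, e.g. timer ticks):

```python
def is_harmonic(periods):
    """True if every period is an integer multiple of all shorter periods."""
    ps = sorted(periods)
    return all(longer % shorter == 0
               for i, shorter in enumerate(ps)
               for longer in ps[i + 1:])

print(is_harmonic([1, 3, 6, 12]))   # True  (the example set above)
print(is_harmonic([1, 3, 5, 15]))   # False (5 is not a multiple of 3)
```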
=== Generalization to harmonic chains ===
Kuo and Mok showed that for a task set made up of {K} harmonic task subsets (known as harmonic chains), the least upper bound test becomes:

:U = \sum_{i=1}^{n} \frac{C_i}{T_i} \leq K({2}^{1/K} - 1)

In the instance where no task's period is an integer multiple of any other task's period, the task set can be thought of as being composed of {n} harmonic task subsets of size 1, and therefore {K}{=}{n}, which makes this generalization equivalent to Liu and Layland's least upper bound. When {K}{=}{1}, the entire task set forms a single harmonic chain and the upper bound becomes 1.0, representing full utilization.
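Given the number of harmonic chains {K}, the generalized bound is a one-line computation; this sketch (the function name is a hypothetical label) shows the two extremes noted above:

```python
def kuo_mok_bound(k):
    """Least upper utilization bound for a task set of k harmonic chains."""
    return k * (2 ** (1.0 / k) - 1)

# K = n recovers the Liu-Layland bound; K = 1 permits full utilization.
print(round(kuo_mok_bound(2), 4))   # 0.8284 (same as Liu-Layland for n = 2)
print(kuo_mok_bound(1))             # 1.0
```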
=== Stochastic bounds ===
It has been shown that a randomly generated periodic task system will usually meet all deadlines when the utilization is 88% or less. However, this result depends on knowing the exact task statistics (periods, deadlines), which cannot be guaranteed for all task sets; in some cases the authors found that the utilization reached the least upper bound presented by Liu and Layland.
=== Hyperbolic bound ===
The hyperbolic bound is a tighter sufficient condition for schedulability than the one presented by Liu and Layland:

:\prod_{i=1}^n (U_i + 1) \leq 2,

where {U_i} is the CPU utilization for each task. It is the tightest upper bound that can be found using only the individual task utilization factors.

== Resource sharing ==