SoCs must optimize power use, area on die, communication, and positioning for locality between modular units, among other factors. Optimization is necessarily a design goal of SoCs; if it were not necessary, the engineers would simply use a multi-chip module architecture without accounting for area use, power consumption or performance of the system to the same extent. Common optimization targets for SoC designs follow, with explanations of each. In general, optimizing any of these quantities may be a hard
combinatorial optimization problem, and is often NP-hard. Therefore, sophisticated optimization algorithms are often required, and it may be practical to use
approximation algorithms or
heuristics in some cases. Additionally, most SoC designs contain
multiple variables to optimize simultaneously, so
Pareto efficient solutions are sought after in SoC design. Oftentimes the goals of optimizing some of these quantities are directly at odds, further adding complexity to design optimization of SoCs and introducing
trade-offs in system design. For broader coverage of trade-offs and
requirements analysis, see
requirements engineering.
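Since the optimization targets below often conflict, designers look for candidates that are not dominated on every axis at once. As an illustrative sketch (the design points and their power/latency/area values are hypothetical, not taken from any real SoC), the Pareto-efficient subset of a set of candidate designs can be filtered like this:

```python
# Pareto filtering over hypothetical SoC design points.
# Each candidate is (power_mW, latency_ns, area_mm2); lower is better
# on every axis. All numbers are illustrative.

def dominates(a, b):
    """a dominates b if a is no worse on every objective and
    strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated (Pareto-efficient) candidates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

designs = [
    (120, 40, 5.0),   # low power, slow
    (300, 10, 6.5),   # fast, power-hungry
    (150, 35, 4.8),   # middle ground
    (310, 12, 7.0),   # dominated by (300, 10, 6.5) on all three axes
]
print(pareto_front(designs))  # the dominated point is removed
```

The non-dominated points are exactly the trade-off curve a designer must then choose from using other criteria.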
== Targets ==

=== Power consumption ===

SoCs are optimized to minimize the electrical power used to perform the SoC's functions. Most SoCs must use low power. SoC systems often require long
battery life (such as
smartphones), can potentially spend months or years without a power source while needing to maintain autonomous function, and often are limited in power use by a high number of
embedded SoCs being
networked together in an area. Additionally, energy costs can be high and conserving energy will reduce the
total cost of ownership of the SoC. Finally,
waste heat from high energy consumption can damage other circuit components if too much heat is dissipated, giving another pragmatic reason to conserve energy. The amount of energy used in a circuit is the
integral of
power consumed with respect to time, and the
average rate of power consumption is the product of current and voltage. Equivalently, by Ohm's law, power is current squared times resistance, or voltage squared divided by resistance:

P = IV = I^2R = \frac{V^2}{R}

SoCs are frequently embedded in
portable devices such as
smartphones,
GPS navigation devices, digital
watches (including
smartwatches) and
netbooks. Customers want long battery lives for
mobile computing devices, another reason that power consumption must be minimized in SoCs.
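The relationships above can be checked numerically. The sketch below integrates a sampled power trace to recover energy (the trace values and the voltage/resistance figures are purely illustrative):

```python
# Energy is the integral of power over time: E = ∫ P dt.
# Trapezoidal integration of a hypothetical sampled power trace.

def energy_joules(times_s, power_w):
    """Trapezoidal approximation of the energy integral."""
    return sum((t1 - t0) * (p0 + p1) / 2
               for (t0, p0), (t1, p1) in zip(zip(times_s, power_w),
                                             zip(times_s[1:], power_w[1:])))

# A constant 2 W drawn for 10 s should yield 20 J.
times = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
power = [2.0] * len(times)
print(energy_joules(times, power))  # 20.0

# Cross-check the equivalent power formulas with illustrative values.
V, R = 3.3, 11.0
I = V / R
assert abs(I * V - I**2 * R) < 1e-12
assert abs(I * V - V**2 / R) < 1e-12
```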
Multimedia applications are often executed on these devices, including video games,
video streaming,
image processing; all of which have grown in
computational complexity in recent years with user demands and expectations for higher-
quality multimedia. Computation is more demanding as expectations move towards
3D video at
high resolution with
multiple standards, so SoCs performing multimedia tasks must be computationally capable platforms while remaining low-power enough to run off a standard mobile battery. The power densities of high-speed integrated circuits, particularly microprocessors and including SoCs, have become highly uneven. Too much waste heat can damage circuits and erode
reliability of the circuit over time. High temperatures and thermal stress negatively impact reliability, causing stress migration, electromigration, wire-bonding failures, metastability, a decreased mean time between failures, and other performance degradation of the SoC over time. In particular, most SoCs occupy a small physical area or volume, so the effects of waste heat are compounded because there is little room for it to diffuse out of the system. Because of high
transistor counts on modern devices, oftentimes a layout of sufficient throughput and high
transistor density is physically realizable from
fabrication processes but would result in unacceptably high amounts of heat in the circuit's volume. These thermal effects force SoC and other chip designers to apply conservative
design margins, creating less performant devices to mitigate the risk of
catastrophic failure. Due to increased
transistor densities as length scales get smaller, each
process generation produces more heat output than the last. Compounding this problem, SoC architectures are usually heterogeneous, creating spatially inhomogeneous
heat fluxes, which cannot be effectively mitigated by uniform
passive cooling.
=== Throughput ===

SoCs are optimized to maximize computational and communications throughput.
=== Latency ===

SoCs are optimized to minimize
latency for some or all of their functions. This can be accomplished by
laying out elements with proper proximity and
locality to each other to minimize interconnection delays and maximize the speed at which data is communicated between modules,
functional units and memories. In general, optimizing to minimize latency is an
NP-complete problem equivalent to the
Boolean satisfiability problem. For
tasks running on processor cores, latency and throughput can be improved with
task scheduling. Some tasks run in application-specific hardware units, however, and even task scheduling may not be sufficient to optimize all software-based tasks to meet timing and throughput constraints.
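One widely used scheduling heuristic for meeting timing constraints is earliest-deadline-first (EDF). The sketch below is a minimal single-core illustration with hypothetical task names, execution times, and deadlines; it is not the specific algorithm any particular SoC uses:

```python
# Earliest-deadline-first (EDF): run ready tasks in order of deadline.
# Tasks are (name, execution_time, deadline); times in ms, values
# hypothetical.

def edf_schedule(tasks):
    """Run tasks in deadline order; return finish times and any misses."""
    t, finish, missed = 0, {}, []
    for name, exec_time, deadline in sorted(tasks, key=lambda x: x[2]):
        t += exec_time
        finish[name] = t
        if t > deadline:
            missed.append(name)
    return finish, missed

tasks = [("decode", 3, 10), ("sensor", 1, 2), ("render", 4, 12)]
finish, missed = edf_schedule(tasks)
print(finish, missed)  # all three deadlines are met in this example
```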
== Methodologies ==

Systems on chip are modeled with standard hardware
verification and validation techniques, but additional techniques are used to model and optimize SoC design alternatives to make the system optimal with respect to
multiple-criteria decision analysis on the above optimization targets.
=== Task scheduling ===

Task scheduling is an important activity in any computer system with multiple
processes or
threads sharing a single processor core. It is important to reduce latency and increase throughput for embedded software running on an SoC's processing elements. Not every important computing activity in a SoC is performed in software running on on-chip processors, but scheduling can drastically improve performance of software-based tasks and other tasks involving
shared resources. Software running on SoCs often schedules tasks according to
network scheduling and
randomized scheduling algorithms.
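One classic randomized scheduler is lottery scheduling, in which each task holds a number of "tickets" proportional to its priority and the next task is drawn at random. The ticket counts below are illustrative, not drawn from any real system:

```python
import random

# Lottery scheduling: each task holds tickets proportional to its
# priority; the scheduler picks the next task with probability
# proportional to its ticket count. Weights are hypothetical.

def pick_task(tickets, rng=random):
    """Choose the next task with probability proportional to its tickets."""
    names = list(tickets)
    return rng.choices(names, weights=[tickets[n] for n in names], k=1)[0]

tickets = {"audio": 50, "ui": 30, "logging": 20}
rng = random.Random(0)  # seeded for reproducibility
counts = {n: 0 for n in tickets}
for _ in range(10_000):
    counts[pick_task(tickets, rng)] += 1
print(counts)  # roughly a 50/30/20 split over many draws
```

Over many draws the share of CPU time each task receives converges to its share of the tickets, which is what makes the policy both probabilistically fair and starvation-free.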
=== Pipelining ===

Hardware and software tasks are often pipelined in
processor design. Pipelining is an important principle for
speedup in
computer architecture. Pipelines are frequently used in
CPUs (for example, the
classic RISC pipeline) and
GPUs (
graphics pipeline), but are also applied to application-specific tasks such as
digital signal processing and multimedia manipulations in the context of SoCs.
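The throughput benefit of pipelining can be quantified with the standard idealized model: once a k-stage pipeline is full, n items take k + n − 1 cycles instead of the n × k cycles needed when each item passes through all stages sequentially, so speedup approaches k for large n. A minimal sketch (stage counts and item counts are arbitrary):

```python
# Idealized k-stage pipeline timing model: assumes equal stage delays
# and no stalls, so n items take (k + n - 1) cycles once the pipeline
# is full, versus n * k cycles when processed sequentially.

def pipeline_cycles(n_items, n_stages):
    return n_stages + n_items - 1

def sequential_cycles(n_items, n_stages):
    return n_items * n_stages

n, k = 1000, 5
speedup = sequential_cycles(n, k) / pipeline_cycles(n, k)
print(round(speedup, 2))  # approaches k = 5 as n grows
```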
=== Probabilistic modeling ===

SoCs are often analyzed through
probabilistic models,
queueing networks, and
Markov chains. For instance,
Little's law allows SoC states and NoC buffers to be modeled as arrival processes and analyzed through
Poisson random variables and
Poisson processes.
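As a minimal sketch of Little's law in this setting, an on-chip buffer can be treated as an M/M/1 queue with arrival rate λ and service rate μ (both rates here are illustrative); the mean occupancy L then follows from the mean sojourn time W via L = λW:

```python
# Little's law (L = λW) applied to an M/M/1 queue model of, e.g., an
# on-chip buffer. λ and μ are illustrative rates, not measurements.

def mm1_stats(lam, mu):
    """Mean number in system L and mean time in system W for M/M/1."""
    assert lam < mu, "queue is unstable unless λ < μ"
    W = 1.0 / (mu - lam)   # mean sojourn time for M/M/1
    L = lam * W            # Little's law: L = λW
    return L, W

L, W = mm1_stats(lam=0.6, mu=1.0)
print(L, W)  # L = 1.5 items in the buffer, W = 2.5 time units
```

Such closed-form queueing results let designers size buffers and estimate latencies before committing to a layout.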
=== Markov chains ===

SoCs are often modeled with
Markov chains, both
discrete time and
continuous time variants. Markov chain modeling allows
asymptotic analysis of the SoC's
steady state distribution of power, heat, latency and other factors to allow design decisions to be optimized for the common case.