Essence of implementations
Reactive programming language runtimes are typically represented by a graph that identifies the dependencies among the involved reactive values. In such a graph, nodes represent computations and edges model dependency relationships. The runtime employs this graph to keep track of which computations must be executed anew once an involved input changes value.
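As a concrete illustration, the dependency graph and the set of computations to re-run can be sketched as follows; the node names and the data structure are invented for illustration, not taken from any particular runtime:

```python
# Edges point from a value to the computations that depend on it.
# Hypothetical graph: sum = a + b, double = sum * 2.
dependents = {
    "a": ["sum"],
    "b": ["sum"],
    "sum": ["double"],
}

def affected(source):
    """All computations that must re-run when `source` changes,
    i.e., the transitive closure of the dependency edges."""
    outdated, stack = set(), [source]
    while stack:
        node = stack.pop()
        for dep in dependents.get(node, []):
            if dep not in outdated:
                outdated.add(dep)
                stack.append(dep)
    return outdated

print(affected("a"))  # -> {'sum', 'double'}
```

When `a` changes, the runtime consults the graph and flags both `sum` and `double` for re-execution, while an unrelated value would be left untouched.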
Change propagation algorithms
The most common approaches to data propagation are: •
Pull: The value consumer is proactive, in that it regularly queries the observed source for values and reacts whenever a relevant value is available. This practice of regularly checking for events or value changes is commonly referred to as
polling. •
Push: The value consumer receives a value from the source whenever the value becomes available. These values are self-contained, i.e., they carry all the necessary information, and no further information needs to be queried by the consumer. •
Push-pull: The value consumer receives a
change notification, which is a short description of the change, e.g. "some value changed" – this is the
push part. However, the notification does not contain all the necessary information (which means it does not contain the actual values), so the consumer needs to query the source for more information (the specific value) after it receives the notification – this is the
pull part. This method is commonly used when there is a large volume of data that consumers may only be partially interested in. To reduce bandwidth consumption and latency, only lightweight notifications are sent; consumers that require more information then request that specific data. The drawback of this approach is that the source may be overwhelmed by many requests for additional information after a notification is sent.
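The push-pull scheme above can be sketched as follows; the `Source` class and its methods are hypothetical, standing in for whatever notification mechanism a runtime provides:

```python
# Sketch of the push-pull model: the source pushes a lightweight
# notification ("something changed"), and interested consumers pull
# the actual value afterwards.

class Source:
    def __init__(self):
        self._value = None
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set(self, value):
        self._value = value
        # Push phase: notify only that a change occurred, not what it was.
        for notify in self._subscribers:
            notify()

    def get(self):
        # Pull phase: consumers query the current value on demand.
        return self._value

received = []
source = Source()
# This consumer pulls the value only after being notified of a change.
source.subscribe(lambda: received.append(source.get()))
source.set(1)
source.set(2)
print(received)  # -> [1, 2]
```

A pure push design would instead pass the value directly to each callback; a pure pull design would drop the notification and poll `get` on a timer.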
What to push?
At the implementation level, reacting to an event consists of propagating the information that characterizes the change across the graph. Computations affected by such a change become outdated and must be flagged for re-execution. The affected computations are usually characterized by the transitive closure of the change in its associated source, i.e., the full set of computations that transitively depend on the changed source.
Change propagation may then lead to an update in the value of the graph's
sinks. The information propagated along the graph can consist of a node's complete state, i.e., the computation result of the involved node; in such cases, the node's previous output is discarded. Another method involves delta propagation, i.e., incremental change propagation. In this case, the information sent along a graph's edges consists only of deltas describing how the previous node's output changed. This approach is especially important when nodes hold large amounts of state, which would otherwise be expensive to recompute from scratch.
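A minimal sketch of delta propagation, assuming a hypothetical node that maintains a sum over a large upstream collection; only the delta travels along the edge, and the node applies it incrementally:

```python
# Sketch of delta propagation: instead of re-sending a node's entire
# state, a delta describing the change travels along the edge, and the
# downstream node updates its state incrementally.

class SumNode:
    """Maintains the sum of a large upstream collection incrementally."""
    def __init__(self, initial_items):
        self.total = sum(initial_items)  # computed from scratch only once

    def on_delta(self, added=(), removed=()):
        # Apply only the delta; no full recomputation of upstream state.
        self.total += sum(added) - sum(removed)

node = SumNode(range(1_000_000))        # expensive initial computation
node.on_delta(added=[10], removed=[3])  # cheap incremental update
print(node.total == sum(range(1_000_000)) + 10 - 3)  # -> True
```

With complete-state propagation, the same update would require re-sending and re-summing the entire million-element collection.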
Delta propagation is essentially an optimization that has been extensively studied in the discipline of incremental computing, and it requires the runtime to address the view-update problem. This problem is best known from databases, which must keep views over changing data up to date. Another common optimization is the employment of change accumulation and batch propagation. Such a solution can be faster because it reduces communication among the involved nodes, and optimization strategies can then reason about the nature of the accumulated changes and alter the batch accordingly (e.g., two changes in a batch may cancel each other out and can simply be dropped). Yet another available approach is described as
invalidity notification propagation. This approach causes nodes with invalid input to pull updates, thus resulting in the update of their own outputs. There are two principal ways employed in the building of a
dependency graph: • The graph of dependencies is maintained implicitly within an event loop. Registration of explicit callbacks then results in the creation of implicit dependencies, so the inversion of control induced by the callbacks is left in place. However, making the callbacks functional (i.e., returning a value instead of unit) makes such callbacks compositional. • A graph of dependencies is program-specific and constructed by the programmer. This addresses the inversion of control of callbacks in two ways: either a graph is specified explicitly (typically using a domain-specific language (DSL), which may be embedded), or a graph is defined implicitly by the expressions of the program itself, written in a language well suited to expressing reactive computations.
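An explicitly specified graph in the style of an embedded DSL might look like the following sketch; the `Cell` and `map2` names are invented here, loosely echoing combinators found in reactive DSLs:

```python
# Sketch of a programmer-constructed dependency graph: creating a derived
# cell with map2 records the dependency edges explicitly, so no callback
# inversion of control is exposed to the user.

class Cell:
    def __init__(self, value):
        self.value = value
        self.dependents = []  # outgoing edges of the dependency graph

    def set(self, value):
        self.value = value
        for recompute in list(self.dependents):
            recompute()

def map2(a, b, f):
    """Derived cell whose value is f(a, b), kept up to date automatically."""
    out = Cell(f(a.value, b.value))
    def recompute():
        out.set(f(a.value, b.value))
    # Appending recompute records the edges a -> out and b -> out.
    a.dependents.append(recompute)
    b.dependents.append(recompute)
    return out

x = Cell(1)
y = Cell(2)
total = map2(x, y, lambda a, b: a + b)
x.set(10)
print(total.value)  # -> 12
```

The user only composes expressions; the update mechanism walks the recorded edges, so control stays with the library rather than with user-written callbacks.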
Implementation challenges in reactive programming
Glitches
When propagating changes, it is possible to pick propagation orders such that the value of an expression is not a natural consequence of the source program. We can illustrate this easily with an example. Suppose seconds is a reactive value that changes every second to represent the current time (in seconds). Consider these expressions:

t = seconds + 1
g = (t > seconds)

Because t should always be greater than seconds, g should always evaluate to a true value. Unfortunately, this can depend on the order of evaluation. When seconds changes, two expressions have to update: seconds + 1 and the conditional. If the first evaluates before the second, then this invariant will hold. If, however, the conditional updates first, using the old value of t and the new value of seconds, then the expression will evaluate to a false value. This is called a
glitch. Some reactive languages are glitch-free and prove this property. This is usually achieved by
topologically sorting expressions and updating values in topological order. This can, however, have performance implications, such as delaying the delivery of values (due to the order of propagation). In some cases, therefore, reactive languages permit glitches, and developers must be aware of the possibility that values may temporarily fail to correspond to the program source, and that some expressions may evaluate multiple times (for instance, t > seconds may evaluate twice: once when the new value of seconds arrives, and once more when t updates).
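The topological-ordering remedy can be sketched as follows, using the seconds/t/g example above; the update function is a hand-written stand-in for the update order a glitch-free runtime would derive automatically:

```python
# Sketch of glitch-free propagation by topological ordering: t depends on
# seconds, and g depends on both t and seconds, so recomputing t before g
# means g never observes a stale t.

state = {"seconds": 0}
state["t"] = state["seconds"] + 1
state["g"] = state["t"] > state["seconds"]

observed = []

def on_seconds_changed(new_seconds):
    state["seconds"] = new_seconds
    # Update in topological order: first t, then g.
    state["t"] = state["seconds"] + 1
    state["g"] = state["t"] > state["seconds"]
    observed.append(state["g"])

for tick in range(1, 4):
    on_seconds_changed(tick)
print(observed)  # -> [True, True, True]: the invariant never breaks
```

Reversing the two assignments inside the update function would recompute g against the old t, momentarily yielding a false value, which is precisely the glitch described above.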
Cyclic dependencies
Topological sorting of dependencies depends on the dependency graph being a
directed acyclic graph (DAG). In practice, a program may define a dependency graph that has cycles. Usually, reactive programming languages expect such cycles to be "broken" by placing some element along a "back edge" to permit reactive updating to terminate. Typically, languages provide an operator like delay that is used by the update mechanism for this purpose, since a delay implies that what follows must be evaluated in the "next time step" (allowing the current evaluation to terminate).
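A sketch of how a delay breaks such a cycle: a value that depends on itself reads its own previous-time-step value, so each update terminates within the current step. The counter below is illustrative, not any specific language's delay operator:

```python
# Sketch of breaking a dependency cycle with a delay: conceptually
# count = delay(count) + 1, so count depends on itself only through
# the value it had in the previous time step.

def run_counter(steps):
    delayed_count = 0  # the delayed value from the previous time step
    history = []
    for _ in range(steps):
        count = delayed_count + 1  # reads only the delayed (old) value
        history.append(count)
        delayed_count = count      # the delay advances at the step boundary
    return history

print(run_counter(3))  # -> [1, 2, 3]
```

Without the delay, updating count would immediately invalidate its own input and the propagation would never terminate within a step.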
Interaction with mutable state
Reactive languages typically assume that their expressions are
purely functional. This allows an update mechanism to choose different orders in which to perform updates, and leave the specific order unspecified (thereby enabling optimizations). When a reactive language is embedded in a programming language with state, however, it may be possible for programmers to perform mutable operations. How to make this interaction smooth remains an open problem. In some cases, it is possible to have principled partial solutions. Two such solutions include: • A language might offer a notion of a "mutable cell". A mutable cell is one that the reactive update system is aware of, so that changes made to the cell propagate to the rest of the reactive program. This enables the non-reactive part of the program to perform a traditional mutation while enabling reactive code to be aware of and respond to this update, thus maintaining the consistency of the relationship between values in the program. An example of a reactive language that provides such a cell is FrTime. • Properly encapsulated object-oriented libraries offer an encapsulated notion of state. In principle, it is therefore possible for such a library to interact smoothly with the reactive portion of a language. For instance, callbacks can be installed in the setters of the object-oriented library to notify the reactive update engine about state changes, and changes in the reactive component can be pushed to the object-oriented library through setters. FrTime employs such a strategy.
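The first solution can be sketched as follows, with a hypothetical MutableCell class (FrTime's actual cells differ in detail):

```python
# Sketch of a "mutable cell": an imperatively mutable value that the
# reactive update system is aware of, so mutations propagate into the
# reactive part of the program.

class MutableCell:
    def __init__(self, value):
        self._value = value
        self._watchers = []  # reactive computations depending on this cell

    def watch(self, callback):
        self._watchers.append(callback)

    def set(self, value):
        # An ordinary imperative mutation...
        self._value = value
        # ...that also notifies the reactive update system.
        for watcher in self._watchers:
            watcher(value)

    def get(self):
        return self._value

log = []
cell = MutableCell(0)
cell.watch(lambda v: log.append(v * 2))  # reactive computation: doubled value
cell.set(5)   # non-reactive code mutates the cell
print(log)    # -> [10]: the reactive part observed the mutation
```

A plain variable mutated from non-reactive code would leave the reactive graph unaware of the change; routing the mutation through the cell keeps both worlds consistent.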
Dynamic updating of the graph of dependencies
In some reactive languages, the graph of dependencies is
static, i.e., the graph is fixed throughout the program's execution. In other languages, the graph can be
dynamic, i.e., it can change as the program executes. As a simple example, consider the following (where seconds is a reactive value):

t = if ((seconds mod 2) == 0):
      seconds + 1
    else:
      seconds - 1
    end
t + 1

Every second, the value of t switches to a different reactive expression, on which t + 1 then depends. Therefore, the graph of dependencies updates every second. Permitting dynamic updating of dependencies provides significant expressive power (for instance, dynamic dependencies routinely occur in graphical user interface (GUI) programs). However, the reactive update engine must decide whether to reconstruct expressions each time, or to keep an expression's node constructed but inactive; in the latter case, it must ensure that inactive nodes do not participate in the computation when they are not supposed to be active.

== Concepts ==