== Static and dynamic dataflow machines ==
Designs that use conventional memory addresses as data dependency tags are called static dataflow machines. These machines could not execute multiple instances of the same routine simultaneously, because their simple tags could not differentiate between the instances. Designs that use content-addressable memory (CAM) are called dynamic dataflow machines; they use tags in memory to facilitate parallelism.
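The distinction can be sketched in a few lines of Python. This is purely illustrative (the `Tag` class and its fields are assumptions, not taken from any real machine): a static tag is just a destination address, while a dynamic tag additionally carries an activation identifier so the CAM can keep tokens from concurrent invocations apart.

```python
from dataclasses import dataclass

# Static dataflow: the tag is only a memory address, so tokens from
# two concurrent activations of the same routine would collide.
static_tag = 0x1F4  # address of the destination instruction

# Dynamic dataflow: the tag also names the activation (instance),
# so each invocation's tokens remain distinct in the CAM.
@dataclass(frozen=True)
class Tag:
    instruction_addr: int  # which instruction the datum feeds
    activation_id: int     # which invocation of the routine produced it

call_1 = Tag(instruction_addr=0x1F4, activation_id=1)
call_2 = Tag(instruction_addr=0x1F4, activation_id=2)
assert call_1 != call_2  # the CAM can tell the two instances apart
```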
== Compiler ==
Normally, in a control flow architecture, compilers analyze program source code for data dependencies between instructions in order to better organize the instruction sequences in the binary output files. The instructions are organized sequentially, but the dependency information itself is not recorded in the binaries. Binaries compiled for a dataflow machine contain this dependency information. A dataflow compiler records these dependencies by creating unique tags for each dependency instead of using variable names. Giving each dependency a unique tag allows the non-dependent code segments in the binary to be executed out of order and in parallel. The compiler also detects loops, break statements, and other control-flow constructs and maps them into dataflow form.
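A minimal sketch of the tagging idea, assuming a toy intermediate representation (the `fresh_tag` helper and the dictionary fields are hypothetical, not a real compiler's output): each value gets a unique tag, and two instructions that share no producer/consumer tags are unordered with respect to each other.

```python
import itertools

# Hypothetical tag generator: one unique tag per dependency,
# replacing the variable names of the source program.
_tags = itertools.count()

def fresh_tag() -> int:
    return next(_tags)

# Source pseudocode:  a = x + y;  b = x * x;  c = a + b
tag_x, tag_y = fresh_tag(), fresh_tag()
tag_a, tag_b, tag_c = fresh_tag(), fresh_tag(), fresh_tag()

program = [
    {"op": "add", "in": (tag_x, tag_y), "out": tag_a},
    {"op": "mul", "in": (tag_x, tag_x), "out": tag_b},
    {"op": "add", "in": (tag_a, tag_b), "out": tag_c},
]

# The first two instructions share no producer/consumer tags, so a
# dataflow machine may fire them in either order, or in parallel.
independent = (program[1]["out"] not in program[0]["in"]
               and program[0]["out"] not in program[1]["in"])
assert independent
```

The third instruction, by contrast, consumes the tags produced by the first two, so it cannot fire until both have completed.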
== Programs ==
Programs are loaded into the CAM of a dynamic dataflow computer. When all of the tagged operands of an instruction become available (that is, output from previous instructions and/or user input), the instruction is marked as ready for execution by an execution unit. This is known as activating or firing the instruction. Once an instruction is completed by an execution unit, its output data is sent (with its tag) to the CAM. Any instructions that are dependent upon this particular datum (identified by its tag value) are then marked as ready for execution. In this way, subsequent instructions are executed in proper order, avoiding race conditions. This order may differ from the sequential order envisioned by the human programmer, the programmed order.
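The firing rule can be simulated in a few lines. This is a sketch under simplifying assumptions (a sequential loop standing in for parallel execution units, and a plain dictionary standing in for the CAM's matching store), not a real machine model:

```python
def run(program, initial_tokens):
    """program: {output_tag: (fn, input_tags)}; tokens: {tag: value}."""
    tokens = dict(initial_tokens)  # stands in for the CAM's matching store
    fired = True
    while fired:
        fired = False
        for out_tag, (fn, in_tags) in program.items():
            # Firing rule: an instruction is ready only when every one
            # of its tagged operands is present and it has not yet run.
            if out_tag not in tokens and all(t in tokens for t in in_tags):
                tokens[out_tag] = fn(*(tokens[t] for t in in_tags))
                fired = True  # its output may enable further instructions
    return tokens

# c = (x + y) * x, expressed as tagged instructions
prog = {
    "a": (lambda x, y: x + y, ("x", "y")),
    "c": (lambda a, x: a * x, ("a", "x")),
}
out = run(prog, {"x": 3, "y": 4})
# out["c"] == 21: "c" could not fire until the token tagged "a" arrived
```

Note that the execution order falls out of operand availability alone; no program counter imposes the programmed order.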
== Instructions ==
An instruction, along with its required data operands, is transmitted to an execution unit as a packet, also called an instruction token. Similarly, output data is transmitted back to the CAM as a data token. The packetization of instructions and results allows for parallel execution of ready instructions on a large scale. Dataflow networks deliver the instruction tokens to the execution units and return the data tokens to the CAM. In contrast to the conventional von Neumann architecture, data tokens are not permanently stored in memory; rather, they are transient messages that only exist while in transit to the instruction storage.
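The two packet kinds described above might be modeled as follows. All field and type names here are illustrative assumptions, not the format of any actual dataflow machine:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class InstructionToken:
    opcode: str
    operands: Tuple[float, ...]  # all operands travel with the instruction
    result_tag: int              # destination tag for the result

@dataclass
class DataToken:
    tag: int      # matched in the CAM against waiting instructions
    value: float  # transient: exists only while in transit

# An execution unit consumes one instruction token and
# emits one data token back toward the CAM.
def execute(pkt: InstructionToken) -> DataToken:
    if pkt.opcode == "add":
        return DataToken(tag=pkt.result_tag, value=sum(pkt.operands))
    raise ValueError(f"unknown opcode {pkt.opcode!r}")
```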
== Historically ==
In contrast to the above, analog differential analyzers were pure hardware dataflow architectures: programming and computation were not performed by any set of instructions, and such machines usually made no memory-based decisions. Programming consisted solely of configuring the physical interconnection of specialized computing elements, which effectively creates a passive dataflow architecture. In October 2024, NextSilicon officially announced the launch of its Maverick-2 accelerator, a dataflow chip aimed at HPC workloads. Since its launch, NextSilicon has partnered with Sandia National Laboratories to install the chip in the Spectra supercomputer. In July 2025, the startup Efficient Computer was reported to have built a dataflow chip called the Electron E1.

== See also ==