Science For the scientific investigation of efficient causality, the cause and effect are each best conceived of as temporally transient processes. Within the conceptual frame of the
scientific method, an investigator sets up several distinct and contrasting temporally transient material processes that have the structure of
experiments, and records candidate material responses, normally intending to determine causality in the physical world. For instance, one may want to know whether a high intake of
carrots causes humans to develop the
bubonic plague. The quantity of carrot intake is a process that is varied from occasion to occasion. The occurrence or non-occurrence of subsequent bubonic plague is recorded. To establish causality, the experiment must fulfill certain criteria, only one of which is mentioned here: instances of the hypothesized cause must be set up to occur at a time when the hypothesized effect is relatively unlikely in the absence of the hypothesized cause; such unlikelihood is to be established by empirical evidence. A mere observation of a
correlation is not nearly adequate to establish causality. In nearly all cases, establishment of causality relies on repetition of experiments and probabilistic reasoning. Hardly ever is causality established more firmly than as more or less probable. It is most convenient for establishment of causality if the contrasting material states of affairs are precisely matched, except for only one variable factor, perhaps measured by a real number.
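The role of repetition and probabilistic reasoning can be sketched in a small simulation. The scenario below is hypothetical: subjects are randomized into high- and low-carrot-intake groups, and the outcome is constructed to be independent of the treatment, illustrating why a small observed difference in one run is not yet evidence of causality.

```python
import random

random.seed(0)

def run_trial(n, base_rate=0.1, effect=0.0):
    """Simulate a randomized experiment: n treated subjects (high carrot
    intake) and n controls. The outcome occurs with probability
    base_rate + effect in the treatment group; effect=0.0 means the
    hypothesized cause has, by construction, no influence."""
    treated = [random.random() < base_rate + effect for _ in range(n)]
    control = [random.random() < base_rate for _ in range(n)]
    return sum(treated) / n, sum(control) / n

t_rate, c_rate = run_trial(10_000)
diff = t_rate - c_rate
print(f"treatment rate {t_rate:.3f}, control rate {c_rate:.3f}, difference {diff:+.3f}")
```

With no true effect the difference hovers near zero, but it is rarely exactly zero; deciding whether an observed difference exceeds what chance alone would produce is precisely where the repetition and probabilistic reasoning described above come in.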
Physics One has to be careful in the use of the word cause in physics. Properly speaking, the hypothesized cause and the hypothesized effect are each temporally transient processes. For example, force is a useful concept for the explanation of acceleration, but force is not by itself a cause. More is needed. For example, a temporally transient process might be characterized by a definite change of force at a definite time. Such a process can be regarded as a cause. Causality is not inherently implied in
equations of motion, but postulated as an additional
constraint that needs to be satisfied (i.e. a cause always precedes its effect). This constraint has mathematical implications: causal efficacy cannot 'propagate' faster than light. Otherwise, reference coordinate systems could be constructed (using the
Lorentz transform of
special relativity) in which an observer would see an effect precede its cause (i.e. the postulate of causality would be violated). Causal notions appear in the context of the flow of mass-energy. Any actual process has causal efficacy that can propagate no faster than light. In contrast, an abstraction has no causal efficacy. Its mathematical expression does not propagate in the ordinary sense of the word, though it may refer to virtual or nominal 'velocities' with magnitudes greater than that of light. For example, wave packets are mathematical objects that have
group velocity and
phase velocity. The energy of a wave packet travels at the group velocity (under normal circumstances); since energy has causal efficacy, the group velocity cannot be faster than the speed of light. The phase of a wave packet travels at the phase velocity; since phase is not causal, the phase velocity of a wave packet can be faster than light. Causal notions are important in general relativity to the extent that the existence of an arrow of time demands that the universe's semi-
Riemannian manifold be orientable, so that "future" and "past" are globally definable quantities.
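The Lorentz-transform argument above can be made concrete with a short numeric sketch (units with c = 1; the event coordinates and boost speed are illustrative). Two events connected by a hypothetical faster-than-light signal reverse their time order for some slower-than-light observer, whereas events connected by a subluminal signal keep their order in every frame.

```python
import math

def boosted_interval(dt, dx, v):
    """Time separation of two events seen from a frame moving at speed v
    (with c = 1), via the Lorentz transformation dt' = gamma*(dt - v*dx)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx)

# A hypothetical superluminal signal: dx/dt = 2 (twice light speed).
dt_super = boosted_interval(dt=1.0, dx=2.0, v=0.6)
# A subluminal signal: dx/dt = 0.5 (half light speed).
dt_sub = boosted_interval(dt=1.0, dx=0.5, v=0.6)

print(f"superluminal signal: dt' = {dt_super:+.3f}")  # negative: effect precedes cause
print(f"subluminal signal:   dt' = {dt_sub:+.3f}")    # positive: time order preserved
```

For the superluminal pair, an observer moving at 0.6c sees the "effect" before its "cause", which is why causal efficacy propagating no faster than light is imposed as a postulate.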
Engineering A
causal system is a
system with output and internal states that depend only on the current and previous input values. A system that has
some dependence on input values from the future (in addition to possible past or current input values) is termed an
acausal system, and a system that depends
solely on future input values is an
anticausal system. Acausal filters, for example, can exist only as postprocessing filters, because such filters can extract future values from a memory buffer or a file. Causality must nevertheless be handled with care in physics and engineering. Cellier, Elmqvist, and Otter describe the idea that causality forms the basis of physics as a misconception, because physics is essentially acausal. In their article they cite a simple example: "The relationship between voltage across and current through an electrical resistor can be described by Ohm's law: V = IR, yet, whether it is the current flowing through the resistor that causes a voltage drop, or whether it is the difference between the electrical potentials on the two wires that causes current to flow is, from a physical perspective, a meaningless question". Indeed, if cause and effect are read into the law, an electrical resistor requires two descriptions: as a voltage-drop causer and as a current-flow causer. No physical experiment can distinguish between action and reaction.
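The causal/acausal distinction can be illustrated with a minimal smoothing filter (a hypothetical three-tap moving average, chosen here only for illustration): the causal version averages the current and previous samples, while the acausal (zero-phase) version also reads the next sample, so it can only run as a postprocessing step on recorded data.

```python
def causal_ma(x):
    """Causal 3-tap moving average: output at n uses x[n-2], x[n-1], x[n]."""
    out = []
    for n in range(len(x)):
        window = x[max(0, n - 2): n + 1]
        out.append(sum(window) / len(window))
    return out

def acausal_ma(x):
    """Acausal 3-tap average: output at n uses x[n-1], x[n], x[n+1].
    It needs the future sample x[n+1], so it only works on stored data."""
    out = []
    for n in range(len(x)):
        window = x[max(0, n - 1): n + 2]
        out.append(sum(window) / len(window))
    return out

step = [0, 0, 0, 1, 1, 1]
print(causal_ma(step))   # responds only once the step has arrived
print(acausal_ma(step))  # rises one sample early: it 'sees' the future
```

On the step input, the causal filter's output is still zero at sample 2, while the acausal filter already rises there, one sample before the step occurs, which is exactly the behavior a real-time system cannot have.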
Biology, medicine and epidemiology Austin Bradford Hill built upon the work of
Hume and
Popper and suggested in his paper "The Environment and Disease: Association or Causation?" that aspects of an association such as strength, consistency, specificity, and temporality be considered in attempting to distinguish causal from noncausal associations in the epidemiological situation. (See
Bradford Hill criteria.) He did not note, however, that temporality is the only necessary criterion among those aspects. Directed acyclic graphs (DAGs) are increasingly used in epidemiology to help clarify causal thinking. Causality also plays an essential role in Network Physiology, which studies the mechanisms through which physiological and organ systems exchange, process, and integrate information within an adaptive dynamic network to generate states and functions at the organism level.
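A directed acyclic graph encodes assumed causal directions as arrows that admit no cycles. The sketch below (the variable names are invented for illustration) uses Python's standard-library graphlib to verify acyclicity and to produce an ordering in which every hypothesized cause precedes its effects.

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical epidemiological DAG: each node maps to its direct causes.
dag = {
    "smoking": set(),
    "tar_deposits": {"smoking"},
    "lung_cancer": {"smoking", "tar_deposits"},
    "yellow_fingers": {"smoking"},
}

try:
    # static_order() raises CycleError if the arrows contain a cycle,
    # i.e. if the proposed graph is not actually a DAG.
    order = list(TopologicalSorter(dag).static_order())
    print("causal ordering:", order)  # causes appear before their effects
except CycleError:
    print("not a DAG: the proposed arrows contain a cycle")
```

Such a graph makes the analyst's causal assumptions explicit: here smoking is a common cause of both lung cancer and yellow fingers, so a correlation between the latter two need not be causal.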
Psychology Psychologists take an empirical approach to causality, investigating how people and non-human animals detect or infer causation from sensory information, prior experience and
innate knowledge.
Attribution: Attribution theory is the
theory concerning how people explain individual occurrences of causation.
Attribution can be external (assigning causality to an outside agent or force—claiming that some outside thing motivated the event) or internal (assigning causality to factors within the person—taking personal
responsibility or
accountability for one's actions and claiming that the person was directly responsible for the event). Taking causation one step further, the type of attribution a person provides influences their future behavior. The intention behind the cause or the effect can be covered by the subject of
action. See also
accident;
blame;
intent; and responsibility. Causal powers: Whereas
David Hume argued that causes are inferred from non-causal observations,
Immanuel Kant claimed that people have innate assumptions about causes. Within psychology,
Patricia Cheng attempted to reconcile the Humean and Kantian views. Perception of launching events: Some researchers such as Anjan Chatterjee at the University of Pennsylvania and Jonathan Fugelsang at the University of Waterloo are using neuroscience techniques to investigate the neural and psychological underpinnings of causal launching events in which one object causes another object to move. Both temporal and spatial factors can be manipulated. See
Causal Reasoning (Psychology) for more information.
Statistics and economics Statistics and
economics usually employ pre-existing data or experimental data to infer causality by regression methods. The body of statistical techniques involves substantial use of
regression analysis. Typically a linear relationship such as y_i = a_0 + a_1x_{1,i} + a_2x_{2,i} + \dots + a_kx_{k,i} + e_i is postulated, in which y_i is the
ith observation of the dependent variable (hypothesized to be the caused variable), x_{j,i} for
j=1,...,
k is the
ith observation on the
jth independent variable (hypothesized to be a causative variable), and e_i is the error term for the
ith observation (containing the combined effects of all other causative variables, which must be uncorrelated with the included independent variables). If there is reason to believe that none of the x_js is caused by
y, then estimates of the coefficients a_j are obtained. If the null hypothesis that a_j=0 is rejected, then the alternative hypothesis that a_{j} \ne 0 and equivalently that x_j causes
y cannot be rejected. On the other hand, if the null hypothesis that a_j=0 cannot be rejected, then equivalently the hypothesis of no causal effect of x_j on
y cannot be rejected. Here the notion of causality is one of contributory causality as discussed
above: If the true value a_j \ne 0, then a change in x_j will result in a change in
y unless some other causative variable(s), either included in the regression or implicit in the error term, change in such a way as to exactly offset its effect; thus a change in x_j is
not sufficient to change
y. Likewise, a change in x_j is
not necessary to change
y, because a change in
y could be caused by something implicit in the error term (or by some other causative explanatory variable included in the model). The above way of testing for causality requires belief that there is no reverse causation, in which
y would cause x_j. This belief can be established in one of several ways. First, the variable x_j may be a non-economic variable: for example, if rainfall amount x_j is hypothesized to affect the futures price
y of some agricultural commodity, it is impossible that in fact the futures price affects rainfall amount (provided that
cloud seeding is never attempted). Second, the
instrumental variables technique may be employed to remove any reverse causation by introducing a role for other variables (instruments) that are known to be unaffected by the dependent variable. Third, the principle that effects cannot precede causes can be invoked, by including on the right side of the regression only variables that precede in time the dependent variable; this principle is invoked, for example, in testing for
Granger causality and in its multivariate analog,
vector autoregression, both of which control for lagged values of the dependent variable while testing for causal effects of lagged independent variables. Regression analysis controls for other relevant variables by including them as regressors (explanatory variables). This helps to avoid false inferences of causality due to the presence of a third, underlying, variable that influences both the potentially causative variable and the potentially caused variable: its effect on the potentially caused variable is captured by directly including it in the regression, so that effect will not be picked up as an indirect effect through the potentially causative variable of interest. Given the above procedures, coincidental (as opposed to causal) correlation can be probabilistically rejected if data samples are large and if regression results pass
cross-validation tests showing that the correlations hold even for data that were not used in the regression. Asserting with certitude that a common cause is absent and that the regression represents the true causal structure is
in principle impossible. The problem of omitted variable bias, however, has to be balanced against the risk of inserting
causal colliders, in which the addition of a new variable x_{j+1} induces a correlation between x_j and y via
Berkson's paradox. In contrast to such regression-based inference from observational data, experimental designs establish causality by systematically manipulating independent variables under controlled conditions; they offer stronger internal validity because causal effects are demonstrated directly rather than inferred from observed patterns.
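Both points, coefficient estimation and collider bias, can be illustrated with a small pure-Python simulation (the data are synthetic and the variable names illustrative). First, least squares recovers the causal coefficient when the error term is uncorrelated with the regressor; second, conditioning on a collider z that is caused by both of two independent variables manufactures a correlation between them.

```python
import random

random.seed(1)
n = 10_000

def corr(u, v):
    """Sample Pearson correlation of two equal-length sequences."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

# 1. OLS with one regressor: y_i = a_0 + a_1*x_i + e_i, true a_1 = 3,
#    with e_i independent of x_i as the text requires.
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 + 3.0 * xi + random.gauss(0, 1) for xi in x]
mx, my = sum(x) / n, sum(y) / n
a1_hat = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
print(f"estimated a_1 = {a1_hat:.2f}  (true value 3)")

# 2. Collider bias: u and w are independent, but both cause z.
u = [random.gauss(0, 1) for _ in range(n)]
w = [random.gauss(0, 1) for _ in range(n)]
z = [ui + wi + random.gauss(0, 0.3) for ui, wi in zip(u, w)]
print(f"corr(u, w) overall:           {corr(u, w):+.2f}")
# Conditioning on the collider (selecting a narrow band of z) induces
# a spurious negative correlation -- Berkson's paradox.
sel = [(ui, wi) for ui, wi, zi in zip(u, w, z) if abs(zi) < 0.5]
u_sel, w_sel = zip(*sel)
print(f"corr(u, w) given z near zero: {corr(list(u_sel), list(w_sel)):+.2f}")
```

The unconditional correlation between u and w is near zero, but within the subsample selected on z it becomes strongly negative, showing why adding a collider as a regressor can create, rather than remove, a spurious association.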
Management For quality control in manufacturing in the 1960s, Kaoru Ishikawa developed a cause and effect diagram, known as an Ishikawa diagram or fishbone diagram. The diagram shows the factors that cause the effect, with smaller arrows connecting the sub-causes to major causes, and categorizes causes into six main categories, which are then sub-divided. Ishikawa's method identifies "causes" in brainstorming sessions conducted among various groups involved in the manufacturing process; these groups can then be labeled as categories in the diagrams. The use of these diagrams has now spread beyond quality control, and they are used in other areas of management and in design and engineering. Ishikawa diagrams have been criticized for failing to distinguish between necessary conditions and sufficient conditions; Ishikawa himself appears not to have drawn this distinction.
Humanities History In the discussion of history, events are sometimes treated as if they were agents that can then bring about other historical events. Thus, the combination of poor harvests, the hardships of the peasants, high taxes, lack of representation of the people, and kingly ineptitude are among the
causes of the
French Revolution. This is a somewhat
Platonic and
Hegelian view that
reifies causes as
ontological entities. In Aristotelian terminology, this use approximates to the case of the
efficient cause. Some philosophers of history such as
Arthur Danto have claimed that "explanations in history and elsewhere" describe "not simply an event—something that happens—but a change". Like many practicing historians, they treat causes as intersecting actions and sets of actions which bring about "larger changes", in Danto's words: to decide "what are the elements which persist through a change" is "rather simple" when treating an individual's "shift in attitude", but "it is considerably more complex and metaphysically challenging when we are interested in such a change as, say, the break-up of feudalism or the emergence of nationalism". Much of the historical debate about causes has focused on the relationship between communicative and other actions, between singular and repeated ones, and between actions, structures of action or group and institutional contexts and wider sets of conditions.
John Gaddis has distinguished between exceptional and general causes (following
Marc Bloch) and between "routine" and "distinctive links" in causal relationships: "in accounting for what happened at Hiroshima on August 6, 1945, we attach greater importance to the fact that President Truman ordered the dropping of an atomic bomb than to the decision of the Army Air Force to carry out his orders." He has also pointed to the difference between immediate, intermediate and distant causes. For his part, Christopher Lloyd puts forward four "general concepts of causation" used in history: the "metaphysical idealist concept, which asserts that the phenomena of the universe are products of or emanations from an omnipotent being or such final cause"; "the empiricist (or
Humean) regularity concept, which is based on the idea of causation being a matter of constant conjunctions of events"; "the functional/teleological/consequential concept", which is "goal-directed, so that goals are causes"; and the "realist, structurist and dispositional approach, which sees relational structures and internal dispositions as the causes of phenomena".
Law According to
law and
jurisprudence,
legal cause must be demonstrated to hold a
defendant liable for a
crime or a
tort (i.e. a civil wrong such as negligence or trespass). It must be proven that causality, or a "sufficient causal link" relates the defendant's actions to the criminal event or damage in question. Causation is also an essential legal element that must be proven to qualify for remedy measures under
international trade law.