Following the work of Ramsey and von Neumann on expected utility theory, decision theorists have accounted for rational behavior using a probability distribution for the agent.
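This account can be sketched in a few lines of Python: a rational agent ranks actions by their expected utility, computed with the agent's own (subjective) probability distribution over states. The states, actions, probabilities, and utility values below are invented purely for illustration and do not come from any of the works discussed here.

```python
# Minimal sketch of subjective-expected-utility decision making.
# All states, actions, and numbers are hypothetical illustrations.

# The agent's personal (subjective) probability distribution over states.
personal_probability = {"rain": 0.3, "no_rain": 0.7}

# The agent's utility for each action in each state (hypothetical values).
utility = {
    "take_umbrella": {"rain": 5.0, "no_rain": 3.0},
    "leave_umbrella": {"rain": -10.0, "no_rain": 6.0},
}

def expected_utility(action):
    """Expected utility of an action under the agent's personal probabilities."""
    return sum(personal_probability[state] * payoff
               for state, payoff in utility[action].items())

# On this account, the rational agent chooses the action with
# the highest expected utility.
best_action = max(utility, key=expected_utility)
```

A different personal probability for "rain" can reverse the ranking of actions, which is why two agents with the same utilities may rationally choose differently.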
Johann Pfanzagl completed the Theory of Games and Economic Behavior by providing an axiomatization of subjective probability and utility, a task left uncompleted by von Neumann and Oskar Morgenstern: as a convenience, their original theory supposed that all the agents had the same probability distribution. Pfanzagl's axiomatization was endorsed by Morgenstern: "Von Neumann and I have anticipated ... [the question whether probabilities] might, perhaps more typically, be subjective and have stated specifically that in the latter case axioms could be found from which could be derived the desired numerical utility together with a number for the probabilities (cf. p. 19 of The Theory of Games and Economic Behavior). We did not carry this out; it was demonstrated by Pfanzagl ... with all the necessary rigor". Ramsey and
Savage noted that the individual agent's probability distribution could be studied objectively in experiments. Procedures for testing hypotheses about probabilities (using finite samples) are due to Ramsey (1931) and de Finetti (1931, 1937, 1964, 1970). Both Bruno de Finetti and Frank P. Ramsey acknowledged their debts to pragmatic philosophy, particularly (for Ramsey) to Charles S. Peirce. This work demonstrates that Bayesian-probability propositions can be falsified, and so meet an empirical criterion of Peirce. (This falsifiability criterion was popularized by Karl Popper.) Modern work on the experimental evaluation of personal probabilities uses the randomization, blinding, and Boolean-decision procedures of the Peirce–Jastrow experiment. Since individuals act according to different probability judgments, these agents' probabilities are "personal" (but amenable to objective study). Personal probabilities are problematic for science and for some applications where decision-makers lack the knowledge or time to specify an informed probability distribution (on which they are prepared to act). To meet the needs of science and of human limitations, Bayesian statisticians have developed "objective" methods for specifying prior probabilities. Indeed, some Bayesians have argued that the prior state of knowledge defines
the (unique) prior probability distribution for "regular" statistical problems; cf. well-posed problems. Finding the right method for constructing such "objective" priors (for appropriate classes of regular problems) has been the quest of statistical theorists from Laplace to John Maynard Keynes, Harold Jeffreys, and Edwin Thompson Jaynes. These theorists and their successors have suggested several methods for constructing "objective" priors (unfortunately, it is not always clear how to assess the relative "objectivity" of the priors these methods produce):
* Maximum entropy
* Transformation group analysis
* Reference analysis

Each of these methods contributes useful priors for "regular" one-parameter problems, and each prior can handle some challenging
statistical models (with "irregularity" or several parameters). Each of these methods has been useful in Bayesian practice. Indeed, methods for constructing "objective" (alternatively, "default" or "ignorance") priors have been developed by avowed subjective (or "personal") Bayesians such as James Berger (Duke University) and José-Miguel Bernardo (Universitat de València), simply because such priors are needed for Bayesian practice, particularly in science. The quest for "the universal method for constructing priors" continues to attract statistical theorists. Thus, the Bayesian statistician needs either to use informed priors (using relevant expertise or previous data) or to choose among the competing methods for constructing "objective" priors.

==See also==