== Theory of operation ==

Prediction markets are based on the theory that individuals with financial stakes in an outcome can collectively predict it more accurately than any single expert. Eric Zitzewitz, an economics professor at Dartmouth, explains: "Financial markets are generally pretty efficient, and the evidence suggests that the same is true of prediction markets. There’s no virtue-signaling in an anonymous market when you're betting[. ...W]hat you're seeing with the market is some average of all of those different opinions, weighted by their willingness to put their money where their mouth is."

While prediction markets tend to outperform polling in predicting election outcomes, one study found that aggregating the beliefs of participants who are asked to quantify the strength of their belief can beat prediction markets. When participants have some intrinsic interest in trying to predict results, even markets with modest incentives, or none at all, have been shown to be effective.

When the optimists in a market outweigh the pessimists, they bet more in aggregate and raise the market price, so the movement of the price reflects more information than a simple average or vote count. Research has suggested that prediction markets' greater accuracy lies largely in superior aggregation methods rather than in the superior quality or informativeness of individual responses.

In a prediction market, each participant normally holds information that differs from the others' and makes decisions independently, and the market itself is decentralized compared with expert judgment. For these reasons, a prediction market is generally a valuable way to capture collective wisdom and make accurate predictions. Prediction markets aggregate the information and beliefs of the involved investors and give a good estimate of those investors' mean belief. Because investors have a financial incentive to price in information, prediction markets incorporate new information quickly and are difficult to manipulate.
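The quoted intuition, that a market price behaves roughly like an average of opinions weighted by willingness to bet, can be sketched with a toy calculation (the beliefs and stakes below are invented for illustration; real markets arrive at prices through trading, not through this formula):

```python
# Toy sketch of belief aggregation (illustrative numbers only):
# a poll averages opinions equally, while a market price behaves
# more like an average weighted by the money each trader risks.
beliefs = [0.9, 0.8, 0.3, 0.4, 0.6]   # each trader's P(event happens)
stakes = [40, 20, 10, 10, 20]         # dollars each trader is willing to risk

poll_average = sum(beliefs) / len(beliefs)
market_price = sum(b * s for b, s in zip(beliefs, stakes)) / sum(stakes)

print(f"unweighted poll average: {poll_average:.2f}")   # 0.60
print(f"stake-weighted price:    {market_price:.2f}")   # 0.71
```

Here the heavily staked optimist pulls the weighted "price" (0.71) above the unweighted average (0.60); in a real market this weighting emerges implicitly, because more confident traders are willing to buy more contracts at higher prices.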
== Empirical studies ==

Numerous researchers have studied the accuracy of prediction markets:

• Steven Gjerstad (Purdue), in his paper "Risk Aversion, Beliefs, and Prediction Market Equilibrium", has shown that prediction market prices are very close to the mean belief of market participants if the agents are risk averse and the distribution of beliefs is spread out (as with a normal distribution, for example).
• Justin Wolfers (Wharton) and Eric Zitzewitz (Dartmouth) reached similar conclusions in their paper "Interpreting Prediction Market Prices as Probabilities".
• Lionel Page and Robert Clemen have examined the quality of predictions for events taking place some time in the future and provide evidence of a favourite-longshot bias. They found that predictions are better when the predicted event is close in time; for events further out (e.g., elections more than a year away), prices are biased towards 50%. This bias stems from traders' "time preferences", their reluctance to lock up funds in assets for long periods.

Owing to this accuracy, prediction markets have been applied in several industries to inform important decisions. Some examples include:

• Prediction markets can be used to improve forecasting, and their information-aggregating nature gives them potential as a testbed for laboratory information theories. Researchers have applied prediction markets to assess, ahead of time, otherwise unobservable information about Google's IPO valuation.
• In healthcare, prediction markets can help forecast the spread of infectious diseases. In a pilot study, a statewide influenza outbreak in Iowa was predicted by such markets 2–4 weeks in advance, using clinical data volunteered by participating health care workers.
• Some corporations have run internal prediction markets for decisions and forecasts. Employees use virtual currency to bet on what they think will happen to the company, and the most accurate forecaster wins a monetary prize. For example, Best Buy once experimented with a prediction market to forecast whether a Shanghai store would open on time. The falling virtual price correctly forecast that the opening would be late, saving the company from further losses.

Although prediction markets are often fairly accurate and successful, there are many occasions when they fail to make the right prediction, or any prediction at all. Based mostly on an idea proposed in 1945 by Austrian economist
Friedrich Hayek, prediction markets are "mechanisms for collecting vast amounts of information held by individuals and synthesizing it into a useful data point".

One way prediction markets gather information is through what James Surowiecki calls "The Wisdom of Crowds": a group of people with a sufficiently broad range of opinions can collectively be cleverer than any individual. However, this information-gathering mechanism can also cause a prediction market to fail. Often the people in these crowds are skewed in their supposedly independent judgments by peer pressure, panic, bias, and other breakdowns arising from a lack of diversity of opinion. One of the main limits of the wisdom of crowds is that some prediction questions require specialized knowledge that most people do not have; lacking it, the crowd's answers can be very wrong.

The second market mechanism is the marginal-trader hypothesis. Hanson, Oprea, and Porter (George Mason University) show how attempts at market manipulation can in fact increase the accuracy of the market, because they give other traders that much more profit incentive to bet against the manipulator.

Using real-money prediction market contracts as a form of insurance can also affect the price of the contract. For example, if the election of a leader is perceived as negatively impacting the economy, traders may buy shares of that leader being elected as a hedge.
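The hedging logic can be made concrete with a worked example (the $1,000 exposure and $0.40 contract price below are hypothetical numbers chosen for illustration):

```python
# Hypothetical hedge (invented numbers): a trader expects to lose $1,000
# if a certain candidate wins, and "candidate wins" contracts cost $0.40
# each, paying $1 if the candidate wins and $0 otherwise.
exposure = 1000.0          # loss suffered if the candidate wins
contract_price = 0.40      # price of one pays-$1-if-wins contract
shares = 1000              # buy enough contracts to cover the exposure
cost = shares * contract_price          # $400 premium, paid either way

net_if_wins = -exposure + shares * 1.0 - cost   # payout offsets the loss
net_if_loses = -cost                            # contracts expire worthless

print(net_if_wins, net_if_loses)   # -400.0 -400.0: same result either way
```

The hedge converts an uncertain $1,000 loss into a certain $400 cost. Because such buyers are paying for insurance rather than expressing a belief, their demand can push the contract's price above the true probability of the event, distorting the market's forecast.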
== Elections and referendums ==

These prediction market inaccuracies were especially prevalent during the 2016 Brexit vote in the United Kingdom. Prediction markets leaned heavily in favor of the UK staying in the EU and failed to predict the outcome of the vote. According to Michael Traugott, a former president of the American Association for Public Opinion Research, the prediction markets failed because of the influence of manipulation and bias, overshadowed by mass and public opinion. Because users of the markets shared a similar mindset, they created a paradoxical environment in which they reinforced their own initial beliefs (in this case, that the UK would vote to remain in the EU).

Similarly, during the 2016 US presidential election, prediction markets failed to predict Donald Trump's win. As in the Brexit case, information traders were caught in a loop of self-reinforcement once initial odds were set, leading traders to "use the current prediction odds as an anchor" and seemingly discount incoming information almost completely. Traders essentially treated the market odds as correct probabilities and did not update enough on outside information, leaving the prediction markets too stable to reflect current circumstances accurately. Koleman Strumpf, a University of Kansas professor of business economics, also suggests that a bias effect took place during the US election: the crowd was unwilling to believe in an outcome in which Trump won, turning the prediction markets into "an echo chamber" where the same information circulated and ultimately led to a stagnant market.

Prediction markets can nonetheless yield better estimates of mean opinion across a population than opinion polls. One study found that for the five US presidential elections between 1988 and 2004, prediction markets gave a more accurate estimate of the voting result than 74% of the opinion polls studied. On the other hand, a randomized experiment from 2016 found that prediction markets were 12% less accurate than prediction polls, an alternative method for eliciting and statistically aggregating probability judgments from a crowd.

== Legality and regulation ==