This is the ninth and final post of a sequence on the Foundations of Probability. It contains the final section of the paper “Subjective Probability Does Not Exist.”


Finding no black swans in their neighborhoods, Europeans concluded that black swans do not exist. “All swans are white” became a universal truth, founded on nothing more than limited experience of the world. By even worse logic, uncertainty was legislated out of existence.


Section 7: Conclusions

As Kyburg (1978) states, the theory of subjective probability “is a snare and a delusion”. De Finetti is unique in recognizing how essential the rejection of ontic probability is to the construction of a subjective theory of probability. If the real world is so wild that every moment is unique, and past patterns of events are of no value in predicting future patterns, then ontic probability does not exist. It is only in this case that a subjective theory of probability can be constructed. But this theory boils down to a triviality. It says that when there are no external benchmarks for probability judgments, we can pick an arbitrary number as the probability of an uncertain event, and use this number in our risk calculations, in the context of choices over artificially constructed lotteries. This choice of an arbitrary number does not reflect any knowledge or belief, and does not actually provide any guide to action in the real world.

To make this argument more concrete, suppose a climate catastrophe makes our planet uninhabitable, and a spaceship is launched towards one of the hundred or so exoplanets identified as potentially hospitable to human life. This identification is based on current human knowledge, and our inability to assign probabilities of habitability is likewise a result of insufficient knowledge. There are no reference events to provide a basis for even rough probability guesstimates. This is the situation where De Finetti’s “probability does not exist” is applicable. However, in this situation, constructing subjective probabilities by making arbitrary choices over lotteries will not provide any guidance about which planet should be chosen as the target. Only more powerful instruments and theories, which create greater information about conditions on planets at great distances from us, can provide us with knowledge of such probabilities.

The question of whether or not ontic probabilities exist, and whether they are sufficiently stable that past patterns are useful in predicting future patterns, is a question about external reality. It cannot be answered by a priori considerations, by analysis of language, or by analysis of human cognitive capabilities. The existence of quantum probabilities shows us that probabilistic phenomena are part of the real world. A promising analysis of probability is given by Belnap (2007), who writes that “propensities can in fact be understood as objective single case causal probabilities”. There is very strong empirical evidence that some phenomena are probabilistic, not just at the quantum level, but at the macro level. For types of events where empirical evidence can be brought to bear on probability, considerations of subjective probability are irrelevant. Only well-founded beliefs, based on empirical evidence about real-world events, are relevant to the analysis of ontic probabilities. Massive amounts of confusion on the topic have been created by the epistemic fallacy, which denies the existence of ontic probability on the basis of our inability to observe and measure it.

Whether or not ontic probability exists in the quantum world has been the subject of ongoing dispute among physicists. Although the current consensus is on the side of probability, Einstein famously argued against it, saying that “God does not play dice with the universe”. Suppose, for example, that the binary sequence of rainfall is a pseudo-random sequence. For those who have the generating key, it is deterministic. For those who do not know the key, it appears random. In such a universe, both Einstein and Bohr could be right. Even though the universe may be deterministic, our cognitive and computational limits may make a probabilistic model the best one available to us. For the purposes of the present discussion, the Einstein-Bohr dispute does not matter. Even if there are hidden variables, knowledge of which would render the world deterministic, quantum probabilistic models provide an accurate match to observed phenomena, so that epistemic probabilities reflecting our state of knowledge are well defined.
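To make the pseudo-random illustration concrete, here is a minimal sketch (the function name and seed values are hypothetical illustrations, not part of the paper). With the generating key, the “rainfall” sequence is perfectly reproducible; without it, the output looks like fair coin flips:

```python
import random

def rainfall_sequence(key, days):
    """Deterministically generate a 0/1 'rainfall' sequence from a seed.

    Anyone holding `key` reproduces the sequence exactly (the deterministic,
    Einstein-side view); to an observer without the key, the output is
    statistically indistinguishable from fair coin flips (the Bohr-side view).
    """
    rng = random.Random(key)
    return [rng.randint(0, 1) for _ in range(days)]

# The same key always regenerates the same sequence.
assert rainfall_sequence(42, 30) == rainfall_sequence(42, 30)
print(rainfall_sequence(42, 10))
```

Our best model of such a sequence depends on what we know: deterministic with the key, probabilistic without it.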

Failure to understand the nature of probability has had very serious consequences in the real world. Many prominent economists have argued that the collective professional failure to foresee the Global Financial Crisis occurred because economic theories assume rational agents can form correct expectations about the future. Justin Fox provides details in his meticulously researched history of modern financial economics, The Myth of the Rational Market. Central ideas of Keynes, essential to understanding his analysis of the Great Depression, were rejected by the mainstream orthodoxy among economists because of their rejection of radical uncertainty. In their forthcoming book Radical Uncertainty, Kay and King emphasize how the forgotten distinction between risk and uncertainty creates a false sense of security.

This paper has the twin goals of explaining the strong attraction of the subjectivist position and the reasons why it continues to dominate, as well as exposing the fatal flaws in this position. The flaws at the heart of positivism have not been clearly understood by economists, with the result that most economists continue to uphold core positivist beliefs, as the survey of Hands (2009) shows. As we have shown in detail, acceptance of positivist ideas leads to an inability to formulate the concepts needed to expose the errors in the argument for the existence of subjective probability. The concepts of ontic and epistemic probability, and the difference between choices and preferences, are meaningless according to positivist ideas. This makes it extremely difficult to see the flaws in the arguments for the existence of subjective probability.


Anscombe, Francis J., and Robert J. Aumann. “A definition of subjective probability.” The annals of mathematical statistics 34.1 (1963): 199-205.

Ariely, Dan, George Loewenstein, and Drazen Prelec. ““Coherent arbitrariness”: Stable demand curves without stable preferences.” The Quarterly Journal of Economics 118.1 (2003): 73-106.

Ariely, Dan, and Michael I. Norton. “How actions create–not just reveal–preferences.” Trends in cognitive sciences 12.1 (2008): 13-16.

Belnap, Nuel. “Propensities and probabilities.” Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 38.3 (2007): 593-625.

Bhaskar, R. (2013). A realist theory of science. Routledge.

Christensen, David, ‘Clever Bookies and Coherent Beliefs.’ The Philosophical Review C, No. 2, (1991), 229-47.

Cooter, R. and Rappoport, P (1984). ‘Were the Ordinalists Wrong About Welfare Economics?’ Journal of Economic Literature, 22 (2) June, 1984, pp. 507-530.

Clark, M. P., & Westerberg, B. D. (2009). “How random is the toss of a coin?.” Canadian Medical Association journal, 181(12), E306–E308. doi:10.1503/cmaj.091733

Dawid, A. P. (2004). “Probability, causality and the empirical world: a Bayes–de Finetti–Popper–Borel synthesis.” Statistical Science, 19(1), 44-57.

de Elía, Ramón, and René Laprise (2005) “Diversity in interpretations of probability: implications for weather forecasting.” Monthly Weather Review 133.5: 1129-1143

De Finetti, Bruno (1937) “La prévision: ses lois logiques, ses sources subjectives.” Annales de l’institut Henri Poincaré, Vol. 7, No. 1.

De Finetti, B. (1974), Theory of Probability, Vol. 1. New York: John Wiley and Sons

Galavotti, Maria Carla (2019) “Pragmatism and the Birth of Subjective Probability.” European Journal of Pragmatism and American Philosophy 11.XI-1.

Ghirardato, P., & Marinacci, M. (2001). Risk, Ambiguity, and the Separation of Utility and Beliefs. Mathematics of Operations Research, 26(4), 864-890.

Hájek, Alan, “Interpretations of Probability”, The Stanford Encyclopedia of Philosophy (Winter 2012 Edition), Edward N. Zalta (ed.).

Hands, D. W. (2009). Philosophy of Economics, Uskali Mäki (ed.), Vol. 13 of D. Gabbay, P. Thagard and J. Woods (eds.), Handbook of the Philosophy of Science. Oxford: Elsevier.

Kay, John (2019). “Embrace Radical Uncertainty.” Blog post.

Kay, J. A., & King, M. A. (2020) Radical Uncertainty: Decision Making Beyond the Numbers. W.W. Norton

Keynes, J. M. (1921). A Treatise on Probability. MacMillan and Co. London

Knight, Frank H. (1921) Risk, Uncertainty and Profit. Hart, Schaffner and Marx Prize Essays.

Kyburg, H. (1978). “Subjective probability: Criticisms, reflections, and problems.” Journal of Philosophical Logic, 7(1), 157-180.

O’Neill, B. (2011) Exchangeability, correlation and Bayes’ Effect. International Statistical Review 77(2), pp. 241-250.

Ramsey, Frank P. “Truth and probability (1926).” The foundations of mathematics and other logical essays (1931): 156-198.

Savage, Leonard J. (1954). The Foundations of Statistics. New York: John Wiley, 188-190.

Suppe, Frederick (2000) “Understanding Scientific Theories: An Assessment of Developments, 1969-1998,” Philosophy of Science 67: S102-S115.

Taleb, Nassim Nicholas (2007). The Black Swan: The Impact of the Highly Improbable. New York: Random House.

Wong, S. (2006). Foundations of Paul Samuelson’s Revealed Preference Theory: A study by the method of rational reconstruction. Routledge.

Continued from previous post: Section 7, Differentiating between Choice and Preference, from my paper “Subjective Probability Does Not Exist”.


Section 8: The Nature of Probability (the name which must not be spoken)

Serious difficulties in understanding probability arise because of the Voldemort effect. Wittgenstein invented a theory of language according to which the sentences we use are just pictures of facts about the real world that we can observe. Logical positivism seized upon this idea to say that sentences about external reality which can never be verified or disproven by empirical or sensory evidence have no meaning. It is not just that ontic and epistemic probabilities do not exist; rather, we cannot even talk sensibly about these ideas. Positivists assert that sentences which use such words are ‘as meaningless as a cry of pain’. De Finetti considers mentions of ontic probability on a par with fairies, witches, and phlogiston, and aims to ‘root out nonsense’. Such passionate prohibitions of talk make it impossible to consider, conceive, conceptualize, or discuss the idea of ontic probabilities.

Theories of knowledge are also intimately involved in the development of personal subjective probabilities. The epistemic fallacy is strongly encouraged by the “Justified True Belief” (JTB) theory of knowledge. In our daily lives, we routinely make life-and-death decisions based on guesswork which would not come up to JTB standards. While crossing the street, I guess whether the driver intends to stop or to go speeding through the red light by looking at the pattern of acceleration or deceleration of the car. The intentions of the driver are forever unobservable to me, but it is these intentions which control the movement of the car. Similarly, we can only make guesses about ontic, single-case probabilities, and these guesses will never reach JTB status. The quest for certainty relegates all such knowledge to the dustbin, whereas decisions in our daily lives are based almost entirely on guesswork, not JTB knowledge.

Dealing with unobservable and unmeasurable probabilities, and granting the status of knowledge to unverifiable guesses, requires a theory of knowledge more tolerant of uncertainty about the truth. Pragmatic and instrumental theories provide good candidates for such alternatives. The strong connection between pragmatic theories of knowledge and subjective probability is highlighted by Galavotti (2019). This relationship is important, but too complex to be discussed in detail here. Most relevant to our present concerns is the replacement of knowledge as an internal mental state of information about external reality by an instrumental conception of knowledge. On this view, probability has no external referents, and can be interpreted as being whatever it needs to be in order to allow us to solve decision problems in the face of uncertainty.

Many puzzles about probability resolve upon recognizing that even though single-case ontic probabilities are unobservable and unmeasurable, we can still talk about them meaningfully. The issues under discussion can be illustrated in the context of weather forecasts. Our best available models for weather display the ‘butterfly effect’: the smallest perturbation in initial conditions leads to dramatically different outcomes. Thus, nanoscale quantum fluctuations can change the outcome from “rainfall” to “dry”. Since our best models for quantum events are probabilistic, it makes sense to think of rainfall on day t as a Bernoulli random variable R(t) which takes the value 1 with probability p(t) and 0 otherwise. Suppose we have a finite sequence of rainfall events R(1), R(2), …, R(T*-1), R(T*), R(T*+1), …, R(T). We have observations on all days up to the current day T*, and would like to have some idea about the probability of rainfall on the finite number of future days. Suppose also that the probability p(t) of rainfall is different and unique for each day, so that each rainfall event is a unique, single-case event. We only observe a sequence of 0’s and 1’s; the probabilities for each day are unobservable and unmeasurable. According to dominant positivist views, it is meaningless to talk about them.
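The setup above can be sketched in a few lines of code. This is an illustrative simulation under assumed values: the hidden daily probabilities p(t) are invented for the example, and only the 0/1 record would be visible to an observer:

```python
import random

def simulate_rainfall(daily_probs, key=0):
    """Generate the observable record R(1),...,R(T): on day t it rains (1)
    with the day's own hidden probability p(t), else it stays dry (0)."""
    rng = random.Random(key)
    return [1 if rng.random() < p else 0 for p in daily_probs]

# Invented unique single-case probabilities, one per day; never observable.
T = 365
hidden_probs = [random.Random(t).uniform(0.1, 0.6) for t in range(T)]
record = simulate_rainfall(hidden_probs)

# The observer sees only the 0/1 record, never the p(t) behind it.
print(record[:10], "...", sum(record), "rainy days out of", T)
```

The point of the sketch is that the generating probabilities exist inside the model, yet no finite record of 0’s and 1’s reveals them directly.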

We can distinguish between three different cases. If daily probabilities fluctuate sufficiently wildly, the past is of no use in predicting the future. This is the case of uncertainty: even though ontic probabilities exist, this knowledge is of no value to us. For practical purposes of decision making and forecasting, this case is the same as one in which there are no external objective probabilities. The other extreme is when the probabilities follow a predictable and deterministic pattern: during the rainy season, daily rainfall is a virtual certainty, while it never rains in the dry season. Here again probability does not exist, because rainfall is deterministic. Probability is useful in the intermediate case, where daily rainfall is not predictable, but the pattern of rainfall is stable enough that past patterns provide a guide to future ones. To decide whether or not “probabilities exist”, we need to examine the rainfall record and see whether past patterns help in predicting future patterns. Here, as with quantum probabilities, the ontological status of probability can be determined by deep study of the real world. It cannot be determined by playing language games, or by introspection and choices over lotteries.

As we have seen, the limiting frequency interpretation of probability makes no sense, and does not apply to the present context either. However, finite frequencies now make sense when understood properly. The observed ratio of rainfall over the past year is not an estimate of the probability for any single day, since probabilities fluctuate daily. However, when probabilities change slowly, the observed frequency provides a good guess at the probability of rainfall in the near future. Similarly, there is the vexing problem of the ‘reference set’. Which set of days should we consider as being sufficiently well matched to the conditions of tomorrow that they can be used to compute the probability of rainfall tomorrow? The ideal reference set would be the collection of days with probability identical to that of tomorrow. This does not exist, because daily probabilities are unique. Furthermore, even if it did exist, we would never be able to find it, because each daily probability is unobservable and unmeasurable. In practical terms, we can look for days which are sufficiently well matched to tomorrow that their probabilities of rainfall are suitably close. Since no exact match is possible and we can only use rough guesswork, it is no surprise that different ways of choosing the reference set are available, and that they lead to different calculations of the probability. We might use rainfall frequencies from the current season, from similar seasons in the past, and many other possible reference sets. The unobservable and unmeasurable nature of probability means that our knowledge of reference sets will always be a rough guess, and will never advance to the status of JTB knowledge. Thus ontic probability aligns with pragmatic theories of knowledge.
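As a rough illustration of the finite-frequency idea, the sketch below (with invented drift values) uses the last 60 days as a crude reference set. When p(t) drifts slowly, this windowed frequency tracks the hidden probability reasonably well; nothing here claims JTB status for the estimate:

```python
import random

# Hypothetical slowly drifting daily rain probability p(t): 0.30 -> ~0.52.
T = 730
true_probs = [0.3 + 0.0003 * t for t in range(T)]

rng = random.Random(7)
record = [1 if rng.random() < p else 0 for p in true_probs]

def window_estimate(record, t, width=60):
    """Finite-frequency guess at p(t): the share of rainy days among the
    previous `width` days -- a crude reference set of recent, similar days."""
    window = record[max(0, t - width):t]
    return sum(window) / len(window)

# Under slow drift, the recent-window frequency roughly tracks hidden p(t).
t = 400
print(round(window_estimate(record, t), 2), "vs true", round(true_probs[t], 2))
```

Choosing a different window width, or a different reference set of “similar days”, would give a somewhat different guess, which is exactly the reference-set problem described above.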



Section 5: Differentiating Between Choice and Preference (from “Subjective Probability Does Not Exist”)

What just happened? As in all magic, appearances are deceiving; what actually happened was very different from what was perceived. Ansa was right all along. Her choices were arbitrary, and did not reveal any knowledge hidden in her heart. She did not acquire knowledge about Tokyo weather by introspection. In olden times, a European visitor to Cathay could tell tall tales about fire-breathing dragons and cities of gold on his return, because no one could compare the stories to an external reality. Similarly, if there is no objective probability, then we can believe anything at all about probability, without any consequences. If Ansa arrives at 86% while Asma arrives at 30% for the same event, there is no way to decide who is right or wrong. We cannot cross-check these numbers against Tokyo weather a week from now: neither rainfall nor the lack of it will confirm or deny either estimate. Because of climate change, every day is a unique, non-replicable weather event, and it is impossible to measure the probability of rain. In order to critique these results, it is worth encapsulating the previous section in the form of a formal theorem.

The Fundamental Theorem of Subjective Probability: Suppose rational agent A has to make decisions which depend on the occurrence, or failure to occur, of event G. Then N choices will approximately reveal the subjective probability PA(G) (to within 2^(-N)) that agent A ascribes to G. Rational agent A must make decisions involving the event G as if she believes the event will occur with probability PA(G).
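The lottery-comparison procedure behind the theorem can be sketched as a bisection. The code below is an illustrative reconstruction, not the paper's formal construction: `prefers_urn(q)` stands in for the agent's observed choice between the event-lottery LG and an urn lottery with known win probability q, and each choice halves the interval bracketing the “revealed” probability:

```python
def elicit_probability(prefers_urn, n_choices=7):
    """Bisection sketch of the elicitation: each choice between the
    event-lottery LG and an urn lottery with known win probability q
    halves the interval bracketing the agent's revealed probability of G.

    prefers_urn(q) -> True if the agent picks the urn lottery over LG.
    """
    lo, hi = 0.0, 1.0
    for _ in range(n_choices):
        q = (lo + hi) / 2
        if prefers_urn(q):
            hi = q  # choosing the urn suggests the agent's p(G) is below q
        else:
            lo = q  # choosing LG suggests the agent's p(G) is above q
    return (lo + hi) / 2

# An agent who (somehow) acts on an inner probability of 0.86:
revealed = elicit_probability(lambda q: q > 0.86)
print(round(revealed, 3))  # prints 0.863, within 2**-7 of 0.86
```

After N = 7 choices the bracketing interval has width 2^(-7), which is where the “to within 1%” figure comes from. The sketch also makes the paper's criticism visible: the procedure returns a number no matter how the agent chooses, even if the choices are arbitrary.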

The fundamental theorem creates a subjective probability out of thin air, and banishes uncertainty from the world by converting it to risk. At the heart of this magic is a confusion between preference and choice, created by positivism, which refuses to distinguish between the two. Preferences, being an unobservable internal state of the heart, can only acquire meaning when expressed in the form of observable choices. This is another instance of the epistemic fallacy, where we deny the existence of preferences within hearts because we cannot observe them. But here the problem is sharper, because we can actually observe our own internal states of the heart and mind. A repentant positivist used the apt phrase “feigning anesthesia” for this problem created by positivism:

Positivism Leads to Feigning Anesthesia: Our own internal sensations, states of knowledge, and preferences are directly observable to us, but forever remote, inaccessible and unobservable to external observers. Denying the relevance of unobservables requires us to feign anesthesia by denying our knowledge of our own internal sensations.

Discussion: Our choices can ‘reveal’ our preferences to others, but not to our own selves, because we directly observe these preferences and consult them to make choices. Similarly, external observers can infer our states of knowledge and belief from our actions, but we do not need to do this, because we have direct access to our own knowledge and beliefs. When Ansa says she does not know the probability of rainfall, this is the last word on the subject, because Ansa directly observes her own state of knowledge. The last step of magic requires us to persuade Ansa to feign anesthesia, by requiring her to forfeit her intuitive conceptions of knowledge and replace them with positivist concepts based on observables. It is this substitution which creates the possibility of defining an internal mental state of knowledge of Ansa in terms of perceptions of her actions, and their interpretation by an external observer. The state of knowledge “revealed” by choices is very different from Ansa's actual state of lack of knowledge. Knowledge is the basis for preference, while choices must be made regardless of whether or not knowledge exists. Because confusion between choice and preference is at the heart of the fundamental theorem, we clarify the distinction with several examples.

Samuelson’s Mistake: Wong (2006) explains how Samuelson set out to make economic theory scientific by replacing the unobservable preferences at the heart of utility theory by the observable choices which reflect these preferences. But the program fails because any regularities in observed choices must necessarily be due to the underlying preferences. To see this, consider a Buddhist who has successfully eliminated all desires for worldly goods from his heart. His choices over various types of consumption bundles will be completely arbitrary, not reflecting any consistency or coherence. We will be unable to compute his utility function, because of conflicting and contradictory information about his preferences conveyed by his arbitrary choices. When we assume that choices satisfy transitivity, we are actually assuming the existence of unobservable utilities. In fact, as was proven mathematically, axiomatization of choices was exactly equivalent to a set of assumptions about unobservable preferences, and nothing was gained by moving from preference to choice.

Those of us who have been conditioned by positivism into feigning anesthesia require further explanation of the difference between choices and preferences. The key to understanding this difference is to note that when preferences exist, they govern choices. However, when preferences do not exist, choices are made arbitrarily; an arbitrary choice does not reflect a preference. For example, consider a choice between LU(1) and LU(99). For a lottery paying $5 on the draw of a black ball, nearly everyone would prefer an urn with 99 black balls over an urn with just one black ball. Thus the choice of LU(99) over LU(1) is a preference based on knowledge about the number of black balls. However, the choice between LU(50) and LG is made arbitrarily, because lack of knowledge of the probability of rainfall in Tokyo makes it impossible to have a preference for one of the two lotteries. This difference between preference and choice is brought out clearly in the Ellsberg paradox. When there is genuine preference based on knowledge of probabilities, it is impossible to strictly prefer LU(50) to both LG and LG*: one of the two probabilities p(G) or p(G*) must be greater than or equal to one half. However, in the absence of knowledge of the probability, risk-averse agents will strictly prefer LU(50) to both LG and LG*. This phenomenon has been widely observed in human behavior, and has been labeled ambiguity aversion. See, for example, Ghirardato & Marinacci (2001).
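The logical point about Ellsberg choices can be checked mechanically. In this minimal sketch (an expected-value calculation, with the $5 prize taken from the lotteries above), no known value of p(G) can make LU(50) strictly better than both LG and LG*:

```python
def expected_value(prob_win, prize=5.0):
    """Expected payoff of a lottery paying `prize` with probability prob_win."""
    return prob_win * prize

# Since p(G) + p(G*) = 1, max(p(G), p(G*)) >= 1/2. So whatever the true
# probability of G, at least one of LG, LG* matches or beats LU(50).
for p_G in [0.0, 0.2, 0.5, 0.8, 1.0]:
    best_known = max(expected_value(p_G), expected_value(1 - p_G))
    assert best_known >= expected_value(0.5)
print("LU(50) is never strictly better than both LG and LG*")
```

Strict preference for LU(50) over both is therefore only intelligible as aversion to the unknown probability itself, not as a preference grounded in probability knowledge.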

Choosing over lotteries for events of unknown probability is similar to choosing between unknown goods: preferences do not exist, but choices must be made. Ariely et al. (2003), in “Coherent Arbitrariness”, investigate consumers pricing an unusual and unfamiliar good whose value is unknown to the consumer, where the price is known to be bounded between $1 and $100. In a sequence of choices between goods and cash, they show that the initial choices are arbitrary, and that later choices conform to the dollar valuation for the unfamiliar good chosen arbitrarily in the initial sequence of choices. They demonstrate the arbitrariness by using the phenomenon of “anchoring” discovered by experimental psychologists. In decisions made under complete ignorance, the mention of any random number, completely unrelated to the decision at hand, serves to anchor decisions. Subjects swimming in a sea of uncertainty grab at these numbers and use them for their decisions. Ariely et al. (2003) asked experimental subjects (Harvard Business School students) to compare the value of some unfamiliar but real objects with the last two digits of their social security numbers, interpreted as a dollar price. The experiments show that students with high two-digit numbers consistently valued the unknown objects at higher prices, because their valuations were anchored by this random choice of numbers.

Translating this result to our context, suppose that Ansa is asked to write down the last two digits ‘xy’ of her mobile phone number before the start of the experiment. Then, as a first choice, she is asked to choose between lottery LU(xy) and LG. According to the anchoring phenomenon, agents with high two-digit numbers will tend to “reveal” higher subjective probabilities for G, and agents with low two-digit numbers will tend to ascribe lower probabilities to G. This is different from the behavior of agents who actually possess knowledge of the probability of the uncertain event G. Knowledgeable agents will not be influenced by randomly chosen external anchors, and will make decisions in conformity with their pre-existing knowledge. For example, suppose that I know that it rained on 105 days out of 365 last year in Tokyo. Then I will use 29% ≈ 105/365 as a guess at the probability of rainfall, and make choices over lotteries accordingly. In this situation, my knowledge of the past rainfall record creates a preference which guides my choices, and will not be influenced by arbitrary anchors.

One consequence of arbitrary choice is that choices create preferences, rather than the other way around. Ariely and Norton (2008) describe this in “How actions create–not just reveal–preferences.” Similarly, for rational agents without knowledge of the probability, the first seven choices will be made arbitrarily, but later choices will conform to the probability thereby revealed. However, it is a mistake to think that this sequence of arbitrary choices creates an epistemic probability and a preference. This is the mistake at the heart of subjective probability. The simplest way to differentiate between arbitrary choices and preference-based choices is simply to ask the agent. Preferences, beliefs, and knowledge are directly observed internal states. If we do not feign anesthesia, then we can differentiate between what we know and what we do not know. Tokyo-dweller Kazuo will know that the probability of rain next week is low, because this is the dry season. Epistemic probability comes from knowledge about the real world, and leads to preferences over lotteries. This knowledge remains stable over a wide variety of changing decision circumstances. Arbitrary choices, however, do not lead to knowledge. If Ansa is planning an open-ground picnic in Tokyo next week, she will not rely on her revealed probability of 86% to decide whether or not rain-proof tents should be used. The probability revealed by her arbitrary choices over lotteries is not useful in the context of real-world decisions over uncertain events.

To be continued – for full paper see Subjective Probability Does Not Exist


We are delighted to inform you that the Discussion Forum for the WEA Conference Going Digital: What is the Future of Business and Labour? has been extended to 16th December, 2019.

Join us to discuss recent contributions to the understanding of digital economy and its consequences for business trends and labour challenges!

All papers are available HERE. You can participate in the Discussion Forum by commenting on specific papers, or contributing to a general discussion on the Complexities in Economics. In the spirit of debate, authors are asked to respond to the comments on their papers as well as on related general remarks.

Comments are moderated prior to posting to ensure no libellous or hateful language. 


Keynote Papers

  1. Grazia Ietto-Gillies, “Digitalization and the transnational corporations. Rethinking economics”
  2. Peter Söderbaum, “Ecological Economics in relation to a digital world”

Selected Contributions

  1. Bin Li, “How Could The Cognitive Revolution Happen To Economics? An Introduction to the Algorithm Framework Theory”
  2. Marc Jacquinet, “Artificial intelligence, big data, platform capitalism and public policy: An evolutionary perspective”
  3. Guilherme Nunes Pires, “Gig economy, austerity and “uberization” of labor in Brazil (2014 – 2019)”
  4. Alessandro Zoino, “Predicting Stock Returns: Random Walk or Herding Behaviour?”


A reminder for those who wish to obtain a conference participation e-certificate: you can still do so by completing your official registration here and paying $10.

Other than that, registration is not required for participation in this conference. You can read and comment on the papers without it.


The Association’s activities center on the development, promotion and diffusion of economic research and knowledge and on illuminating their social character. The WEA makes full use of the digital technologies in the pursuit of these commitments.

We look forward to having you participate in the Discussion Forum.

Maria Alejandra Madi, Conference Leader and Chair of the WEA Conferences Program

Malgorzata Dereniowska, Co-Leader and a member of the WEA Conferences Planning and Organization Committee

Continues from the previous post (Fallacies of Frequentism and Subjectivism); this is Section 4: Forcing People to Believe, of my paper “Subjective Probability Does Not Exist”.


We now have the conceptual framework required to reveal the amazing secrets of one of the most spectacular magic tricks ever performed in human history, one which continues to deceive millions. We will show how to force a probability belief upon an unwilling agent. Agent Ansa claims not to know the probability that it will rain in Tokyo one week from today. We will not only prove her wrong; we will actually produce the probability belief that she has, to within 1% accuracy.

In order to force a belief upon her, we must start by undermining her self-confidence. We ask her if she knows anything about any probability. She claims to know the probabilities of coin flips, dice, and cards. We ask if she has personal experience with flipping coins over a long period. When she acknowledges her ignorance, we can browbeat her by citing Clark and Westerberg (2009), who show that coin flips can be manipulated to produce a bias towards heads. Her confidence should be further deflated upon learning that experimental evidence shows the coin will land on its edge about 1 in 6000 times. As the opening shot of De Finetti’s book (“Probability does not exist”) shows, the first step of the magic happens when Ansa relinquishes her intuitive conceptions of probability and cedes to our authority to define it for her. This step is made much easier by an empiricist mindset which creates doubts about the existence of the unobservable and unmeasurable, invoking the widely believed epistemic fallacy.

At the second step, we cast around for suitable alternatives to intuitive probability. We ask Ansa if she remembers how probability was defined in the textbook which told her that coin flips lead to a 50% probability of heads. She vaguely recalls the limiting frequency definition, which she memorized to reproduce on exams, even though it never made much sense to her. We reassure her that her doubts about the legitimacy of this maneuver are justified. There is no way to observe a limiting frequency in the real world, and no way to make the definition applicable to the probability of a single coin flip. One of the leading authorities on probability, William Feller, explained the problem as follows: “There is no place in our system for speculations concerning the probability that the sun will rise tomorrow. Before speaking of it we should have to agree on an (idealized) model which would presumably run along the lines “out of infinitely many worlds one is selected at random…” Little imagination is required to construct such a model, but it appears both uninteresting and meaningless.” Feller does not seem to realize that sunrise is not special in this respect: for any real-world event, infinite replications can only take place in an imaginary world. Once probability is defined as a limiting frequency, it is easily seen that this definition has no implications – none whatsoever – for any finite sequence of trials of any real-world event.

It requires only a simple sleight of hand to convert this rejection of frequency theory into a rejection of ontic probability. To Ansa, we can just say that all efforts by leading experts to define probability as a characteristic of real-world events have failed. For example, in the eminently practical context of weather forecasting, de Elía and Laprise (2005) show that there is no agreement on how to define the probability of rain tomorrow. It is worth noting that the sleight-of-hand consists of assuming that if frequency theory is wrong, then probability cannot be defined; the possibility of other coherent definitions is ignored.

It is only after we have knocked out the intuitive and the external objective conceptions of probability that room is created to implant the notion of subjective probability. For this third step, we have to encourage Ansa to emulate the audacity of Kant, who dared to think that space and time are characteristics of our minds which we project onto external reality. We encourage Ansa to think that probability is really a projection of our minds onto external reality. When a coin is flipped, we do not know the initial conditions and forces acting on the coin. Thus, a statement about probability is really a statement about our internal states of (lack of) knowledge about the real world. Once Ansa accepts this radical (but reasonable) idea, we are well on our way towards the goal of implanting a belief into her mind.

Let G be the event of rainfall in Tokyo one week from today. We now propose to calculate the personal probability she attaches to this event: PA(G). This probability is hidden within the mind of Ansa, unknown even to her. The easiest way to extract it is via comparison with some benchmark probabilities. But we have just destroyed probabilities, and so we need to reproduce them in disguise, as outcomes of rational behavior on the part of Ansa. We can accomplish this as follows.

We ask Ansa to contemplate an Urn and Ball setup. 100 balls will be placed into an urn. N of the balls will be black and 100-N will be white. We will shuffle the balls vigorously to create an even and uniformly distributed mixture of black and white balls within the urn. Next, we will blindfold a picker, and ask him to reach deep into the urn and pick out one of the balls. We will say the event U(N) has occurred if a black ball is drawn, while U*(N) denotes the complementary event of drawing a white ball. Whether or not ontic single-case probabilities are well defined is subject to controversy and discussion. However, the knowledge of N, the number of black balls, does have a logical implication for our actions and decisions. In particular, define LU(N) to be a lottery which pays $5 if event U(N) occurs, that is, if a black ball is drawn, and $0 otherwise. It seems a matter of pure logic that any process which treats all balls in the same way will lead to increasing occurrences of U(N) as N increases. We must persuade Ansa that rational agents would have preferences over these lotteries which are monotonic in N:

Monotonic Preferences over Benchmark Lotteries: All rational agents prefer LU(N) to LU(M) if N is greater than M.
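The monotonicity claim can be checked numerically. A minimal sketch (function names are my own, not from the paper), simulating blindfolded draws from a well-mixed urn:

```python
import random

def draw_black(n_black, n_total=100, trials=100_000, rng=None):
    """Estimate the frequency of drawing a black ball from an urn
    with n_black black balls out of n_total, via repeated blind draws."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    wins = sum(rng.randrange(n_total) < n_black for _ in range(trials))
    return wins / trials

# Expected payoff of lottery LU(N) is $5 times the chance of drawing black,
# which rises with N -- the logical basis for monotonic preferences.
for n in (25, 50, 85):
    freq = draw_black(n)
    print(f"N={n}: empirical P(black) = {freq:.3f}, lottery value = ${5*freq:.2f}")
```

Any drawing process that treats all balls symmetrically produces black with frequency N/100, so the simulated lottery values increase in N.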

For our arguments, we do not need consensus over rational agents; it is enough that Ansa has monotonic preferences over these lotteries. Having set up our benchmarks, we are now in a position to measure the subjective probability that Ansa assigns to G. G is the event of rainfall in Tokyo one week from today, to be ascertained by the official pronouncements of the Japanese Meteorological Agency. The lottery LG pays $5 if it rains in Tokyo, and $0 if it does not. We will ask Ansa to make choices which compare LG with the benchmark lotteries, in order to calculate the personal probability that Ansa assigns to the event G.

Seven Steps to Revealed Probability: At the first step, ask Ansa to choose between LU(50) and LG. Her choice “reveals” one of two possibilities: (i) PA(G) ≥ 50% or (ii) PA(G) ≤ 50%. Thereafter, at each step, ask her to choose between LG and the benchmark lottery at the midpoint of the range of revealed probabilities. Each step halves the interval of possible beliefs. Since 2^7 = 128, in seven steps we can determine Ansa’s belief to within ±1%.
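The seven-step procedure is simply binary search over the benchmark lotteries. A minimal sketch (names are my own invention, not from the paper):

```python
def elicit_probability(prefers_G_over, lo=0.0, hi=1.0, steps=7):
    """Binary search for the revealed probability P_A(G).

    prefers_G_over(n): True if the agent picks L_G over the benchmark
    lottery LU(n), whose urn contains n% black balls.
    """
    for _ in range(steps):
        mid = (lo + hi) / 2
        if prefers_G_over(mid * 100):
            lo = mid   # choosing L_G reveals P_A(G) >= mid
        else:
            hi = mid   # choosing LU reveals P_A(G) <= mid
    return lo, hi

# A hypothetical agent whose hidden belief is 86%:
lo, hi = elicit_probability(lambda n: 86 >= n)
print(f"revealed belief lies in [{lo:.4f}, {hi:.4f}]")  # width 1/128 < 1%
```

After seven halvings the interval has width 1/128, so the belief is pinned down to within ±1%, exactly as the text claims.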

Suppose that at the end of the seven steps, we find that Ansa chose LG over LU(85) and she chose LU(87) over LG. We triumphantly pronounce that Ansa has revealed a personal probability of about 86% for rainfall in Tokyo. Just like the magician pulls a rabbit out of a hat, we reached into the mind of Ansa, and pulled out a probability that nobody knew was there! Suppose that Ansa resists this conclusion. She complains that her choices were arbitrary. She does not have any beliefs or knowledge about rainfall in Tokyo. We have a powerful counterargument to this claim. We claim that rationality commits her to use this “revealed” 86% probability for all subsequent decisions over the lotteries LG and LU(N). This can easily be proven.

Coherence, Commitment, and Rationality: Any choice of Ansa which conflicts with a prior choice is irrational.

Explanation: Suppose that Ansa chose LG over LU(85), revealing that PA(G) ≥ 85%. Next we offer her a choice between LU(60) and LG. She could now choose LU(60), revealing that PA(G) ≤ 60%, in conflict with her previous ‘revelation’. This would show that her choices were arbitrary, not aligned with any prior belief. However, inconsistency (or incoherence) is irrational. Consider the effect of the inconsistent choice. Ansa ends up with LG from the first choice and LU(60) from the second choice. Suppose she reverses both choices to make them consistent. Then she would have LU(85) from the first choice and LG from the second. The pair of lotteries (LU(85), LG) dominates the pair (LG, LU(60)), because LU(85) is better than LU(60). Rationality commits Ansa to make choices consistent with the probabilities she revealed in the seven steps.
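The dominance comparison can be made concrete with expected payoffs. A small sketch (all names hypothetical), treating the value Ansa attaches to LG as an unknown that cancels out of the comparison:

```python
def lottery_value_LU(n):
    # Objective expected payoff of the benchmark lottery LU(n): $5 * n/100.
    return 5 * n / 100

def pair_value(v_LG, lu_n):
    # Total value of holding L_G together with LU(n), for ANY value v_LG
    # the agent attaches to L_G.
    return v_LG + lottery_value_LU(lu_n)

# Whatever L_G is worth to Ansa, the consistent pair beats the inconsistent one,
# because both pairs contain L_G and LU(85) is worth more than LU(60).
for v_LG in (0.0, 2.5, 5.0):
    inconsistent = pair_value(v_LG, 60)   # L_G + LU(60)
    consistent = pair_value(v_LG, 85)     # LU(85) + L_G
    assert consistent > inconsistent
print("dominance holds for every value of L_G")
```

The point of the argument is precisely that the dominance does not depend on what LG is worth: the inconsistent choices leave $1.25 of benchmark-lottery value on the table.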

We have already re-defined the meaning of probability for Ansa. To clinch the argument, we also need to re-define the meaning of “knowledge”. We inform her that internal mental states are in constant flux, and she should ignore her “feeling” that she does not “know” about the probability of rainfall. The correct understanding of knowledge is that it is a guide to action. It is obvious from her choices that the revealed probability of 86% guides her choices over lotteries. On this basis, we may conclude that this number represents her knowledge about the probabilities of rainfall in Tokyo. Reluctantly, Ansa concedes to having “knowledge” about Tokyo weather, without her own awareness of this knowledge. The seven choices have served to reveal this knowledge to both external observers and to herself.

Applause – Bows – Curtains

This concludes Section 4: Forcing People to Believe, of my paper on Subjective Probability Does Not Exist. Next Post on Section 5: Differentiating Between Choice and Preference. Previous post  was Fallacies of Frequentism and Subjectivism.


Continues from previous post

Fallacies of Frequentism and Subjectivism – The limiting frequency is never observable. The only way to prove it exists is to assume single case ontic probabilities exist. Thus, it leads to a circular definition for probability. The alternative of subjectivist probability is based on the epistemic fallacy: it assumes that if we cannot know the probability of an event, then the probability does not exist. These two points are explained in this post, which is PART 2 of Section 3: Blinders of Empiricism, from the paper Subjective Probability Does Not Exist.


Proposition 4: Defining the ontic probability of an event G in terms of a limiting frequency is a non-sequitur.

Discussion: Empiricist philosophy is based on a plausible intuition. We have no access to the unobservable thing-in-itself, except by our observations. The ontological status of unobservable theoretical entities can never be determined conclusively. We posit theoretical entities to organize and explain patterns we see in the observations. Therefore, it should be possible to understand the unobservable entities by replacing them with the observable patterns that they were created to describe. This effort to replace unobservables by observable implications was given additional impetus by the linguistic turn in philosophy, which suggested that ambiguities in our language are the source of major philosophical confusions. We come to believe in ghosts in the machine when we use the unobservable “force of gravity” to describe the observable elliptical orbits of planets. The internal preferences of the heart are best understood by replacing them with the observable choices that they lead to. Some of the finest minds of the twentieth century put their best efforts into carrying out this program of achieving clarity in language by replacing unobservables by observable equivalents. The failure of this program has been acknowledged by its most ardent defenders. Frequency theory is just one of the many dramatic failures of the positivist mission of purifying language.

Frequency theory involves an obvious contradiction. We can never carry out an infinite sequence of trials. If the goal is to define probability via an observable, this method obviously fails. The only way to show that the limiting frequency exists is via theorems of probability theory which assume existence of ontic, single case probabilities. Thus, the limiting frequency definition of probability is circular. It assumes existence of ontic probabilities in order to define what they are. All attempts to remove this defect, by using some finite set of trials as an approximation to the limiting frequency, fail as definitions of observable probability; see Section 3.4 of Hajek (2012). Given that ontic probability exists, a finite sequence of trials can only give us imprecise information about the value of p(G). It is not possible to establish existence of ontic probabilities by any sequence of finite observations.

Empiricists confronted with an unobservable and unmeasurable ontic probability faced a deep dilemma. On the one hand, an abundance of real-world applications showed the widespread applicability and importance of the probability concept. On the other hand, as clarified in Propositions 3 and 4, there is no meaningful way to reduce the probability of an event to any observable counterpart in external reality. Strict adherence to empiricism calls for abandoning use and mention of this concept. The standard approach taken to resolve this problem has the appearance of retaining both empiricism and probability, but actually abandons both of these concepts. Defining probability as a limiting frequency abandons empiricism because the limiting frequency is conceptually impossible to observe. It also abandons probability, because the limiting frequency imposes no conditions on how any finite sequence of events behaves. Hajek (2012, Section 3.4) discusses these unresolved difficulties with frequency theory in detail. In recognition of these difficulties with defining probability, Bertrand Russell said: “Probability is the most important concept in modern science, especially as nobody has the slightest notion what it means.”

Kant defined the Enlightenment as the process of breaking the chains of conventional thought, and daring to think for ourselves. Analyzing the relationships between our minds and external reality, he came to the breathtakingly bold conclusion that space and time do not exist in external reality. These are just projections of our mind which organize the complex stream of sensory data input. A natural resolution to the conflict between empiricism and the obvious relevance and importance of ontic single case probability would have been to reject empiricism, and accept realism. Instead, the extreme positivist De-Finetti retained empiricism and courageously followed through on the implications of rejecting ontic probability, creating entirely new foundations for probability theory. Following his illustrious predecessor Kant, he argued that probability does not exist in external reality. Rather, it is a projection of our minds, which serves to organize diverse observations into a coherent pattern. He proceeded to construct a theory of revealed probability based upon a theory of rational behavior under uncertainty. Just as Kant’s theories about space and time were rejected due to Einsteinian discoveries of the counter-intuitive nature of space and time, so the discovery of quantum probabilities creates serious problems for De-Finetti’s rejection of ontic probabilities. However, the fundamental problem with rejection of ontic probability lies elsewhere:

Proposition 5: De-Finetti’s rejection of ontic probability is an example of the ‘epistemic fallacy’: our (epistemic) inability to observe or measure the ontic single case probability does not entail the (ontological) conclusion of non-existence of this probability.

Roy Bhaskar, who coined the term, has also described the widespread prevalence of the epistemic fallacy. At the time when the concept of light wave frequencies and instrumentation to detect them did not exist, ultraviolet and infra-red still existed, even though these colors were beyond the range of our cognitive and observational capabilities. Limitations of our sensory and cognitive capabilities do not constrain external reality. Karl Popper correctly argued that scientific theories can never be proven. In a similar way, posited real entities which serve to explain and organize observable events may forever remain outside the reach of our observations, or of conclusive proof by indirect methods. The idea that scientific theories can safely ignore real objects which are unobservable things-in-themselves is wrong. Bhaskar (2013) provides a persuasive realist philosophy of science (popularly known as Critical Realism), which provides a coherent account of how unobservable real objects and effects enter scientific theories. Even when real entities and effects are not measurable and observable, meaningful debate based on logic and indirect empirical evidence can be carried out about their existence or non-existence. The debate between Einstein and Bohr on whether or not ‘God plays dice’ (about the existence of quantum probabilities) is just one of numerous examples of this.

A full-fledged theory of ontic single case probabilities does not currently exist; see Section 3.4 of Hajek (2012) for a discussion of the debates regarding propensity theory, the best available account. However, our arguments against subjective probabilities do not require existence of ontic probabilities. As we will see in the next section, it is enough that there should exist an external benchmark probability (even a subjective and personal one), against which comparisons of subjective probability are possible.

In the next section, we will replicate the argument of De-Finetti in an extremely simple and transparent context. The subjectivist argument for the existence of revealed probabilities is valid. However, the interpretation of the theorem as showing the existence of “knowledge” requires the identification of revealed probabilities with epistemic probabilities. This identification is wrong, as we will clarify by example.

This continues from the previous post, Risk Versus Uncertainty.

In our daily lives, we routinely look past surface appearances to arrive at deeper truths hidden beneath the surface. Science involves looking at falling apples and deducing the existence of gravity. However, empiricism prohibits us from going beneath the surface, and restricts us to the world of appearances. This handicaps our thoughts and theories, and is especially damaging in terms of understanding probability.


Section 3: The Blinders of Empiricism (part 1)

The Arabs have many words for sand, and the Eskimos have many words for snow. According to the Sapir-Whorf hypothesis, lack of vocabulary can prevent us from seeing the differences in sand and snow which can be seen by those who have the words to express the difference. There is no doubt that linguistic confusions form an important reason for a century of persistence of an erroneous argument which informs us that we know – despite our feelings to the contrary – the probability of every uncertain event. The impoverished vocabulary which prevents us from seeing the flaws in the argument is due to logical positivism, which denies meaning to several concepts essential to understanding probability. We have now developed the terminology required to explain the nature of the subtle and complex mistakes which have been made in the Ramsey – De Finetti – Savage argument which establishes the existence of subjective probabilities. We use this terminology to provide the philosophical foundations which underlie their argument. In the next section, we replicate their argument in a simplified model for subjective probability and explain why it does not prove what it claims to prove.

Proposition 1: Empiricism prohibits us from talking about single case ontic probability.

Discussion: Intuitively, even those who deny its existence understand the meaning of the concept of ontic probability for a coin flip. However, for a single coin flip, there are no observations which can provide any empirical evidence in favor of, or against, the modal idea that even though the outcome of “tails” occurred, the other outcome of “heads” was possible. A strong empiricist tradition, starting from Hume, prohibits us from talking about such things. The statement that the probability of heads in a single coin flip is 50% is not a “fact” about the real world which can be observed. Hume would have us burn this statement. Kant would call probability a “thing-in-itself” and tell us that it is futile to think about it. Wittgenstein admonishes us to not speak about the non-factual: “Whereof one cannot speak, thereof one must be silent.” It is significant that one of the founders of subjective probability, Frank Ramsey, was a close friend of Wittgenstein, and re-expressed this aphorism as: “What we can’t say we can’t say, and we can’t whistle it either.” About the other principal architect of subjective probability, Bruno de Finetti, Dawid (2004) writes that “de Finetti’s philosophical approach might be described as ‘Machian’ or ‘extreme positivist’. It ascribes meaning to only those events and quantities which are observable.”

Wrong ideas of dead philosophers which prohibited talk about the obvious intuitive meaning of probability have resulted in a century of confusion about the meaning of probability. The most important of these confusions is the dominant conception of probability as a limiting frequency.

Proposition 2: The ontic probability of an event p(G) can only be “observed” in external reality as the limiting frequency of an infinite sequence of repetitions.

Explanation: Many kinds of theoretical calculations can lead to computations of precise probabilities for events. Roulette, dice, and quantum events have probabilities which can be computed via a theoretical model which generates the probabilities of all possible outcomes. All such calculations must necessarily be based on a theoretical model we have for different possible outcomes. The “observation” in Proposition 2 does not refer to a theoretical, model-based calculation, but rather to some aspect of external reality which can be measured to equal the theoretical probability p. However, in external reality only one of many possible outcomes can be observed. For example, if G is the event of rainfall, some weather forecasting models lead to computed values of p(G), but in external reality we can only observe 1 or 0 – whether the rainfall occurs or not. The probability number itself cannot be observed in any single trial. We might try to extend the definition to a finite sequence of repeated trials. However, probability theory itself informs us that on any finite sequence of Bernoulli trials, the probability of an exact match between the observed frequency of occurrence of G and the theoretical probability p(G) is low, and converges to zero as the number of trials increases to infinity. The number p only reveals itself in external reality in the limiting frequency, as the number of trials goes to infinity.

It is worth emphasizing this result: The only possible way to measure the ontic probability p(G) in external reality is as a limiting frequency of an infinite sequence of trials. Lord Kelvin expressed dominant views about science and measurement in his dictum that: “when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind”. The limiting frequency is the only possible objective and scientific definition of probability, according to these positivistic conceptions of science. This accounts for the dominance and popularity of the limiting frequency definition, despite many difficulties associated with this definition. For a discussion of these difficulties, see Section 3.4 of Hajek (2012).

Proposition 3: Single Case Ontic Probability is not observable and is not measurable.

Discussion: By unobservable, I mean that a single case probability asserts the possibility of occurrence of multiple events, but we can actually observe only one of them. It is important to note that this may be merely a limitation of our human sensory capabilities. The TV series Doctor Who depicts an ancient extra-terrestrial race of Time Lords, who can sense forks in the timelines running to the future. For example, the timelines may branch into a world in which heavy rainfall occurs, preventing or frustrating an invading force, and another in which heavy winds divert the rainfall, leading to a successful attack. Human beings can observe only one of the two possible outcomes, rendering probability unobservable to us. Lack of measurability of ontic probability refers to the impossibility of carrying out infinite trials, which is the only possible way of measuring this probability.

Next Post: Fallacies of Frequentism and Subjectivism (5/10), which is part 2 of Section 3 of my paper on Subjective Probability Does Not Exist.