Impact of Positivism on Economics & Statistics

The Wikipedia article entitled “Getting to Philosophy” documents a deep fact about the structure of human knowledge. If we click on the first link in any article, and keep doing so – clicking the first link in each subsequent article – we eventually arrive at an article in the Philosophy category. It is commonly thought that philosophy is an idle pursuit of fancy ideas with no practical relevance. In fact, our entire knowledge base is founded on philosophy, without our awareness. In this final section of this introductory chapter, I will provide some more details of how the philosophical foundations of logical positivism have deeply shaped the disciplines of modern Economics and Statistics.


  1. Due to a century of bloody warfare among Christian factions, European intellectuals rejected religion.
  2. Since religion had proven unreliable as a foundation for knowledge, they sought to rebuild knowledge on CERTAIN foundations.
  3. They thought that observations and logic were the only sources of CERTAIN knowledge – guesses at underlying reality could be mistaken.

This eventually led to the philosophy of Logical Positivism. For a sentence to be eligible as potential knowledge, it must be possible to VERIFY whether it is true or false. Sentences which can never be verified are considered “meaningless”. Empirical verification means confirmation using our human senses. In the strict sense used by logical positivists, the statement “God Exists” is not FALSE, it is meaningless – it cannot be subjected to an empirical test which will tell us whether it is true or false. This was one of the explicit GOALS of the Logical Positivists: to prove that science is right, and religion is wrong.

This theory of knowledge has very strong and disturbing implications about what is NOT knowledge:

  1. I think my wife is upset, because I forgot our anniversary.
  2. I think the driver of the car coming towards the intersection is planning to stop; therefore, I can proceed through without slowing down.
  3. I think that cold weather causes colds, and smoking causes lung cancer.
  4. I flipped the coin and it came up heads. BUT, it had an equal chance of coming up tails.

None of these statements can be VERIFIED empirically to determine TRUE/FALSE conclusively. All are statements about the unobservable. The above statements are all MEANINGLESS according to logical positivist philosophy. What this really shows is that positivism is an ABSURD philosophy. Unfortunately, this philosophy is the foundation of a Western education. 

In plain language: nearly ALL of our knowledge is FALLIBLE: it could be wrong. LP says that for a sentence S to qualify as knowledge, it must not be FALLIBLE. This means that we have no knowledge, except of that which we can touch and see (EVEN that is not clear). This is an ABSURD epistemological stance which reflects Epistemic Arrogance: the belief that it is possible to have certain knowledge.

In contrast, Islam teaches Epistemic Humility: We (mankind) have been given very little knowledge. We humbly accept – as knowledge – our guesses at truth, because we have no option. These guesses are graded according to reliability.

Impact of Positivism on Economics

Logical Positivism had a devastating impact on the development of Economic Theory in the 20th Century. Human motivations, hidden in our hearts, are unobservable to others, sometimes even to ourselves. LP authorizes us to ignore genuine motivations. Instead, it allowed economists to use any model for motivations – all such models are equally unverifiable, because they cannot be cross-checked against the unobservable reality. The only criterion for goodness of a model was its fit to observed behaviors. This led economists to the following ASSUMPTION: rational humans are motivated only by the pleasure derived from consumption of goods and services. This assumption of utility-maximization behavior represents a massive misunderstanding of the motivations for human behavior. It has been repeatedly refuted by behavioral economists, but economics textbooks ignore this, and continue to teach this false model. See Karacuka and Zaman.

Human motivations are observable to us via introspection, but the sources of human welfare are hidden deeper. Logical Positivism reduced the unobservable to an observable by assuming that we always make choices that maximize our welfare. This led to the goal, pursued by planners all over the world, of maximizing national wealth. But those who explored the link between wealth and happiness came up with a surprising finding: growth in the wealth of nations has no correlation with measures of long-term happiness (the Easterlin Paradox). Deeper examination revealed that long-term happiness depends on certain character traits, as well as strong social networks. Economic policies throughout the world are focused on the wrong goals, leading to an immense amount of misery for millions, simply because economists have no conception of the genuine sources of human welfare. This self-inflicted blindness arises from the deadly philosophy of logical positivism.

Impact of Positivism on Statistics

The positivist stance that we can only have knowledge of SURFACE appearances has had a deep impact on the development of statistics. It is assumed that all observables can be measured and quantified. Then, analyzing the data is enough; we do not need to probe the underlying reality to discover the hidden causes of the numbers we observe. To understand this “nominalist” position, it is useful to contrast it with:

Realism: The data is a CLUE to underlying reality which is complex, unmeasurable, unquantifiable. Use data clues to reconstruct hidden causal mechanisms and entities. The purpose of our analysis is UNDERSTANDING how hidden reality creates observed phenomena. 

We can illustrate the contrasting philosophies with a concrete example concerning Exports and Economic Growth. The DATA show that, in SOME countries (but not in others), both increased rapidly. One explanation, widely favored by economists, is Export-Led Growth: the increase in exports is the cause of the increases in rates of economic growth. A minority school has proposed that the causation runs in the opposite direction: economic growth leads to increased production of domestic goods, making more available for export. How can we tell the difference between the two hypotheses? Pure data analysis cannot resolve this problem, because data analysis can only give us correlations, which are always symmetric. Causation is a deep unobservable property, not of the data, but of the real-world mechanisms which generate the data.
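The symmetry point can be made concrete with a small simulation (a Python sketch with invented numbers, not real trade data; the variable names and coefficients are mine): two opposite causal mechanisms – X driving Y, and Y driving X – produce correlations that look the same, and the correlation itself is exactly symmetric in its arguments.

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 10_000

# Mechanism A: "exports" drive "growth" (X causes Y)
xa = [random.gauss(0, 1) for _ in range(n)]
ya = [0.8 * x + random.gauss(0, 0.6) for x in xa]

# Mechanism B: "growth" drives "exports" (Y causes X)
yb = [random.gauss(0, 1) for _ in range(n)]
xb = [0.8 * y + random.gauss(0, 0.6) for y in yb]

# The two data sets carry no trace of the direction of causation:
print(round(corr(xa, ya), 2), round(corr(xb, yb), 2))  # nearly identical
print(corr(xa, ya) == corr(ya, xa))                    # prints True: correlation is symmetric
```

Whichever way the causal arrow points, the observed correlation is the same; distinguishing the hypotheses requires investigating, or intervening in, the mechanism itself.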

We can learn about causation by intervening in the real-world processes which are generating the data on exports and growth. For example, we could run a specialized export drive to artificially increase exports above their normal level, and look at the effects on growth. Conversely, we could make extra efforts to produce growth and see the impact on exports. In the economic domain, such experiments are difficult if not impossible to conduct, which is why it is difficult to separate cause and effect. In the physical sciences, experiments are used routinely to learn about causal mechanisms. The point here is that causation lies in the real world, and not in the data. The data can only show us correlations, which point to hidden causal effects. Actual interventions in the real world via experiments produce informative data confirming or rejecting our hypotheses about hidden mechanisms. The observational data thrown up by the world does not actually provide us with the critical information we need to choose between alternative hypotheses about the real world.

Some Fallacies Created by Positivism

We will give three examples of widely held fallacious beliefs created by positivism.

  1. The M1, M2, M3, and M4 Forecast Competitions.
  2. Big Data and Machine Learning
  3. The Imitation Game.

The Forecast Competitions

The International Journal of Forecasting has run four major competitions among forecasting algorithms. They choose a collection of time series and ask forecasters to provide them with computer algorithms. Then they compare the performance of the different algorithms over the thousands of chosen series to assess which is the best forecaster. This competition does not actually compare the algorithms correctly, because it pays no attention to the real-world processes which generate the data, as we will now show.

Given a time series of numbers X(1), X(2), X(3), …, X(T), forecast the next few values X(T+1), …, X(T+K). The forecasting task makes sense only from a positivist perspective. We can try to look at patterns in the data and use them to extrapolate. But without knowing whether the pattern reflects a genuine real-world structure, there is no way of saying whether it will persist over time. This task does not make any sense from a realist perspective. Our goal is to use data to learn about the real-world structures which generated this data. Knowledge of this structure can potentially help us in forecasting, but that is only a side benefit. The goal of a realist analysis is to get some understanding of what is hidden behind surface appearances.
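The point about patterns not persisting can be illustrated with a toy example (a hypothetical Python sketch; the `hidden_mechanism` function and all its numbers are invented for illustration): a capacity-limited process produces a perfectly linear series over the observed window, so surface extrapolation looks safe – and fails once the unobserved saturation is reached.

```python
# Hidden real-world mechanism: growth that saturates at a capacity
# which is invisible within the observed window.
def hidden_mechanism(t, capacity=120):
    return min(2 * t, capacity)

# Observed series X(1)..X(50): perfectly linear, since 2*50 < 120.
observed = [hidden_mechanism(t) for t in range(1, 51)]

# Surface-pattern forecast: extrapolate the observed linear trend.
slope = observed[-1] - observed[-2]        # = 2, constant throughout
forecast_t70 = observed[-1] + slope * 20   # predicts 140 at t = 70

actual_t70 = hidden_mechanism(70)          # mechanism saturated at 120
print(forecast_t70, actual_t70)            # prints 140 120
```

Nothing in the observed data distinguishes this series from one that really is linear forever; only knowledge of the generating mechanism does.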

To understand this better, consider the correlation between the New Hampshire Primary and the Presidential Elections in the USA. There is an unusually high correlation between the outcomes of the New Hampshire Primary and the outcomes of the Presidential Elections. If this is a causal effect, then spending a lot of extra money on the campaign in New Hampshire would be justified. If this is just a chance correlation, then this would be a waste of money. How can we tell? Not by data analysis, but by thinking about the real world. The New Hampshire Primary is the first one to be held. Possibly, later votes are affected because people learn of the results, and adjust their voting patterns to match – they vote for winners. Alternatively, the correlation may result from a close match between the population of New Hampshire and the USA population at large. For some reason, the people of New Hampshire are like a random sample of the whole nation, and therefore their voting patterns are closely correlated with USA patterns. Once we have real-world mechanisms which reflect both the causal and the chance-correlation hypotheses, we can find ways to test these hypotheses. If people adjust their votes according to New Hampshire, carefully designed surveys could reveal this information. If the representativeness-of-New-Hampshire hypothesis is correct, this is also testable. Neither hypothesis is available on the surface, purely by statistical analysis. Both require understanding the underlying real-world mechanisms which produce the observed correlation.

Successful forecasts correctly capture the underlying real-world mechanism which generates the data. Suppose for simplicity that there are five different broad categories of real-world mechanisms which generate different kinds of data series. Suppose also that there are five different broad categories of forecasting strategies, each of which is uniquely adapted to one of the five, and relatively poor at the other four. The forecasting competition selects 1000 real-world data series, and invites forecasters to submit their algorithms (which are all based on one of the five categories of forecasting strategies). The forecasting algorithm which produces the lowest prediction errors wins the competition. The goal of the competition is to judge which of the algorithms is best. But what the forecast competition actually does is quite different. The success or failure of the strategies depends on the types of data series picked. If the managers of the competition pick a greater proportion of series of type 1 to include in their 1000, this will create better performance for algorithms based on forecasting strategy 1. The outcome of the competition depends on how well the forecasting strategy is adapted to the real-world mechanism responsible for generating the largest proportion of the data series within the competition. Since real-world mechanisms are never taken into account, the outcome of the competition is determined by blind chance.
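The argument above can be sketched as a simulation (illustrative Python with invented error magnitudes; `run_competition` and its numbers are my own construction, not an actual M-competition protocol): the same five strategies, scored on two different mixes of series types, crown two different winners.

```python
import random

random.seed(1)

N_TYPES = 5  # five broad categories of generating mechanisms / strategies

def run_competition(mix):
    """mix[i] = number of series generated by mechanism i.
    Strategy i has low error on type-i series, high error elsewhere.
    Returns the index of the winning (lowest total error) strategy."""
    errors = [0.0] * N_TYPES
    for series_type, count in enumerate(mix):
        for _ in range(count):
            for strategy in range(N_TYPES):
                base = 0.1 if strategy == series_type else 1.0
                errors[strategy] += base * random.uniform(0.8, 1.2)
    return min(range(N_TYPES), key=lambda s: errors[s])

# Same algorithms, different mixes of series -> different "best" algorithm.
print(run_competition([600, 100, 100, 100, 100]))  # prints 0
print(run_competition([100, 100, 100, 100, 600]))  # prints 4
```

The winner is fixed entirely by which generating mechanism dominates the organizers' sample, not by any intrinsic superiority of the winning algorithm.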

Big Data and Machine Learning

If observations are the sole source of knowledge, and the only target of knowledge, then sufficient data will give us all the knowledge that it is possible to have. It is impossible to analyze huge data sets by hand, so machine learning can pick up patterns which would otherwise be impossible to detect. By combining these two – big data and machine learning – we can do EVERYTHING. We can just sit back and relax, feed the data into the computers, and let them make all the big decisions. This widespread illusion is the result of the false positivist epistemology. The REAL insight is that observations are NEVER sufficient to reveal real-world structures. One has to ADD knowledge not available on the surface to arrive at an understanding of the real world.

As a simple example, consider the question “Does smoking cause lung cancer?”. This was the subject of a huge controversy which lasted for decades. The data showed a massive rise in smoking and in lung cancer. Over the same period of time, there was also a massive rise in paved roads, and in industrialization. Some leading researchers seriously thought that the gases and fumes created by cars and industry were the source of this increase. How can we tell the difference? According to positivism, we should gather more and more data: use BIG DATA and machine learning to extract information. But realist philosophy says that we must investigate the real-world mechanisms. Such investigations showed that smoking deposits some 60 known carcinogens in the lungs. The properties of these carcinogens were studied to show how they can cause cancer. The data provide clues to the real world, which must be followed up by investigating the real world. There is no information about carcinogens and their properties in the data. That information must be obtained by autopsies of lungs, not by machine learning.

Recently, Microsoft abandoned a major artificial intelligence project to detect emotions from faces. The problem was the standard positivist one: the surface appearances are not sufficient to determine the underlying emotional states. Positivism holds that surface appearances are all we have, and that they are a sufficient basis for knowledge. But we use a lot of contextual clues, together with our internal and specialized knowledge of human behavior under different kinds of stress, to arrive at inferences about emotions. Machines without this kind of deep knowledge of human behavior would not be able to use surface appearances to arrive at correct inferences about emotional states.


Question: Can machines (artificial intelligence) be smart like humans? The answer is almost obvious. Every human being has an enormous amount of life-experience which represents a knowledge bank that it is simply not possible for any machine to have. Sure, machines have massive computational capabilities which we human beings do not, but they do not have any clue about the internal life experiences that are the foundation of our knowledge. But if we think like a positivist, then all of our life experiences disappear – these are not scientific, not replicable, and hence not knowledge. Positivists say that it is impossible to verify a match to internal and unobservable knowledge. Instead, let us look at the observable implications. Can a machine ACT like a human (be observationally equivalent)? This led the computer scientist Alan Turing to suggest the following game as a test:

Imitation Game: You type in questions and receive answers from two sources, A and B. Can you distinguish the human being from the machine?

This game has actually been played for decades, and judges have successfully been able to discriminate between machines and humans, even though computers have become better at deception. But from a realist perspective this does not make any sense. If we can make a wax dummy which can deceive observers by appearing to look exactly like a human being, this does not in any way mean that the dummy is human. A match on observations is not the same as a match on the underlying reality, even though positivists think otherwise.

Concluding Remarks

Real Statistics: Numbers provide clues to hidden real-world entities and causal mechanisms. These must be combined with our knowledge about the real world to yield useful hypotheses, which are never conclusively verifiable.

Positivist Statistics: Numbers are the source and goal of knowledge. We cannot look beyond the numbers to the underlying unobservable reality.

In particular, probability and causality are two central unobservables which are forever out of reach of positivist statistics. One can ASSUME causal patterns and work out consequences, but one cannot deduce causality from numbers alone.

The above is an excerpted section from the first chapter of my draft textbook, entitled Real Statistics: A Radical Approach. The full first chapter is available from “Life Journey”, and describes my life-experiences which led to the construction of this radical approach to statistics.

2 thoughts on “Impact of Positivism on Economics & Statistics”

  1. When I was in my last class of my BA in 1969, Abnormal Psychology, the professor warned us upon graduation “NOT to reify theories.” His name was Don Morgenson and he was renowned as one of the best professors in Canada.

    I asked him what it meant and he went to the blackboard —dating myself here — and wrote the word DEIFY and REIFY on two of the panels. He then went on to expound on the similarities and differences of the two words with an obvious common root.

    It boils down to “deify” being the concretization of an abstract concept called god. To make an object an icon of a god or an actual god to be worshipped. Christ and Mohammed come to mind. As do the 5000 or so other gods of human history. The statues of the alleged Jesus’ mother in churches are a more tangible example, I suppose.

    Reify was making a theory real. Acting as if the theory has been proven. Most humans do not tolerate ambiguity well so reify a theory thereby reducing ambiguity. The reified theory is absorbed into the brain as a belief and becomes an archetype that then shapes how reality is perceived by the individual. George Lakoff, the linguist, called them “frames,” I suppose. Those frames have metaphors and scenarios attached to them and are shaped into their own narratives about reality by individuals and groups of individuals.

    As Gregory W. Lester pointed out in his seminal article “Why Bad Beliefs Don’t Die” published in the 2000 Skeptical Inquirer, beliefs became attached to the brain’s survival schema thus becoming very difficult to change presumably because of the threat to survival. He wrote about how difficult it is to change a belief with logic and data that contradicts the beliefs. He also noted if I recall correctly that individuals can hold simultaneously contradictory beliefs. That may be the real problem of Logical Positivism.

    Thus, reifying theories and the ensuing psychological processes are the real obstacles to advancing knowledge and getting closer to an understanding of reality and our environment including the greater universe. It is complex and complicated with numerous negative and positive feedback loops often leading to unintended consequences.

    With respect, I see little evidence — more Logical Positivism — of Epistemic Humility in either the Muslim or the non-Muslim world. Like many things it may be taught but not to the masses. And if it were, it would have to deal with human nature which reifies theories without humility. Such humility might allow contradictory beliefs to be held and then examined but certainly in economics that is seldom the case.

    Anyways your writing stimulated some thoughts of my own and I decided to inflict them on you and anyone else who reads them!

  2. On the whole I very much like this article that supports realism. Especially the stress on causal understanding and “world structures which generate the data”. Obviously, you do not have to be a Muslim to believe in realism – most scientists probably believe in it (I can speak only of biology from personal experience), or at least they act as if they do.

    But also, not all economists espouse positivism. In fact, a large number of economists now reject (much of) “textbook economics”. And, textbooks do not always follow what the data say. A glaring example is the question of the notorious U-shaped curve of a firm’s costs. Typically, textbooks provide made-up data, without even saying the data are made up, so that they can set out a proposed mechanism that is “based” on these numbers. In fact, there has been evidence since the 1950s that the U-shaped cost curve is extremely rare (estimates range from 5 to 11% in manufacturing).

    A basic methodological point about causation: an observed statistical association, if not a chance finding, must have a causal explanation. But it need not be X causes Y versus Y causes X, as stated. A third possibility is that both X and Y have a common cause. There is an example in this article: that the population of New Hampshire happens to be close to representative of the US population in terms of voting preference. In economics, an example would be that it is not that growth causes exports or vice versa, but rather that both are consequences of a country having a lot of firms that are competitive (in the sense of being successful at competing, not in the bad usage of being close to “perfect competition”!).

    Secondly, it is not true that one cannot derive causal knowledge from data. Cigarette smoking and lung cancer is a good example. The pioneers of the research that first convincingly demonstrated this association, Richard Doll and Austin Bradford Hill, themselves believed that traffic growth was behind the observed rise in lung cancer – until they saw the data from their case control study. Later, Bradford Hill gave a list of “viewpoints” (he disliked the term “criteria”) that could be used to infer causation from correlation. A prime one was causal order – though that is problematic in economics because of expectations; there were 8 others. See his 1965 paper “Environment and disease: association or causation?” in the Journal of the Royal Society of Medicine, it is still worth reading. There is now a sub-discipline of statistics that is concerned with exactly that topic, i.e. causal inference.

    The description of what happened in this cigarettes and lung cancer example is accurate in the article. But let’s look more closely at what happened: Doll and Hill found the then-new statistical association, and their causal inference procedures led them to conclude this probably reflected a causal relationship. The possibility of this was doubted at the time, but their findings stimulated lab research on the possible pathophysiology that could underlie the observations. This mechanistically orientated research confirmed that there were plausible causal pathways, and these were subsequently confirmed. What was going on here? In modern philosophical terminology, it started with “difference making” that consistently showed a large correlation. We then had “mechanistic” evidence that was able to explain how this association was actually produced. This is corroboration, but of a particularly powerful sort: a combination of mechanistic and difference-making evidence. The history of science shows that this is the way that secure causal knowledge is generated, and that either can come first – sometimes the first thing is a suggested mechanism, sometimes it is an observed association (not necessarily statistical). These then iterate, each stimulating the other. I describe this for the germ theory of disease in my paper “Causal theories, models and evidence in economics—some reflections from the natural sciences”. I have also written on Bradford Hill’s “viewpoints” from a philosophical perspective in “Causality and evidence discovery in epidemiology” in a book called “Explanation, prediction, and confirmation”.

    One other thing: about “prediction”. It can have two senses. One is (roughly speaking) “if X then Y” – e.g. if you smoke cigarettes then you are more likely (around 10x) to get lung cancer. That is the sense routinely used in science. The other is similar to “prophecy”: what will happen in the future? Scientists do sometimes try and do this, e.g. using modelling, as in the predictions of the consequences of climate change in the future. But this is typically done using scientific causal knowledge of the “if X then Y” type – if it’s not, it’s probably not worth paying any attention to. In contrast, economists typically seem to think that their job is to make predictions/ prophecies about the future of e.g. prices – and they typically also see this as “science”, because they don’t understand that there are two distinct meanings, and fundamentally misunderstand what science is. This focus on prophecy could be because you can get very rich indeed if you can predict future prices! – one of the lecturers in my economics MSc used to say that if he were a better economist he would be rich, so he wouldn’t have to be an academic!
