I became an economist by mistake. The malicious will say that you can deduce it from the quality of my writings. I like to believe in the bizarre paths of Destiny on which the flights of human liberty stumble along.

Here I would like to link my personal experience – of little interest to the reader – to the far more interesting subject of the ongoing debate in economic science. Indeed, as is well known, particularly since the crisis began in 2007, a certain disillusionment has been growing about economists’ ability to foresee the course of events. While asking economists to foresee something perhaps pushes them into the sphere of magic, to which they do not belong, there is strong discontent with their ability even to explain events in progress. The beautiful and highly formal mathematical models developed over the course of decades do not serve to predict the future – and it astonishes me that anyone might believe they could – nor do they offer any ex-post usefulness in interpretation. In short, they are not very useful.

Here I want to focus on the moment when I understood that something was wrong with the economics I was learning as a student. I enrolled in the economics faculty of the University of Verona in 1999, after two unsuccessful years spent working on a degree in computer science (I compensated for that lack in 2012 by marrying an Indonesian girl with a computer science degree). That choice was something of a fallback, a sort of last resort that reconnected me to my high school studies in accounting. In the spring of 2000, having to choose which exam to take, I focused on “history of economic thought”, which seemed to me to be useful for other exams. That year the department chairperson was on sabbatical, and the course was taught by Professor Sergio Noto, who still works at the University. Long story short, that course – taught as it was by Prof. Noto – was the beginning of a passion; I was struck in particular by Joseph Schumpeter, the economist to whom I dedicated my best years, and who has still not abandoned me.

Noto and Schumpeter (Austrian by birth but not of that school of thought) were my keys to entering the so-called Austrian school of economics; I will return to that later.

In addition to devouring Schumpeter, I began to stock up on books by and about the Austrian school. The experience that truly and radically changed the course of my studies was reading The Economics of Time and Ignorance, by Gerald O’Driscoll and Mario J. Rizzo. Only after many years did I discover that it was one of the foundational books for the youngest generation of Austrian economists, particularly for those who considered themselves students of Ludwig Lachmann, but at the time I was instinctively struck by it even without being able to place it within its context.

One example in particular captured me, which I have since repeated for years to my students or in my seminars. As anyone who has studied economics knows, the textbook definition of “perfect competition” is an economic system in which the number of buyers and sellers is so high that no one is able to engage in price discrimination; everyone produces the same thing with the same characteristics, and the technology is given. The authors commented on that more or less like this: “Excuse me, but a system in which no one can discriminate on prices, the products are all just alike, and there are no technological differences – isn’t that socialism? Doesn’t the word ‘competition’ suggest something more dynamic, as in sports in which someone wins by virtue of a difference, whether it be on price, quality, technology, marketing, luck, etc.?”

For me that example was an epiphany. My microeconomics textbook – like basically every other on the market – gave a definition of perfect competition that better described the exact opposite of competition (socialism). From then on I began to study more critically, and I tried to build up an alternative understanding based on the teachings of the Austrian school of economics. Moreover, that critical approach later allowed me to construct my own personal vision within the Austrian school, and today I find myself an unorthodox person within an unorthodox school.

The important lesson I drew from that epiphany was not only to approach my studies more critically; above all, it left me convinced that economics is useful only if it helps us understand reality. Of course some level of abstraction is necessary, but not to the detriment of explanatory power.

In short, I am convinced that the economics in vogue, which today is primarily econometrics, reasons more or less like this: let’s take reality, empty it of the human element (that is, creativity, unpredictability, and non-determinism) and the flow of time (which is what brings novelty), and let’s build very elegant formal models where everything comes out right, because what we want to explain is already included in the hypotheses of a static model.

But what can we do with an economics without time and without people – that is, without ignorance? Very little, or nothing at all.

To be continued…

This is a continuation of a previous post, The Knowledge of Childless Philosophers. I would like to clarify some aspects of the theory of knowledge which have become muddled and confused because childless philosophers did not observe how children learn about the world and acquire knowledge starting from scratch. If they had taken this as the basic model for how we acquire knowledge, they would have been able to avoid a huge number of mistakes.

A realist methodology for science starts from the realization that scientific knowledge goes FAR BEYOND the realm of the observable. Electrons, neutrons, and positrons carry different charges – negative, none, and positive – and act in different – incredible and amazing – ways, but the link between them and observable phenomena is extremely weak and indirect. One of the readers’ comments on a previous post was: how can we learn about what we cannot observe? Paraphrase this as: how can we learn which of two slits a photon goes through, when we cannot see these events? This was exactly the question that Kant faced: how do we get knowledge which goes beyond what we can observe? It is obvious that rationalism will not provide an answer – if we start with self-evident axioms and use logic, we cannot ADD empirical information not already contained in the premises. It is also obvious that Hume’s empiricism will not work. Electrons are not there for us to see, and there is no amount of observations and experiments that we can show ordinary people which will allow them to deduce that electrons exist. Kant took ONE STEP beyond the realm of the observable to argue that our MIND supplies the invisible structures which organize the observations into a coherent and meaningful framework. This is what “transcendental” refers to – knowledge which transcends logic and observation.

It is worth pausing to appreciate the value of Kant’s contribution. When Newton looked at the falling apple, HOW did he come to think of “gravity”? This knowledge is not there in the observation, because the same observation is routinely made by millions without any thought of gravity. In general, we can construct hypotheses about enormous spherical shells with stars pasted on them, rotating about the earth. In doing this, we create models from our imagination, which do not correspond to observations, but serve as deeper explanations for what we observe. Kant realized that human knowledge of the structures of reality was NOT based purely on logic and observations, as the rationalist and empiricist philosophers had argued. Of course, if he had observed children acquiring knowledge, he would have learned this without so much difficulty. Also, seeing that children are experts at acquiring knowledge of unobservables, he might have been inspired to write in simpler language.

My 3-year old daughter was noisily “helping” her aunt put her baby to sleep. To prevent her from waking up the baby, her aunt said “I think I hear your mother calling you”. My daughter raced out of the room to look for her mother. She found that her mother was engaged in conversation, and did not pay any attention to her coming into the room. She immediately deduced that she had been sent away from the baby and protested loudly that “My aunt sent me away from the baby” — looking through the appearances to arrive at the real cause why she had been told her mother was calling for her. Where did she acquire this knowledge, which was not part of what could be observed directly by her?

There are many examples of how children make inferences which are strongly in conflict with our imagined logics of scientific discovery. Children jump to generalizations from observing one fact, instead of patiently waiting to collect an entire set of observations and then deducing a law. In fact, as many psychologists have noted, children are born scientists. Three-year-olds learn difficult linguistic rules, and uncover hidden mechanisms in operation, with ease. This is because, as Kant realized, we are born with mental structures which enable us to learn about the world we live in. There is strong evidence that emotions reflected by facial expressions are universal, so that knowledge of a range of human emotions is built into us. Babies differentiate between frowns and smiles and respond appropriately. Similarly, Noam Chomsky argued that the facility with which we learn languages shows that we are born with innate knowledge of grammatical structures. We are born with the capacity to imagine what the hidden structures of reality may be, and to make good guesses about “Why” things happen.

Where Kant went astray was in forbidding the cross-checking of these imagined mental structures with reality itself. Our mental models can be right – if they match reality – and wrong – if they don’t. Kant thought that it was impossible to check this – the true structure of reality could never be known because it was unobservable. While it is true that the mental structures we hypothesize to explain the observations can never be observed in external reality, this does not mean that there is no way to verify the existence of unobservable objects and effects. When we postulate the existence of electrons to explain some observable phenomena, we cannot go and look to see if there really are electrons. But we CAN use the hypothesis of the existence of electrons to predict other phenomena that we would see if they existed. And when we see such phenomena, we get further confirmation of their existence. The empiricist REJECTION of this type of reasoning comes from the quest for certainty. Regardless of how many indirect tests we carry out, we cannot achieve the same level of certainty that we could from seeing and touching electrons, and so if we confine our definition of knowledge to “Justified, True Belief”, we can never achieve knowledge about the existence of electrons. If we forbid speculative talk about objects which might exist, but about which we could never be certain (which was the intent of Hume and the empiricists), then effectively we bar talking about all subatomic particles and their properties.

One of the key contributions of “Pragmatic” Philosophy is to give up on the quest for certainty, and settle for uncertainty in knowledge. Our entire life experience is based on navigating uncertainties, and making guesses about unobservables. Someone who sees a child growing and learning would never make the mistake of thinking that the child is acquiring mental structures of knowledge which do not correspond to structures of external reality. It would be crystal clear that the learning process involves use of hands, legs, and eyes, in addition to the brain. Furthermore, while it is clear that the child acquires new concepts – mental structures – it is also clear that increased success in navigating the world shows that these structures accurately reflect external reality.  

Instead of listening to childless philosophers, we should base our epistemology on our life-experiences in dealing with unobservables. The internal feelings in the hearts of others about us are forever unobservable to us, yet the fabric of our social lives is woven from learning about and attempting to influence these feelings. Many posited unobservables have observable side-effects and implications. Many scientists were convinced of the existence of atoms and molecules by Brownian motion, which could be observed, and could be explained by their presence. So to go beyond Kant, we must take our mental models seriously, as hypotheses about unobservable reality. These hypotheses often have observable implications. In fact, scientific experiments play a vital role in creating situations where the hypothesized objects and effects can be studied in isolation. Thus experiments play a crucial role in testing theories about unobservables. Such tests and experiments can never either decisively refute, or decisively confirm, any hypothesis about unobservables. But as long as we learn to live with uncertainty, we can find near-refutations, or strong confirmations, of our scientific theories about unobservables. The Real Model – the correct unobservable true structure of external reality – is forever out of our reach, and known only to God. The progress of science, and our own life-experiences, show us that we can learn bits and pieces of it, and build on this learning, probing deeper and deeper into hidden layers of external reality.

Any study of scientific discovery will provide clear examples of how unobservable entities are discovered, and how we learn to manipulate them, even though we are unable to observe them. The discovery that certain liquids combine only in fixed proportions led to the hypothesis that these liquids were composed of molecules, and that these molecules could combine only in certain fixed proportions. A diverse array of experiments and phenomena could be explained by these same molecules with the same properties, giving weight to the hypothesis of their existence. An important point is that the molecular theory itself proved very fruitful as the basis of further, more complex and intricate hypotheses about the structure of matter. Those who were trained in this social product of scientific knowledge were able to make observations and experiments of far greater sophistication than those who were not equipped with it.

Kant thought that our knowledge does transcend the bounds of observations and logic, but held that this transcendental knowledge is “ideal” – that is, it is projected by the mind onto reality, and does not correspond to structures of external reality. We cannot know the true nature of unobservable external reality because we can never observe it. Advances in science have substantially weakened Kant’s position. After Einstein’s discovery of the counter-intuitive curvature of space-time, it was no longer possible to maintain that time and space were projections of our mind onto external reality. Kant was also a prisoner of his time, as the major scientific discoveries of unobservable objects and effects had not yet taken place. It would be much more difficult today to maintain that the properties of atomic particles, electromagnetic phenomena, and astrophysics are all projections of our minds, without correspondence to external reality.

As opposed to Kant, Roy Bhaskar offers us “Transcendental Realism”. That is, through our sciences we can have knowledge of objects and effects which we cannot observe. To understand scientific progress, we must make a clear distinction between “Ontology” and “Epistemology”, a distinction which is muddled and confused in nominalist (non-realist) philosophies of science. In fact, nominalist philosophies often commit the “epistemic fallacy” of supposing that if we cannot know something (an epistemological constraint), then it does not exist (an ontological conclusion). A weaker form of the epistemic fallacy, with the same consequences, is to say that if we cannot observe it, then it may or may not exist, but its existence does not matter for scientific theory. Bhaskar’s Critical Realist philosophy of science is based on clearly distinguishing between ontology and epistemology.

Ontology refers to objects and effects in external reality. The existence of these objects and effects has nothing to do with whether or not we can learn about them. Bhaskar calls them the “Intransitive” portion of scientific knowledge.

Epistemology refers to HOW we can learn about objects and effects which are unobservable. This has a lot to do with our human capabilities – our hands and eyes and ears, and our abilities to manipulate our environment, conduct experiments, construct instruments, isolate causes, etc. This portion of our knowledge is social, and must be learnt and passed on – which is why Bhaskar calls it the “Transitive” portion of scientific knowledge. The set of experiments, inferences, and reasoning which leads us to know about the existence of electrons and their properties is very complex, and has been acquired over centuries. If this socialized knowledge were not transmitted to the next generation, electrons would continue to exist, but we would no longer know how to learn about their existence. It is this knowledge which accumulates, though it is subject to errors and to major revisions from time to time, as Kuhn discovered. This is inherent in the nature of the uncertainty associated with this knowledge.

In the next post we will consider how we can go beyond the observable data in econometric analysis to extract deeper information not present in the surface appearances, which are just the correlations.


Continuing from the previous post on The WHY of Crazy Models, I attribute a large portion of the blame to massively wrong theories of knowledge. A little bit of study of epistemology is enough to give anyone a headache. Because of this, instead of investing the time and effort to decipher what the philosophers are saying, the rest of us are willing to take it on faith. No one is aware of the massive amount of damage done by philosophers – most philosophers themselves are unaware of the tremendous influence that their past failures have had on the real world. Similarly, the non-philosophers are unaware of how deeply their thoughts have been affected by false and obsolete philosophies, now rejected by the philosophers themselves. Keynes summed up the state of affairs nicely: “Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.” While Keynes thought that practical men of affairs are slaves of economists, I believe that economists and social scientists are slaves of defunct philosophers, without realizing it.

In preparing this post, I decided to review the theories of knowledge, to provide a brief sketch of how epistemology went astray, allowing us to create crazy models and to consider them an advance in knowledge. I found a dismal historical record of one completely bizarre theory of knowledge opposed by another equally bizarre theory, with the conflicting theories synthesized into yet another monstrosity by yet another big-name philosopher. Reading through this stuff led me to wonder: did any of these philosophers have children? Anyone who watches a child grow and acquire knowledge would automatically avoid the monstrous mistakes made by these philosophers. A little research turned up the following amazing fact: Hobbes, Locke, Hume, Adam Smith, Descartes, Spinoza, Leibniz, Kant and Bentham all went unmarried. These are the big names who shaped Western philosophy in general, and the theory of knowledge in particular. We would all have been much wiser if only mothers, who have intimate knowledge of how children learn, had been permitted to write about how human beings acquire knowledge, and what counts as knowledge. It is too late to implement this rule, as the damage has been done, but perhaps burning the books of philosophers without children and expunging them from our collective memories might help make the world a better place. Sigh!

Coming back to the dreary task at hand, of reviewing blunder after bigger blunder in the theories of knowledge, I should acknowledge that it is impossible to cover centuries of speculative philosophy in a few paragraphs. For a serious book-length effort, see Manicas: The History and Philosophy of the Social Sciences. Some quotes from his introduction summarize the points that I wish to make:

  1. “the very idea of science is contestable”. This is significant because modern European theories of knowledge all originate in the rejection of Christianity in the West. When Europeans realized that the masses could be deluded into believing in false Gods, they decided to study intensively what should be counted as knowledge, and how it can be acquired, so as to avoid such mass deceptions in the future. The collective decision was made to take “science” as the model for valid production of knowledge. See my article on The Deification of Science, and its Disastrous Consequence for more details about this.
  2. “There was, for a very long time, a very stable notion of ‘science’, and that this very stable idea of science has been the point of departure taken for granted by all parties”. All Western theories of knowledge (from the post-Newton era to Kuhn) were based on the starting point that “Western science” is knowledge – how do we develop an epistemology which can prove this?
  3. “it is only very recently that radically different understandings of the nature of science have become serious alternatives.” T. S. Kuhn is a landmark here, as his historical studies dramatically altered the image of science. Also of great value in this connection is Chapter 1: The Heroic Image of Science in Appleby, Hunt, and Jacob, “Telling the Truth About History”.
  4. “The upshot is the possibility of a thoroughgoing revolution in the received ideas of science, natural and social. … disastrously, the social sciences (are) based on a misconception about what the physical sciences are.” The necessity of inventing a philosophy which makes “science” the only valid source of human knowledge created a hugely distorted theory of knowledge. The social sciences were created on the basis of this misconception, and as a result have fundamentally flawed foundations.

While it is impossible to provide a brief sketch of all the twists and turns in Western epistemology, we can identify three flawed schools of thought. Amazingly, modern economic methodology is based on accepting the central defects of all three opposing philosophies — this must be seen as a tremendous feat, made possible only by deep ignorance of the philosophical background. The three schools of thought about human knowledge, together with their flaws, are briefly described below:

  1. The Rationalist School – (Descartes, Spinoza, Leibniz) – These philosophers wanted to derive all knowledge from reason: start with incontrovertible hypotheses (axioms) and use logic to derive certain conclusions. Kant noted that one can only get analytic truths by this method. The conclusions are logically contained in the premises, and nothing new can be added. Synthetic truths, which depend on examination of external reality, cannot be deduced from an axiomatic approach.
  2. The Empiricist School – (Locke, Berkeley, Hume) – These philosophers thought that our observations (sensory impressions) are the sole source of reliable knowledge. Against them, Kant argued that there are many things we know about external reality (such as causality) which cannot be observed.
  3. Transcendental Idealism: Kant noted that science was impossible on an empiricist or rationalist basis for knowledge. One cannot discover (synthetic) truths about external reality starting from self-evident axioms and applying deductive logic. Also, many important structures of external reality, essential to science, are not part of the observational data, and indeed are inherently unobservable. As a solution, he proposed to equip the mind with powers to organize inchoate sense data into a coherent picture of reality.

All three of these philosophies of human knowledge are mistakes, and all three mistakes are incorporated in the methodology currently in use in economics. The Rationalist mistake is that a hypothetico-deductive system cannot generate knowledge which is not already contained in the axioms. Thus, such a methodology is incapable of learning from experience. As noted by Manicas, the social sciences are based on a misconception about what the physical sciences are. This (Rationalist) misconception about scientific theory is explicitly stated by Lionel Robbins, the founder of the modern positivist approach to economics:

“The propositions of economic theory, like all scientific theory, are obviously deductions from a series of postulates. And the chief of these postulates are all assumptions involving in some way simple and indisputable facts of experience…” (See past post on Methodology of Modern Economics).

The second mistake is the Empiricist misconception that observables are part of science, while unobservables cannot be. A simple illustration of this is the attempt to reduce preference to choice, under the misconception that the invisible preferences of our hearts cannot be part of a scientific theory. For a sketch of Samuelson’s mistake in equating choices with preferences, see my post on Foundations of Probability 7. Denial of the existence of uncertainty (as opposed to risk) has been one of the more disastrous consequences of denying the existence of the underlying preferences (and of failing to differentiate them from the choices which are guided by them, but distinct from them). For a detailed explanation of this, see my sequence of posts on the Foundations of Probability.

Denial of the unobservable is the “epistemic fallacy”, or the ostrich fallacy in less polite terms: if I cannot see it, it does not exist. Anyone who reflects on the nature of science will come to the realization that scientific theories depend deeply on unobservable objects and effects. Kant realized that scientific theories provide us with knowledge which is neither “empirical” (facts we can perceive by sensory experience) nor “rational” (logical deductions from self-evident axioms). His mistake was a sophisticated version of the epistemic fallacy. He argued that if we cannot observe it, we can ignore it (bad advice for the ostrich). He argued that even if reality has complex unobservable structures, we have no access to these structures. So the unobservable structures that we imagine to be a part of external reality (like causality, the persistence of objects in time and space, and many others) are actually projections of our mind onto the observable reality (it seems that ostriches have read Kant). These complex structures are “ideas” in our minds, and have no correspondence with external reality.

This third (Kantian) mistake is manifest in the vast majority of models created by economic theorists. Arrow and Debreu imagine a world of consumers and commodities where frictionless trade takes place across time and space. Economists feel free to imagine that we are all engaged in a game with rules and payoffs that they can make up, as long as the outcomes calculated by theoretical means correspond to the observable. The fact that these rules and payoffs exist only in the mind of the theorist, and have no correspondence to any structures of external reality, is of no importance.

Philosophers have made substantial progress in their understanding of the nature of science. Bhaskar’s Critical Realism is able to account for two aspects of scientific knowledge which previous philosophies could not. One is the social nature of science, and the second is the depth of discoveries about unobservable objects and effects in external reality. The propagation of knowledge across disciplines in the social sciences appears to take place with enormous lags. Economists are still using methodologies based on philosophies of science which were discarded and forgotten by philosophers a long time ago. Progress in economics requires abandonment of obsolete philosophies, but the task is made much more difficult by the fact that these philosophies are buried in the foundations of how the subject is formulated and presented to students. These are passed on from generation to generation, and remain unexamined, and unquestioned.


I was professionally trained as an economist, and learned how to build models with the best. As described in detail in a previous post on The Education of An Economist, it was only by accident that, a long time after graduate school, I learned of glaring conflicts between the theory I had been taught and the historical evidence about the effects of free trade and trade barriers. Further exploration along this direction dramatically widened the chasm between the economic theories I had learnt and the historical and empirical evidence all around me. This led me to a set of puzzles which I have been struggling with for the past two decades. [1] Why is it that economists are not aware of the conflict between economic theories and empirical evidence? [2] Why is it that economists do not care, when such conflicts are pointed out to them? In Trouble With Macro, Romer expresses these same two points as follows: “The trouble is not so much that macroeconomists say things that are inconsistent with the facts. The real trouble is that other economists do not care that the macroeconomists do not care about the facts. An indifferent tolerance of obvious error is even more corrosive to science than committed advocacy of error.”

Once we move from the easy-to-establish fact that economists use crazy models, to the much more difficult meta-question of WHY economists use crazy models, one apparently obvious answer suggests itself: it is because economists are crazy. A referee once accused me of thinking that economists are blinkered idiots. Actually, from close association with the tribe, I know that some of the best and brightest human beings are economists.  This intensifies the puzzle: how can some of the smartest people believe the stupidest theories?  To pick some of the most flagrantly stupid theories:

  1. Economists believe in rational expectations, even after the Global Financial Crisis took the entire profession by surprise. This requires not believing in uncertainty, despite overwhelming evidence to the contrary — see Foundations of Probability 1.
  2. Economists believe in utility maximization, despite a huge amount of empirical evidence against it (see Behavioral vs Neoclassical Economics).
  3. Economists continue to believe in, and use, DSGE models, considered crazy by Solow. Even Blanchard, a sympathizer and supporter, acknowledged that these models make assumptions profoundly at odds with what we know about consumers and firms (see Quotes Critical of Economics).

How can someone in possession of his senses declare that it is midnight, when a bright sun is shining overhead? One powerful hypothesis, as many have argued, is ideological bias. However, from personal experience, most economists I know are not ideologically wedded to capitalism. It is not ideological commitments or class interests which drive the lunacy embedded at the heart of economic theory.

Instead of specializing to economics and economists, I began to think about the broader question: why do people come to believe in theories, true or false? Pondering this question led me to a startling realization. I had been conditioned to believe in positivism: we acquire knowledge by observing empirical evidence and formulating theories in the light of observations. The fact is that the body of human knowledge is the work of millions of scholars over the course of centuries. No one person can hope to acquire more than a tiny fragment of this knowledge. Even the brightest mathematical genius, deprived of the heritage of human knowledge and left to his own devices, would be unable to progress much beyond third-grade mathematics with a lifetime of work. So we have no choice but to take at face value, and accept without question, the vast portion of what we learn. This insight was expressed clearly by Kuhn in his study of the Structure of Scientific Revolutions. Scientists are trained to dogmatically believe in a paradigm, without questioning the fundamental methodologies which they learn like apprentices. Similarly, economists learn how to do economics by examples, without any discussion of the methodological frameworks within which they are creating their theories. The implication is that the vast proportion of the knowledge we have is inherited and unexamined; it has to be this way, because our lifetimes are too short to enable us to examine and verify the enormous structure of existing human knowledge. If massive errors have been made in our intellectual heritage, we will generally accept them without question.

Taking it to a personal level, I was taught a theory of knowledge, and a methodology of acquiring knowledge, without any explicit discussion of either of these topics. The textbooks that I studied implicitly asserted that what they contained was knowledge, without any explicit statements to this effect. Similarly, the methods used to construct this body of knowledge — mostly theorem-proof, with some others thrown in eclectically, as needed — were what I learned about methodology. I was not offered any choice among theories of knowledge via a discussion of epistemology, and I was not offered a choice among alternative methodologies. I started to question these methods only after I came to the realization that this body of “knowledge” was deeply flawed, and that the “methodology” I used to arrive at “truth” often led to stark falsehoods. The problems with “Western knowledge” as it developed over the past few centuries are too numerous to list, but perhaps the root of all the problems is the (mis)conception of “objective” knowledge.

Objective knowledge is knowledge which has been detached from the knower – the subject who is in possession of this knowledge. Realizing that this cannot be done would lead to a revolution in the theory of knowledge (for an extended discussion, see The Illusion of Objective Knowledge). To take small steps, consider the difference between the “empirical” and the “actual”, as introduced by Roy Bhaskar in his philosophy of science known as Critical Realism. The empirical is the sense data that we perceive, and the actual is what is really out there in external reality. It is obvious that there are actually “trees” out there in external reality, but at the same time, our only access to these trees is via our five senses — we do not have any direct access to external reality, only to our perceptions. Because of differences in perspective and time, no two observers of the same tree will ever have exactly the same sensory experiences — the “empirical evidence” for the tree, and a description of its appearance, will vary radically from observer to observer, and also vary radically with time and position for a single observer. However, the “actual” tree is much more stable as an entity, although it too grows over time.

Just as there is a difference between what we observe and the actual object in external reality, so there is a chasm between the knowledge that I acquire – subjective knowledge, belonging to the subject (me) – and objective knowledge: a disembodied entity, the real, indisputable, objective truth, invariant across time and space, and independent of the observer. Three central illusions of Western epistemology are:

  1. There exists a body of OBJECTIVE knowledge – universal truths independent of observers – and we CAN aspire to get this knowledge.
  2. We can filter out our subjective imperfections to distil and separate the perfect objective truths (facts) from our imperfect subjective analyses (opinions).
  3. The “scientific method” is the (only) methodology which can be used to reach objective truths, and this is the only type of knowledge worthy of the name.

Clarity is achieved by considering the polar opposite positions. For a long and detailed explanation, see Hilary Putnam: Collapse of the FACT/VALUE Distinction and other essays.  At least as an idealization, we admit the existence of objective fact, and also of purely subjective opinions. But the vast majority of human knowledge consists of a mixture of the subjective and the objective in a way that the two are “inextricably entangled”. Another way to say this is to say that human knowledge consists mainly of our life experiences, which are quintessentially non-scientific. This is because our life experiences arise from our personal interactions with external reality – this interaction mixes our subjectivity (opinions, emotions, identity)  with the objective (facts, social realities, history, geography).

How does this help to solve the puzzle of how intelligent people can come to believe in nonsensical theories? Considerations discussed above provide one important piece of the puzzle — other pieces will be discussed later, separately. Modern Western epistemology teaches us that the only knowledge worthy of the name is objective and universal knowledge. But actually, nearly all of the knowledge we have is our personal subjective life experiences. “Methodology” then becomes the name of the process whereby our personal life experiences can be converted into universal truths, so that it counts as knowledge. Timothy Mitchell (2002, Rule of Experts) writes that: “The possibility of social science is based upon taking certain historical experiences of the West as the template for a universal knowledge. Economics offers a particularly clear illustration of this.” The process by which economists proceed is to take some “axioms” of human behavior as self-evident universal truths (even though they are actually false). Then they proceed to build models of the economy on the assumption that all human beings follow these rules of behavior. It is widely recognized that the process of modeling involves populating the economy with mechanical robots guided by mathematical rules, and computing the outcomes. The fact that this process leads to models disastrously in conflict with reality is ignored. This aspect, of how it became acceptable and fashionable to ignore reality, requires further explanation.

We started with a sequence of six posts on the nature of economic models, meant to clarify why economic models ignore reality. These six posts are: Mistaken Methodologies of Science 1, Models and Realities 2, Thinking about Thinking 3, Errors of Empiricism 4, Three Types of Models 5, and Unrealistic Mental Models 6. This sequence is to be continued with a further detailed explanation of the nature of economic and econometric models. To understand the difference between observational models as used in econometrics, and real structural models, we need to introduce and explain causality and how it affects analysis, even though econometricians ignore it. This is done in a sequence of 5 posts on Simpson’s Paradox. Next, we will review some critically relevant aspects of my paper on Methodological Mistakes and Econometric Consequences, before coming back to my main theme on the nature of economic models, as pursued in the first 6 posts.


This is the fifth and last of a sequence of 5 posts on Simpson’s Paradox. In previous posts, we have discussed the paradox in the context of college admissions and batting averages. In this post we discuss how Simpson’s Paradox works when evaluating the effectiveness of drugs in treating diseases. The paradox takes the form that the drug seems to work for the population – recovery rates are higher for drug takers and lower for those who do not take the drug. HOWEVER, when we divide the population into subpopulations, we may find that the drug is bad in ALL subpopulations. For example, with the subpopulations being males and females, we may find that the recovery rate of females who take the drug is lower than the recovery rate of females who do not take the drug. Similarly, the drug lowers the recovery rate for males as well. So the paradox is: the drug is BAD for females, and BAD for males, but good for the general public (without reference to gender). How can this be? Understanding this paradox requires working through the causal structures underlying the data.

Simpson’s Paradox for Drug Recovery Rates.

We now present another example of Simpson’s Paradox, which brings out some other kinds of causal chains. Suppose a new drug is being tested as a treatment for a disease. One group of patients, known as the “Treatment Group”, is given the drug. A second group of patients, known as the “Control Group”, is not given the drug. We find that the recovery rate from the disease is 56% in the Treatment Group, and only 44% in the Control Group. Thus it seems that the drug is beneficial; it increases the recovery rate from 44% to 56%. However, when we break down the Treatment and Control Groups by Gender, we find rather different conclusions:

Treatment Group: Males 1080/1800 recovered (60%), Females 40/200 recovered (20%), Overall 1120/2000 (56%)
Control Group: Males 160/200 recovered (80%), Females 720/1800 recovered (40%), Overall 880/2000 (44%)

This drug seems to be good for the population as a whole – it increases recovery rates from 44% in the Control Group, which did not take the drug, to 56% in the Treatment Group, which took the drug. But when we look at Males separately, we find that among Males, the recovery rate was 60% in the treatment group, and 80% in the control group. Taking the drug REDUCED the recovery rate among males from 80% to 60%, causing significant harm. Similarly, for Females, taking the drug REDUCES the recovery rate from 40% to 20%. This leads to Simpson’s Paradox. The drug is GOOD for the population as a whole, but it is BAD for males, and it is BAD for females! How can this be? A causal diagram can help us to understand this paradox.

[Causal diagram “DrugGender”: Drug → Recovery, with Gender → Drug and Gender → Recovery]

Taking the Drug or Not Taking the Drug is a causal factor for Recovery, as the arrow shows. But Gender is ALSO a causal factor for recovery. Being female leads to POOR chances of recovery (20% with the drug, 40% without it). Being male leads to BETTER chances of recovery (60% with the drug, 80% without it). In the population as a whole, recovery rates in the treatment group are affected by TWO factors, Gender and Drug. Taking the drug LOWERS the recovery rate, but the high proportion of males INCREASES the recovery rate in the Treatment Group. In the Control Group, the large proportion of females LOWERS the recovery rate, so that it seems that the treatment is beneficial. Actually, the drug is harmful, but this harm is concealed by the high proportion of males, which increases the recovery rate in the treatment group.

This is a classic case of CONFOUNDING. GENDER is a confounding variable. It is EXOGENOUS – not determined by either taking the drug or by recovery rates. It affects both the choice of whether or not the drug is taken, and also the recovery rates. Women are very likely NOT to take the drug (1800 W vs 200 M in the control group), while men are very likely to take the drug (1800 M vs 200 W in the treatment group). The standard REMEDY for confounding is to CONDITION on the confounding factor – that is, hold it constant, to prevent it from affecting the recovery. Once we hold gender constant, we find the effect of the DRUG ONLY (purged of gender effects) on the recovery rates. We then find that the drug LOWERS the recovery rate in both males and females, and is therefore harmful for everybody. The apparent beneficial effect comes from the GENDER effect on recovery – putting a lot of males into the Drug Treatment group makes it seem as if the drug is having a beneficial effect. In fact, males have good recovery rates relative to women, so having more men is the cause of the higher recovery rate in the drug treatment group.
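To make the aggregation arithmetic concrete, here is a minimal Python sketch (the function and variable names are mine; the counts and recovery rates are the ones quoted above):

    # Recovery data: (group, gender) -> (number of patients, recovery rate)
    counts = {
        ("treatment", "male"):   (1800, 0.60),
        ("treatment", "female"): ( 200, 0.20),
        ("control",   "male"):   ( 200, 0.80),
        ("control",   "female"): (1800, 0.40),
    }

    def pooled_rate(group):
        """Recovery rate of a group when gender is ignored (the aggregate view)."""
        patients  = sum(n for (g, _), (n, _) in counts.items() if g == group)
        recovered = sum(n * r for (g, _), (n, r) in counts.items() if g == group)
        return recovered / patients

    print(pooled_rate("treatment"), pooled_rate("control"))  # 0.56 vs 0.44: drug looks GOOD
    # Conditioning on the confounder (gender) reverses the verdict in BOTH subpopulations:
    for gender in ("male", "female"):
        with_drug    = counts[("treatment", gender)][1]
        without_drug = counts[("control", gender)][1]
        print(gender, with_drug, without_drug)  # recovery is LOWER with the drug in each case

The aggregate comparison and the conditional comparison use exactly the same four numbers; only the grouping differs.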

To see how changing the causal sequencing can completely change the analysis, we consider the same data set for drug treatment, but replace GENDER by Blood Pressure. While gender is obviously exogenous and cannot be affected by taking drugs, blood pressure CAN be affected, and so it is not necessarily exogenous. Now consider the following data (the same numbers as in the previous example with Gender, with low blood pressure in place of male, and normal blood pressure in place of female).

Here we have a situation where the drug in the overall population increases recovery rates from 44% to 56%. However, if we split the population into two types – those with normal blood pressure, and those with low blood pressure – then a different picture emerges. In the subpopulation with low blood pressure, recovery rates are high without the drug, and taking the drug REDUCES the recovery rate. The same thing is true of the normal blood pressure population. This is a case of Simpson’s Paradox, but the causal sequencing is very different, and therefore the data analysis is very different. Whereas gender is exogenous in the previous example, because gender cannot be affected by drugs, blood pressure is endogenous – it is affected by the drug. Thus the causal diagram is now the following:

[Causal diagram “DrugBP”: Drug → Recovery directly, and Drug → Blood Pressure → Recovery]

Because Blood Pressure is not an exogenous variable, it is NO LONGER a CONFOUNDER. Instead, the drug action is MEDIATED by blood pressure. That is, the strength of action of the drug is partially related to blood pressure, and the drug also affects the blood pressure. To understand the causal picture correctly, it is useful to consider a SIMPLER example, where the drug acts SOLELY through the blood pressure and has no direct effect on recovery at all. Suppose that in people with normal blood pressure, the disease is deadly, with a recovery rate of only 40%. However, among the population with LOW blood pressure, the recovery rate is very high at 80%. Low blood pressure creates a strong protective tendency against this disease. Noting this, suppose that doctors recommend a drug which lowers blood pressure (but has NO OTHER effect). The causal picture for this setup would be:

[Causal diagram “DrugBP2”: Drug → Blood Pressure → Recovery, with no direct Drug → Recovery arrow]

In the normal population, 90% of people have normal blood pressure and 10% have low BP. The recovery rate is 40% among those with normal BP and 80% among those with low BP, so the total recovery rate is 90% × 40% + 10% × 80% = 36% + 8% = 44%. The drug lowers BP in the normal-BP people, so that if everyone takes the drug, then 90% end up with low BP, while only 10% are not affected by the drug and maintain normal BP. After the drug is given, the recovery rate becomes 90% × 80% + 10% × 40% = 72% + 4% = 76%. So the drug, which has no direct effect on the disease, works by lowering BP and is highly effective. The general population recovery rate of 44% is increased to 76%.

To match the numbers of our table, and to explain the WHY of Simpson’s Paradox, we need to consider a more complicated situation. Suppose the drug lowers blood pressure, which helps to increase the recovery rate, as before. BUT suppose the drug itself has a toxic side effect. The drug reduces the recovery rate from 40% to 20% among the normal-blood-pressure population. It also reduces the recovery rate from 80% to 60% in the low-BP population. Now our table matches the first causal diagram, and has the following interpretation. The drug has a harmful direct effect on recovery. However, the drug lowers the blood pressure, and this has a highly beneficial effect on recovery. The sum of the two effects is positive, so that the recovery rate after drug treatment goes up from 44% to 56%.
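A few lines of Python, using only the percentages given above, verify both calculations: the pure blood-pressure channel raises the population recovery rate from 44% to 76%, while adding the toxic direct effect brings it down to the 56% of our table (a sketch; the names are mine):

    # Recovery rates by blood pressure, without the drug and with the toxic drug
    recover_no_drug    = {"normal": 0.40, "low": 0.80}
    recover_toxic_drug = {"normal": 0.20, "low": 0.60}

    def population_rate(rates, p_low):
        """Population recovery rate when a fraction p_low has low blood pressure."""
        return (1 - p_low) * rates["normal"] + p_low * rates["low"]

    print(population_rate(recover_no_drug, 0.10))     # 0.44: no drug, 10% have low BP
    print(population_rate(recover_no_drug, 0.90))     # 0.76: a drug that ONLY lowers BP
    print(population_rate(recover_toxic_drug, 0.90))  # 0.56: toxic drug, net effect still positive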

Note the dramatic difference in analysis between the two cases. When GENDER is the confounder, then the right result is obtained by CONDITIONING on the confounder, and considering the rates separately in the two subpopulations of males and females. Then we come to the conclusion that the drug is harmful. When BP is a CHANNEL for the action of the drug, then BP is no longer a confounder, and we come to the conclusion that the total effect in the general population is the right measure, and so the drug is actually beneficial overall.

It is of great importance to realize that the numbers stay the same throughout all of these analyses. It is the STORY behind the data, the HIDDEN real-world structures which generate the data, which change. The causal path diagrams ARE maps of these hidden real-world structures. Central econometric concepts of exogeneity and endogeneity, as well as confounding and whether or not to condition on a confounder, all depend on the hidden causal structures. Conventional econometric analysis ignores these causal structures and hence generally comes to wrong conclusions, based on superficial analyses.

This is the end of our sequence of posts on Simpson’s Paradox. The goal of these posts was to explain how hidden real-world causal structures, which are not captured in observable data, can nonetheless dramatically affect the data analysis. Exactly the same data set can convey radically different messages depending on differences in the causal structures which generated the data. The message is that we must re-build econometrics from the ground up. We must FIRST explicitly introduce causal structures, and then SECONDLY do data analysis conditional on the causality assumptions. It is impossible to do reliable data analysis without having a clear picture of the causal sequences underlying the data. Econometricians have avoided doing this because of the positivist prohibition against investigating, or even talking about, unobservable structures. Causality is fundamentally unobservable, as was already noted by Hume. Nonetheless, despite being intrinsically unobservable, it has strong implications for our data analysis, and cannot be ignored.

One of the key reasons for the dead-end we face in econometric analysis (and also in statistics) is the idea that the analysis of numbers can be separated from the real-world meaning and context of the numbers. Positivist ideas have been absorbed by the public, without conscious realization of this. When people say “Just give me the facts, I don’t want your opinions”, they think they are stating a commonplace and trite truth. They do not realize that this sentence is an advanced conclusion of a complex philosophical argument about the dual nature of knowledge, an argument which is fundamentally unsound. The “facts” – the numbers – and the “opinions” – the guessed-at causal phenomena which generated the numbers – cannot be neatly separated, and analysis of the facts REQUIRES guesses at the causal structures. The previous set of posts on Simpson’s Paradox (1, 2, & 3) illustrated the importance of learning about causal structures in the context of studying discrimination against females in admissions at Berkeley. Next, we will take the SAME set of numbers, the same data, and pretend that it comes from batting averages. We will see that analyzing batting averages which generate a Simpson’s Paradox leads to different considerations regarding causal structures. The hidden and unobservable real-world causal structures which generate the observable data cannot be ignored in statistical analysis, even though it is customary to do so in standard textbooks of econometrics and statistics.

Simpson’s Paradox in Baseball Scores. 

One of the central assumptions of orthodox statistical methodology is that we can do analysis of numbers without knowing their origins. The mean, median, and mode can be calculated for any list of numbers, BUT the meaning of these measures depends strongly on the real-world objects which are being measured by the numbers. The orthodox model has a statistical consultant who works with a field expert. The field expert knows the causal relationships, but the statistician looks only at the numbers, with minimal knowledge of what they measure. In fact, we will show that statistical analysis requires real-world knowledge and cannot be separated from the field analysis. To illustrate this principle, we consider the same numbers used for Berkeley admissions, but give them another interpretation, in the context of batting averages of baseball players.

Consider Tom and Frank, two batters who have batting averages of 56% and 44% respectively. On the basis of these numbers, it seems clear that Tom is the better batter. At a critical moment, when the team needs a hit, the coach should send out Tom to bat, as Tom will have a higher probability of getting a hit. However, an analyst looks at the hit record more deeply, dividing the batting average according to the type of pitcher faced: Left-Handed or Right-Handed. This leads to the following numbers:

Tom: vs Left-Handed 1080/1800 (60%), vs Right-Handed 40/200 (20%), Overall 1120/2000 (56%)
Frank: vs Left-Handed 160/200 (80%), vs Right-Handed 720/1800 (40%), Overall 880/2000 (44%)

While Tom has the better overall performance, with a batting average of 56% compared to 44%, when we break it down by the pitcher’s handedness, a different picture emerges. Frank is better against Left-Handed Pitchers, averaging 80% hits in comparison to Tom’s 60%. Similarly, Frank is better against Right-Handed Pitchers, averaging 40% against Tom’s average of only 20%. Again, we have Simpson’s Paradox. Frank is better than Tom against Left-Handed Pitchers, and also against Right-Handed Pitchers. But overall, against all pitchers, Tom is better than Frank. How can that be? This is a classic case of “confounding”. We can illustrate confounding by a causal diagram.

[Causal diagram “BatAvg”: Batter → Batting Average, and Left/Right Pitcher Mix → Batting Average, with the Mix exogenous]

The batting average depends on the batter’s performance. It also depends on the mix of left- and right-handed pitchers faced by the batter. Frank’s average can vary from 80% to 40% depending on the proportion of left- and right-handed pitchers that he faces. Similarly, Tom’s average can vary between 60% and 20% depending on the mix of Left/Right pitchers that he faces. The batting average depends on two different factors – one is the batter’s performance, and the second is the nature of the field. To evaluate batter performance, we must eliminate the confounder. One way to do so is to condition on the confounder – that is, to hold it constant. This means that we should condition on Left-Handed Pitchers, and separately on Right-Handed Pitchers. Doing so leads to a clear conclusion that Frank is the better batter. If both players face the SAME proportion of left- and right-handed pitchers, then Frank will definitely do better than Tom. As long as the MIX of Left- and Right-Handed Pitchers is EXOGENOUS – that is, it is determined without reference to the variables under study – the coach should send out Frank.

HOWEVER, it is also possible that the MIX is an ENDOGENOUS variable. This can happen in the following way. Frank is an exceptional batter, and has an amazing record of 80% hits against left-handed pitchers. This average declines to only 40% against right-handed pitchers. The coach of the opposing team is aware of this weakness of Frank, and switches to right-handed pitchers when Frank comes to bat. Thus, the normal mix of pitchers is 90% Left-Handed and 10% Right-Handed, which is what Tom will see. But when Frank is sent to bat, the opposing coach will change pitchers to heavily favor right-handers, so that Frank will see a mix of 90% Right-Handed Pitchers and only 10% Left-Handed Pitchers. The causal diagram is now different:

[Causal diagram “BatAvg2”: Batter → Batting Average directly, and Batter → Left/Right Pitcher Mix → Batting Average]

With this causal sequencing, the Left/Right Mix is NO LONGER a confounder, because it is no longer an EXOGENOUS variable. The MIX is INFLUENCED by the batter. When we want to compute the batting average, we have to take into account the direct effect of batting ability, and also the indirect effect created by the fact that the choice of batter influences the opposing coach’s choice of the MIX between left- and right-handed pitchers. Taking both into account, we see that the coach should now prefer to send Tom into the field, even though Frank is the better batter. This is because the opposing coach will respond to the choice of Frank by changing the pitchers, and with the changed pitcher mix, Frank will actually perform worse than Tom. One important lesson from this analysis is that knowledge of whether a variable is endogenous or exogenous depends on knowledge of real-world structures not directly observable in the numbers. We need to know whether or not the opposing coach looks at the batter to decide on the mix of pitchers he will use. Against ANY FIXED MIX of left- and right-handed pitchers, Frank will do better than Tom. However, if the mix is CHANGED and CHOSEN to face Frank with right-handed pitchers, against whom he is weak, then Frank will perform worse than Tom.
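A short sketch of the coach’s decision problem, using the hit rates quoted above, shows how the endogeneity of the pitcher mix reverses the right choice (the function name is mine):

    # Hit rates by batter and pitcher handedness, from the figures above
    hit_rate = {
        ("Frank", "L"): 0.80, ("Frank", "R"): 0.40,
        ("Tom",   "L"): 0.60, ("Tom",   "R"): 0.20,
    }

    def expected_average(batter, p_left):
        """Expected batting average against a mix with fraction p_left of left-handers."""
        return p_left * hit_rate[(batter, "L")] + (1 - p_left) * hit_rate[(batter, "R")]

    # Exogenous mix: both batters face the normal 90% left-handed mix -> send Frank.
    print(expected_average("Frank", 0.9), expected_average("Tom", 0.9))  # 0.76 vs 0.56
    # Endogenous mix: the opposing coach uses 90% right-handers against Frank -> send Tom.
    print(expected_average("Frank", 0.1), expected_average("Tom", 0.9))  # 0.44 vs 0.56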


The previous post (2-Simpson’s Paradox) provided an explanation of how all departments of a university could give preferential admissions to females, and yet males could achieve higher admission rates than females in the university as a whole. This post provides further alternative hidden causal structures for the same data set, which could radically alter the interpretations given earlier.

The central point we are trying to make here is that understanding data REQUIRES understanding the deeper causal structures which generate the data, and these causal structures are not DIRECTLY observable. A large set of philosophers of science have committed Kant's Blunder of forbidding us to investigate the unobservables. For example, Hume wanted to burn speculative writings, and Wittgenstein dogmatized that "whereof one cannot speak, thereof one MUST be silent". This prohibition has strongly inhibited the exploration and discussion of causal structures in econometrics. Nonetheless, understanding causal chains is essential for correct data analysis. In this post, we will explore a large number of variant causal structures which give similar observable outcomes but have radically different policy implications. In the previous post, we discussed the causal structure posited by Berk to explain the Simpson's Paradox in Berkeley admissions. Better understanding is achieved by positing an even simpler causal structure which leads to similar paradoxical results:

[Figure Chain: causal diagram – Gender → Department → Admissions]

The above causal diagram depicts the case where there is no discrimination by gender. Suppose that the Engineering department admits 60% of all applicants, regardless of whether they are male or female. Similarly, Humanities admits 30% of all applicants, regardless of whether they are male or female. So the admit rate depends on the department, but not on gender. Now suppose that 90% of females apply to Humanities and 10% apply to Engineering. Then the overall admit ratio for females will be 10% × 60% + 90% × 30% = 33%. Suppose that 90% of the males apply to Engineering, and only 10% to Humanities. Then the overall admit ratio for males will be 90% × 60% + 10% × 30% = 57%. So it will appear that admissions are heavily biased in favor of males: 57% of the males who apply get in, while only 33% of the females do.

In the causal chain diagrammed above, Gender affects Department choice, and the Department chosen affects the Admission rate; so Gender affects Admissions through the channel of Department choice. In this situation, we can learn that gender plays no direct role in admissions by conditioning on the Department. The link between Gender and Admissions is broken (or blocked) when we condition on Department. After conditioning on Department, we will find that Gender is independent of Admissions. Conditioning means doing the analysis separately for each department, holding department choice constant.
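A minimal sketch of this arithmetic and of the conditioning step, in Python, assuming the admission and application rates above (the variable names are mine):

```python
# Chain: Gender -> Department -> Admissions. Departments admit at a
# fixed rate regardless of gender: Engineering 60%, Humanities 30%.
ADMIT_RATE = {"Engineering": 0.60, "Humanities": 0.30}

# 90% of females apply to Humanities; 90% of males to Engineering.
DEPT_CHOICE = {"Female": {"Engineering": 0.10, "Humanities": 0.90},
               "Male":   {"Engineering": 0.90, "Humanities": 0.10}}

# Aggregate admit rates: the numbers look heavily biased toward males.
for gender, choice in DEPT_CHOICE.items():
    overall = sum(share * ADMIT_RATE[dept] for dept, share in choice.items())
    print(gender, round(overall, 2))        # Female 0.33, Male 0.57

# Conditioning on Department: within each department the admit rate is
# identical for both genders, so the Gender -> Admissions link is blocked.
for dept, rate in ADMIT_RATE.items():
    print(dept, "Female:", rate, "Male:", rate)
```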

There are many other causal sequences which can create outcomes resembling the original ones, but with causes entirely different from those discussed until now. For example, suppose that the admissions process is completely mechanical and depends only on SAT scores. Suppose that SAT scores vary from 20 to 80, and that those with SAT score x have an x% chance of admission. We consider an artificial example first, because it is very easy to understand. Suppose that females get scores of either 80 or 40, while males get scores of either 60 or 20. All females with a score of 80 apply to Engineering, and all females with a score of 40 apply to Humanities; similarly, all males with a score of 60 apply to Engineering, and all males with a score of 20 apply to Humanities. Set the proportions up to match the initial example: 200 females with SAT score 80 apply to Engineering, while 1800 males with SAT score 60 apply to Engineering; likewise, 1800 females with SAT score 40 apply to Humanities and 200 males with SAT score 20 apply to Humanities. This will create data identical to the first table. However, neither Gender nor Department has any effect on admissions, which are determined purely by the SAT scores.
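The following sketch, under the same assumptions (the group sizes above and the score-to-admission rule; the `groups` layout is my own), reproduces the paradoxical table from the purely mechanical rule:

```python
# Mechanical rule: an applicant with SAT score x has an x% chance of
# admission, regardless of gender and department.
# Groups: (gender, SAT score, department, number of applicants)
groups = [("Female", 80, "Engineering",  200),
          ("Male",   60, "Engineering", 1800),
          ("Female", 40, "Humanities",  1800),
          ("Male",   20, "Humanities",   200)]

admits, applicants = {}, {}
for gender, score, dept, n in groups:
    key = (gender, dept)
    admits[key] = admits.get(key, 0) + n * score / 100   # expected admits
    applicants[key] = applicants.get(key, 0) + n

# Per-department rates: females are "favored" in both departments...
for key in admits:
    print(key, round(admits[key] / applicants[key], 2))
# Engineering: Female 0.8 vs Male 0.6; Humanities: Female 0.4 vs Male 0.2

# ...yet the aggregate rates still favor males.
for gender in ("Female", "Male"):
    a = sum(v for (g, _), v in admits.items() if g == gender)
    n = sum(v for (g, _), v in applicants.items() if g == gender)
    print(gender, round(a / n, 2))          # Female 0.44, Male 0.56
```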

Here, Gender affects SAT scores: females get higher scores. Gender also affects choice of department: females apply overwhelmingly to Humanities, which is more difficult to get into. However, Gender does not affect admission rates. Admission rates depend only on the SAT score – not on gender, and not on department. The causal picture in this situation looks like this:

[Figure DAG4: causal diagram – Gender → SAT Score; Gender and SAT Score → Department; SAT Score → Admissions]

Both Gender and SAT scores affect choice of Department, since females with high scores opt for Engineering, while those with low scores opt for Humanities. Admissions depend ONLY on the SAT score, and are blind to Gender and unaffected by Department. However, the data will be exactly the same as that analyzed in the original table illustrating the Simpson's Paradox: Gender and Department both appear to be strongly related to admit rates, with Engineering having easier admissions and females being favored by both departments. All these misleading relationships disappear when we draw the right causal diagram, as above. We can test the validity of this diagram by conditioning on the SAT score: if the diagram is valid, then after conditioning on the SAT score, Gender and Department will be independent of the admit ratios. The observable data DO NOT reveal the hidden underlying causal chains, and create a misleading picture. This is why an empirical approach, which refuses to go beyond the observables, is bound to fail.
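As a rough illustration of this test, the small simulation below (reusing the hypothetical `groups` from the previous sketch) conditions on the SAT score: within each score group, the empirical admit rate matches score/100, with no residual dependence on gender or department:

```python
import random

random.seed(0)
groups = [("Female", 80, "Engineering",  200),
          ("Male",   60, "Engineering", 1800),
          ("Female", 40, "Humanities",  1800),
          ("Male",   20, "Humanities",   200)]

# Simulate individual applicants; admission depends only on the score.
records = [(gender, score, dept, random.random() < score / 100)
           for gender, score, dept, n in groups
           for _ in range(n)]

# Condition on SAT score: within each score group, the empirical admit
# rate is close to score/100, whatever the gender or department mix.
for s in (80, 60, 40, 20):
    cell = [admitted for _, score, _, admitted in records if score == s]
    print(s, round(sum(cell) / len(cell), 2))
```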

The point is that a superficial analysis, which only looks at the numbers without attempting to assess the underlying causal structures, cannot lead to a satisfactory data analysis. David Freedman said that we must expend shoe leather in order to understand causality: we must go out into the real world and look at the structural details of how events occur. To find out whether discrimination occurs, we should examine how the admissions committee works – who is on the committee, what their opinions are regarding gender-blind admissions, and what procedures are used to make admissions decisions. The idea that the numbers by themselves can provide us with causal information is false. It is equally false that a meaningful analysis of data can be done without taking any stand on the real-world causal mechanism. Each of the diverse causal mechanisms discussed above has radically different implications regarding whether or not there is discrimination by gender, and how we should go about rectifying the problem, if there is a problem to rectify.

These issues are of extreme importance with reference to Big Data and Machine Learning. Machines cannot expend shoe leather, and enormous amounts of data cannot provide us with knowledge of the causal mechanisms in a mechanical way. However, a small amount of knowledge of real-world structures, used as causal input, can lead to substantial payoffs in terms of meaningful data analysis. The problem with current econometric techniques is that they have no scope for the input of causal information – the language of econometrics does not have the vocabulary required to talk about causal concepts.

In the remaining portion of this discussion, we will look at the same data set in some other real-world contexts, where the same numbers lead to radically different conclusions. See (4-Simpson's Paradox: Baseball Batting Averages) and (5-Simpson's Paradox: Drugs and Recovery Rates). This goes against a standard assumption that statistical analysis can be done by looking at numbers, while the "real-world" context and interpretation can be left up to the field expert. We cannot separate the data from its real-world meanings and context.

Postscript: For a summary of all five posts, with links to each one, see RWER Blog Post on Simpson’s Paradox.