4-Simpson’s Paradox

One of the key reasons for the dead-end we face in econometric analysis (and also in statistics) is the idea that the analysis of numbers can be separated from the real-world meaning and context of those numbers. Positivist ideas have been absorbed by the public without conscious realization of this. When people say: “Just give me the facts, I don’t want your opinions”, they think they are stating a commonplace and trite truth. They do not realize that this sentence is an advanced conclusion of a complex philosophical argument about the dual nature of knowledge, an argument which is fundamentally unsound. The “facts” (the numbers) and the “opinions” (the guessed-at causal phenomena which generated the numbers) cannot be neatly separated, and analysis of the facts REQUIRES guesses at the causal structures. The previous set of posts on Simpson’s Paradox (1, 2, & 3) illustrated the importance of learning about causal structures in the context of studying discrimination against females in admissions at Berkeley. Next, we will take the SAME set of numbers, the same data, and pretend that they come from batting averages. We will see that analyzing batting averages which generate a Simpson’s Paradox leads to different considerations regarding causal structures. The hidden and unobservable real-world causal structures which generate the observable data cannot be ignored in statistical analysis, even though it is customary to do so in standard textbooks of econometrics and statistics.

Simpson’s Paradox in Baseball Scores

One of the central assumptions of orthodox statistical methodology is that we can analyze numbers without knowing their origins. The mean, median, and mode can be calculated for any list of numbers, BUT the meaning of these measures depends strongly on the real-world objects which are being measured by the numbers. The orthodox model has a statistical consultant who works with a field expert: the field expert knows the causal relationships, while the statistician looks only at the numbers, with minimal knowledge of what they measure. In fact, we will show that statistical analysis requires real-world knowledge and cannot be separated from the field analysis. To illustrate this principle, we consider the same numbers used for the Berkeley admissions, but give them another interpretation, in the context of the batting averages of baseball players.

Consider Tom and Frank, two batters with batting averages of 56% (Tom) and 44% (Frank). On the basis of these numbers, it seems clear that Tom is the better batter. At a critical moment, when the team needs a hit, the coach should send out Tom to bat, as Tom will have the higher probability of getting a hit. However, an analyst looks at the hit record more deeply, dividing the batting average according to the type of pitcher faced: left-handed or right-handed. This leads to the following numbers:

             vs Left-Handed    vs Right-Handed    Overall
    Tom           60%                20%            56%
    Frank         80%                40%            44%

While Tom has the better overall performance, with a batting average of 56% compared to 44%, a different picture emerges when we break it down by pitcher handedness. Frank is better against left-handed pitchers, averaging 80% hits in comparison to Tom’s 60%. Similarly, Frank is better against right-handed pitchers, averaging 40% against Tom’s average of only 20%. Again, we have a Simpson’s Paradox. Frank is better than Tom against left-handed pitchers, and also against right-handed pitchers; but overall, against all pitchers, Tom is better than Frank. How can that be? This is a classic case of “confounding”. We can illustrate confounding by a causal diagram.
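The reversal is pure arithmetic of weighted averages, and can be checked directly. Below is a minimal sketch in Python; the at-bat counts are illustrative, chosen only to reproduce every percentage quoted in this post (they embody the 90/10 splits of pitchers faced that are described further below):

    # Hit records as (hits, at-bats), consistent with the stated averages:
    # Tom: 60% vs LHP, 20% vs RHP, 56% overall;
    # Frank: 80% vs LHP, 40% vs RHP, 44% overall.
    records = {
        "Tom":   {"LHP": (54, 90), "RHP": (2, 10)},
        "Frank": {"LHP": (8, 10),  "RHP": (36, 90)},
    }

    for batter, by_pitcher in records.items():
        for pitcher, (hits, at_bats) in by_pitcher.items():
            print(f"{batter} vs {pitcher}: {hits / at_bats:.0%}")
        hits = sum(h for h, _ in by_pitcher.values())
        at_bats = sum(ab for _, ab in by_pitcher.values())
        print(f"{batter} overall: {hits / at_bats:.0%}")

Frank wins within each pitcher category yet loses overall, because the overall totals weight each batter’s per-pitcher averages by very different proportions of left- and right-handed pitchers faced.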

[Causal diagram “BatAvg”: Batter Performance → Batting Average ← Left/Right Pitcher Mix]

The batting average depends on the batter’s performance. It also depends on the mix of left- and right-handed pitchers faced by the batter. Frank’s average can vary from 80% to 40% depending on the proportion of left- and right-handed pitchers that he faces. Similarly, Tom’s average can vary between 60% and 20% depending on the mix of left/right pitchers that he faces. The batting average thus depends on two different factors: one is the batter’s performance, and the second is the mix of pitchers he faces. To evaluate batter performance, we must eliminate the confounder. One way to do so is to condition on the confounder, that is, to hold it constant. This means that we should condition on left-handed pitchers, and separately on right-handed pitchers. Doing so leads to a clear conclusion: Frank is the better batter. If both players face the SAME proportion of left- and right-handed pitchers, then Frank will definitely do better than Tom. As long as the MIX of left- and right-handed pitchers is EXOGENOUS, that is, determined without reference to the variables under study, the coach should send out Frank.

HOWEVER, it is also possible that the MIX is an ENDOGENOUS variable. This can happen in the following way. Frank is an exceptional batter, with an amazing record of 80% hits against left-handed pitchers. This average declines to only 40% against right-handed pitchers. The coach of the opposing team is aware of this weakness of Frank’s, and switches to right-handed pitchers when Frank comes to bat. Thus, the normal mix of pitchers is 90% left-handed and 10% right-handed, which is what Tom will see. But when Frank is sent to bat, the opposing coach will change pitchers to heavily favor right-handed pitchers, so that Frank will see a mix of 90% right-handed pitchers with only 10% left-handed pitchers. The causal diagram is now different:

[Causal diagram “BatAvg2”: Batter → Left/Right Pitcher Mix → Batting Average, together with a direct Batter → Batting Average effect]

With this causal sequencing, the left/right MIX is NO LONGER a confounder, because it is no longer an EXOGENOUS variable: the MIX is INFLUENCED by the choice of batter. When we want to compute the batting average, we have to take into account the direct effect of batting ability, and also the indirect effect created by the fact that the choice of batter influences the opposing coach’s choice of the MIX between left- and right-handed pitchers. Taking both into account, we see that the coach should now prefer to send in Tom, even though Frank is the better batter. This is because the opposing coach will respond to the choice of Frank by changing the pitchers, and with the changed pitcher mix, Frank will actually perform worse than Tom. One important lesson from this analysis is that knowing whether a variable is endogenous or exogenous depends on knowledge of real-world structures not directly observable in the numbers. We need to know whether or not the opposing coach looks at the batter to decide on the mix of pitchers he will use. Against ANY FIXED MIX of left- and right-handed pitchers, Frank will do better than Tom. However, if the mix is CHANGED and CHOSEN to face Frank with right-handed pitchers, against whom he is weak, then Frank will perform worse than Tom.
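The coach’s decision in the two scenarios can be made concrete with a short computation. The sketch below uses only the per-pitcher averages and the 90/10 mixes given above: under any fixed, exogenous mix, Frank’s expected average exceeds Tom’s by exactly 20 points, while under the endogenous mix chosen by the opposing coach, Tom’s 56% beats Frank’s 44%:

    # Per-pitcher batting averages from the text.
    AVG = {"Tom":   {"LHP": 0.60, "RHP": 0.20},
           "Frank": {"LHP": 0.80, "RHP": 0.40}}

    def expected_avg(batter, p_lhp):
        """Expected average against a mix with fraction p_lhp of left-handers."""
        a = AVG[batter]
        return a["LHP"] * p_lhp + a["RHP"] * (1 - p_lhp)

    # Scenario 1: EXOGENOUS mix -- both batters face the same fixed mix.
    for p in (0.1, 0.5, 0.9):
        print(f"fixed p(LHP)={p}: Tom {expected_avg('Tom', p):.0%}, "
              f"Frank {expected_avg('Frank', p):.0%}")

    # Scenario 2: ENDOGENOUS mix -- the opposing coach reacts to the batter.
    mix_faced = {"Tom": 0.9, "Frank": 0.1}  # fraction of left-handers faced
    for batter, p in mix_faced.items():
        print(f"endogenous: {batter} faces p(LHP)={p} "
              f"-> {expected_avg(batter, p):.0%}")

Algebraically, Frank’s lead at any fixed mix p is (0.80 − 0.60)p + (0.40 − 0.20)(1 − p) = 0.20, which is why conditioning on the mix always favors Frank; only the endogenous switch of mixes reverses the ranking.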


4 thoughts on “4-Simpson’s Paradox”

  1. I appreciate the series of posts on econometrics, as they pertain very closely to Algorithmic Economics ( https://goingdigital2019.weaconferences.net/papers/how-could-the-cognitive-revolution-happen-to-economics-an-introduction-to-the-algorithm-framework-theory/ ). That is, the points made here can be clearly supported by Algorithm Framework Theory (AFT); all of them are results of the computing economy.

    1. It is comprehensible why data usually cannot be directly distilled into a conclusion, an answer, a theory, etc. The reason is combinatorial explosion. That is, data can be classified, processed, and explained in very many ways; the number of ways (as demonstrated by the examples) often exceeds our imagination and cannot all be tried within any acceptable period. When the analysis draws in more data from outside, the “explosions” become much heavier. This is not inevitable, by the way; some other examples could directly produce the desired conclusions, and anyone could raise such contrary examples.

    2. Therefore the “causal structures” are needed: they eliminate some of the absurd ways and thereby mitigate the explosions, so that only a few ways are left for us to compute and the task at hand becomes easier. However, what are the “causal structures”? Where do they come from?

    3. The answer: from other or previous research. They are perhaps the achievements of other investigations (as the author said: “we must go out into the real world and look at the structural details of how events occur. To find out about whether or not discrimination occurs, we should examine how the admissions committee works – who is on the committee, what are their opinions regarding gender-blind admissions, and the procedures used to make admissions decisions.”). But at the exact moment of answering the original question, when we have no time to investigate, what should we do? The answer has to be: no answer! A problem is only solvable for those with the relevant knowledge, not for those without any stock of knowledge, and not for those whose knowledge is irrelevant. The “causal structures” are merely a kind of knowledge, instilled into our brains in advance by education, and most of them are the harvests of our ancestors. Research advances slowly, day by day, year by year, and generation by generation, to be accumulated, screened, and condensed into the oral or written “textbooks” for inheritance and education. This arrangement is Algorithmically called the “Roundabout Method of Production of Thoughts”.

    4. The author seems to say that econometric research needs to import some knowledge very different from the data before our eyes. No! It is not very different, but just the results of other research, and the research methods were similar to econometrics rather than different. The author also seems to say that the truth, or the “causal structures”, are easy to acquire, or have been stored in our brains innately. I would say that even the most “obvious” or “doubtless” “truth” (e.g. gender) must originally have been built up with difficulty over a long history; it looks easy to us only because it is a finished product which we accepted long ago.

    5. Although various paradoxes can be used to indicate how “bounded” human intelligence is, they are not necessary for this. Human thinking is bounded anywhere and anytime, but boundless historically. The thinking world is not linear, but curved everywhere.

  2. 1 – If you read Freedman’s paper “Statistical Models and Shoe Leather” ( http://psychology.okstate.edu/faculty/jgrice/psyc5314/Freedman_1991A.pdf ) you will learn how causality is discovered in practice – and such discoveries are commonplace.
    2 – Currently econometrics has NO WAY of incorporating causal information. Directed Acyclic Graphs (DAGs), created by Pearl and his followers for causal analysis, provide an analytical tool of great power. This tool requires learning, just as someone who does not know calculus cannot solve problems involving derivatives and integrals.
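    To make the two diagrams from the post concrete, here is a minimal sketch in plain Python (no causal-inference library is assumed; the node names are just labels for this example). It encodes both DAGs as edge sets and checks whether the Mix lies on a directed path from the batter, which is what distinguishes the mediator case from the confounder case:

        # Each DAG is a set of directed edges (cause, effect).
        # Diagram 1: the Mix is exogenous, a separate cause of BatAvg.
        dag1 = {("Batter", "BatAvg"), ("Mix", "BatAvg")}
        # Diagram 2: the opposing coach sets the Mix after seeing the
        # batter, so the Mix mediates the batter's effect on BatAvg.
        dag2 = {("Batter", "BatAvg"), ("Batter", "Mix"), ("Mix", "BatAvg")}

        def descendants(edges, node):
            """All nodes reachable from `node` along directed edges."""
            reached, frontier = set(), [node]
            while frontier:
                current = frontier.pop()
                for cause, effect in edges:
                    if cause == current and effect not in reached:
                        reached.add(effect)
                        frontier.append(effect)
            return reached

        for name, dag in (("Diagram 1", dag1), ("Diagram 2", dag2)):
            if "Mix" in descendants(dag, "Batter"):
                print(name, ": Mix is a mediator -- do NOT condition on it")
            else:
                print(name, ": Mix is exogenous -- condition on it")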

    1. 1. Discoveries are commonplace, I could agree, but this does not mean that the knowledge sufficient to correctly solve a problem is easy to develop in full. One person or one generation can contribute to the thesaurus of mankind’s knowledge only marginally. When discoveries are easy, the thesaurus grows very much larger, and therefore the impossibility of re-building the thesaurus within any acceptably short time becomes endogenized. Why was the re-building deemed “easy”? This is exactly what mainstream economics hints. What we are talking about here is the very core of how to reform economics.

      2. To be frank, I am not good at econometrics, but to my understanding the simplicity or purity of econometrics is not a defect but a merit. What needs reform is economics rather than econometrics. You certainly have the right to choose econometrics as a starting point for the reform of economics, but do you know how big the task of “incorporating causal information into econometrics” is? This task must be very heavy. In other words, it can be called “knowledge-based analysis”; it is so hard and complicated that it can only be carried out by computer. This is exactly what some of the “computational economists” are trying to do now, and exactly what Algorithmic Economics prefers as its “formal branch”. The projected “huge system” may require the cooperation of economists worldwide.

      To my understanding, all economics-reformers share similar ideas, differing only a little, and these differences can now be bridged Algorithmically. Thanks.

    2. The collapse of the illusion that knowledge can be easily re-built is a milestone in the history of Artificial Intelligence engineering, and could now also be a turning point on the way to economics reform. Some very inspiring references: The Philosophy of Artificial Intelligence, edited by Margaret Boden, Oxford University Press, 1990, articles 7, 8, 9, 13.

      However, nothing should be clearer than the Algorithmic reasons. As combinatorial explosions happen, the intention of deducing from the original information the answer to a question, or of re-building any necessary knowledge immediately, falls into despair, and this despair is just the point where subjectivities, or “irrationalities”, or “Mental Distortions”, begin. This “turning” is the climax of Algorithm Framework Theory. Therefore, knowledge exists discretely, or dispersively, in huge amounts, connected more or less, like semi-finished products not necessarily bound into a “finished” product. The thesaurus of knowledge is composed of pluralities, conflicts, and mixtures, so it has to be accumulated historically. This is one of the reasons why AI is becoming more successful now: AI relegated deductivism, and now it is the turn of economics.
