Econometrics to Prove Everything!

I reproduce below an excerpt from my paper, “Methodological Mistakes and Econometric Consequences,” which discusses how “specification searches” allow us to fit a model conforming to any theory to any data set. Thus conventional econometric methodology can be used to prove anything at all. References cited in the excerpt can be found in the bibliography of the paper. The video is a one-hour talk on the full paper:

Methodological Mistakes and Econometric Consequences

Section 4.2 of paper on Methodological Mistakes and Econometric Consequences

We start with a finite data set and seek a model that fits it. The required model must satisfy a large number of restrictions. In the ideal case, a model is given by the theory, and when we apply it, lo and behold, we find a miraculously good fit. This would be a wonderful confirmation of the theory, since it would be rather surprising to find a perfect fit on a first attempt. In practice, this almost never happens. We run through many different models until we find one which confirms our theory. Leamer (1978, 1983) described the process of fitting a regression model as a “specification search,” and argued that while this is useful for experimental data, it is either useless or misleading for observational data. This is because the large collection of tools at our disposal virtually guarantees that we can find a model conforming to whatever criteria we desire. The range of models we can try is infinite-dimensional and limited only by our creativity, while the data set is fixed and finite. Tests of residuals, so strongly recommended by Hendry (1993, p. 24), have been appropriately called “indices of conformity” because they are not really tests: we can, and do, re-design the model until the residuals satisfy all of them.

What are the consequences of overfitting, the standard operating procedure in econometrics? As we argued earlier, overfitting will almost certainly miss any true relationships which exist, because it builds the errors into the fitted function in the process of minimizing them. We provide evidence that we have the “tools to fit anything”: the infinite-dimensional variety of theoretical models capable of conforming to any hypothesis about reality can fit any finite data set.

Since Nelson and Plosser (1982) launched the literature, many authors have attempted to test whether macroeconomic time series are difference stationary or trend stationary. A lot of statistical and economic consequences hinge on the answer. Here is a list of the conclusions of authors who have studied the US annual GNP series:

  1. Difference stationary: Nelson and Plosser (1982), Murray and Nelson (2002), Kilian and Ohanian (2002)
  2. Trend stationary: Perron (1989), Zivot and Andrews (1992), Diebold and Senhadji (1996), Papell and Prodan (2003)
  3. Don’t know: Rudebusch (1993)

As is evident, consensus has not emerged, and there has been no accumulation of knowledge with the passage of time. In setting up a unit root test, we have a choice of framework within which to test, and a choice of test statistics. Atiqurrahman (2011) has shown that these make a crucial difference to the outcome: for any time series, we can get whatever result we desire (trend stationarity or difference stationarity) by choosing these two factors suitably.

As a second example, consider published studies of the export-led growth (ELG) hypothesis for Indonesia. The alternatives are growth-led exports (GLE), bidirectional causality (BD), and no causality (NC). There exist studies confirming all four hypotheses:

  1. ELG: Jung and Marshall (1985), Ram (1987), Hutchison and Singh (1992), Piazolo (1996), Xu (1996), Islam (1998), Amir (2004), Liwan and Lau (2007)
  2. GLE: Ahmad and Harnhirun (1992), Hutchison and Singh (1992), Pomponio (1996), Ahmad et al. (1997), Pramadhani et al. (2007), Bahmani-Claire (2009)
  3. BD: Bahmani-Oskooee et al. (1991), Dodaro (1993), Ekanayake (1999)
  4. NC: Hutchison and Singh (1992), Ahmad and Harnhirun (1995), Arnade and Vasavada (1995), Riezman et al. (1996), Lihan and Yogi (2003), Nushiwat (2008)

As illustrated above, on economic issues of interest we can find published results confirming or rejecting almost any hypothesis: whether or not purchasing power parity holds, whether or not debts are sustainable, whether or not markets are efficient, and so on.

One of the central pillars of macroeconomic theory is the consumption function, and there is a huge literature, both theoretical and empirical, on estimating it. Thomas (1993) reviews this literature and writes:

“Perhaps the most worrying aspect of empirical work on aggregate consumption is the regularity with which apparently established equations break down when faced with new data. This has happened repeatedly in the UK since the 1970’s. … the reader can be forgiven for wondering whether econometricians will ever reach the stage where even in the short run their consumption equations survive the confrontation with new data.”

In other words, consumption functions are continually re-adapted to fit new incoming data.
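Thomas’s observation is exactly what overfitting predicts: a model flexible enough to fit the sample perfectly carries no information about new data. Here is a toy sketch in Python (a hypothetical illustration with randomly generated “data,” not any actual series): any n data points can be fit exactly by a degree-(n − 1) polynomial, yet the fitted curve is pure artifact off-sample.

```python
import random

def lagrange_fit(xs, ys):
    """Return the unique degree-(n-1) polynomial through the n given points."""
    def p(x):
        total = 0.0
        for i, xi in enumerate(xs):
            term = ys[i]
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

random.seed(1)
xs = list(range(8))
ys = [random.gauss(0, 1) for _ in xs]   # pure noise: no structure to find

p = lagrange_fit(xs, ys)

# In-sample, the fit is perfect (up to floating-point error)...
max_err = max(abs(p(x) - y) for x, y in zip(xs, ys))
print(f"max in-sample error: {max_err:.2e}")

# ...but the "model" learned nothing: its prediction at the next point
# is an artifact of the interpolation, not of any structure in the data.
print(f"prediction at x=8: {p(8):.2f}")
```

The perfect in-sample fit confirms nothing; confronted with the next observation, such an equation “breaks down” in exactly the way Thomas describes.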

Magnus (1999) challenged readers to find an empirical study that “significantly changed the way econometricians think about some economic proposition.” We provide a more precise articulation of this challenge to conventional methodology. Our graduate students take courses, pass comprehensive exams, and write theses to qualify for a Ph.D. To ensure that they are adequately grounded in econometrics, suppose we add the following two requirements (call this the magnified Magnus challenge):

  • Test 1: Take any economic theory and support it with econometric evidence. Or, in a simpler and more concrete version: for any two arbitrarily chosen variables X and Y, produce a regression showing that X is a significant determinant of Y.
  • Test 2: Take any current empirical paper from the literature and, using standard econometric techniques, reach conclusions opposite to those reached in the paper.

How can we accomplish this? There is a huge range of techniques available, all of which can be shown to be acceptable practice by pointing to papers published in top-ranked journals. We list some of the major ones:

  1. Each theoretical variable can be represented by a wide variety of observable time series. In many cases, a suitable series can be constructed to suit the requirements of the researcher.
  2. Additional control variables, dynamic structure, and the choice of lag lengths provide a large number of models to test for conformity to the desired hypothesis.
  3. A large number of tests, many known to have low power, are available: formulate an appropriate null hypothesis and fail to reject it using a low-power test.
  4. Unit roots, nonlinearity, functional forms, and ad-hoc assumptions create a huge range of possible models to try, one of which will surely work to confirm the desired null hypothesis.
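The concrete version of Test 1 can be sketched in a few lines of Python (a hypothetical simulation, not a claim about any real data): Y is pure noise, yet by searching over candidate regressors — each one also pure noise — a conventionally “significant” determinant of Y turns up quickly, since each attempt has roughly a 5% chance of clearing |t| > 2.

```python
import random

def slope_tstat(x, y):
    """t-statistic of the slope in an OLS regression of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                       # slope estimate
    a = my - b * mx                     # intercept estimate
    ssr = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    se = (ssr / (n - 2) / sxx) ** 0.5   # standard error of the slope
    return b / se

random.seed(0)
n = 50
y = [random.gauss(0, 1) for _ in range(n)]   # Y: pure noise by construction

# Specification search: keep drawing candidate regressors (each also pure
# noise, unrelated to Y) until one is "significant" by the usual |t| > 2 rule.
tries, t = 0, 0.0
while abs(t) < 2:
    tries += 1
    x = [random.gauss(0, 1) for _ in range(n)]
    t = slope_tstat(x, y)

print(f"'significant' regressor found after {tries} tries (t = {t:.2f})")
```

Only the final, “successful” regression would appear in a paper; the discarded tries are invisible to the reader, which is precisely why such significance is an index of conformity rather than evidence.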

Virtually any outcome can be achieved by varying these parameters. Any professional econometrician worth his salt could pass these tests without breaking a sweat. Graduate students might have more trouble, but only the really incompetent would be seriously delayed in graduation by this additional requirement. Unlike most tests, where passing counts as success, failing these tests is a fundamental requirement for a good methodology. On the assumption that currently acceptable conventional methodological practice in econometrics can pass these tests with flying colors, we assert:

PROPOSITION: Any methodology which can pass Tests 1 and 2 is completely useless as a methodology for the production of knowledge.

Proof: It is immediately obvious that any methodology which can prove and disprove all possible economic theories is useless.

End of excerpt. For the full paper and the references cited above, see: Methodological Mistakes and Econometric Consequences.

4 thoughts on “Econometrics to Prove Everything!”

  1. Asad, As so often before, as a mathematician, I get that the issues you raise are of great importance and am inclined to agree with your findings, in so far as they apply to ‘practical action’. But scanning your paper I see that you still have a downer on ‘logical positivism’ and cite Suppes (1977), which ref is not provided at the end. I’d be interested to have that ref, as it may help me to understand where social scientists are ‘coming from’. My own view on econometrics is more like that of .

    Where I agree with you is that logical positivism inspired some weird views. Coincidentally, I’ve been trying to make sense of A. Golan Information and Entropy Econometrics — A Review and Synthesis, Foundations and Trends in Econometrics Vol. 2, Nos. 1–2 (2006) 1–145.

    Bayes pointed out an obvious problem with this: the best one could do with such methods is to characterise the mechanism as it has operated up to now, which may not be all that helpful in the face of innovation or failure, if you want to think about what might happen next. But surely, even allowing for Suppes’ concerns, one ought to be able to use statistics to characterise the past, assuming no great changes? What Golan seems to claim is that econometricians tend to rely on some sort of ‘maximum entropy principle’. But what does that mean?

    In cosmology, the principle has been used to minimise the ‘false positives’ when looking for planets. In economics, if you have certain beliefs that you treat as ‘given laws’ (like Newton’s or Einstein’s) then the same approach minimises your chance of uncovering anomalies. I’m guessing this isn’t what you want. Often, it seems to me, there are some things ‘going on’ in economies that aren’t covered by the mainstream theories, and what we want to do is uncover them, not obscure them.

    So just as the problem might not be logical positivism per se, but how it has been ‘used’, maybe the problem is not (to quote Wikipedia) “the application of statistical methods to economic data in order to give empirical content to economic relationships”, but the use of methods based on inappropriate principles?

    P.S. Thanks for prompting me to check Suppes’ later views: I’ve heard before from social scientists that he had had some unhelpful views: good to see he’s moved on.

  2. Dave, I value your thoughtful comments. I read the Suppes essay you linked – I think that my definition of probability given in an earlier blog resolves some of the problems he raises. For others, I need to study quantum a little more, which has been on my agenda for a while, but I am too busy with other projects to give it the time needed. Regarding logical positivism, have you seen my earlier blog post on the Emergence of Logical Positivism? It took me years to detoxify my mind of positivist poisons, but it was immensely rewarding — one MUST look beyond the observables to find meanings in our lives and in science as well; positivism prevents us from doing so. From time to time I have thought about founding a club called “Positivists Anonymous” to help ex-positivists who are struggling to get over the effects of this philosophy.

    1. Thanks. I will make time to look at your work on probability, honest. Meanwhile I checked . I tend to agree with it when it says “logical positivism has been generally misrepresented, sometimes severely. Arguing for their own views, often framed versus logical positivism, many philosophers have reduced logical positivism to simplisms (sic) and stereotypes”. Would you lose anything by confining your criticisms to or instead, leaving open the question as to how to try to “prevent confusion rooted in unclear language and unverifiable claims”, if not by being ‘positive’ about the application of logic (which I tend to be)?

    2. I have looked at your thoughts on probability and commented there. Whilst I would start in a different place and use different language, and haven’t checked all your details, your general approach does seem to help avoid some common confusions, of significance for economics. I would go further, though. But first, what do you make of my remarks on logical positivism? (I have previously commented on some of your postings on the subject, but haven’t really got my head around what you think logical positivism is.)

      It seems to me that there are perfectly sound logical theories of both geometry and probability, and that if they are worth learning they must have some correspondence to reality. The most ‘positive’ view would be that they have some direct correspondence, but this is not tenable. In the case of geometry this was well known to the Vienna Circle, amongst others, but in the case of probability theory it was much discussed without leading to any clear resolution. The problem with economics, it seems to me, is that it largely ignores logic, at least when it suits ‘them’. More generally, if you can use some theory to prove anything you like, the theory must surely be wrong, and the resolution is surely to use more or better logic, not less?
