Finding a Progressive Methodology

Ever since the Global Financial Crisis, a growing number of voices have called for change in the economics curriculum. However, even economists who are sharply critical of the mainstream (Rodrik, Stiglitz, Krugman) suggest only minor and peripheral changes, and do not question the fundamental methodological basis on which neoclassical economics rests. In fact, a radical paradigm shift is required. According to the current nominalist methodology, any model which produces a match to observables is a good model. Economists have lowered the bar further by not even requiring a good match, and often not even comparing model results to reality; see “Friedman’s Methodology: A Stake through the Heart of Reason.” When the methodology is this deficient, people are officially allowed to make crazy assumptions, as long as the model produces a match with reality. For example, Paul Romer says in “The Trouble with Macro” that macro models attribute fluctuations to imaginary forces (like phlogiston) instead of agent behavior. Under such a methodology, a good model can only emerge by random accident, just as the theory of evolution holds that life emerged by accident. I have explained how this seriously mistaken methodology came to be adopted, as a result of the wrong side winning the battle of methodologies; see “Method or Madness?”

Keynesian models remain substantially superior to modern RBC and DSGE models because they can explain involuntary unemployment, which is ruled out by assumption in the latter. They can also explain how money, banking, and debt have significant impacts on the real economy, unlike modern macro models. Nonetheless, Keynesian models were rejected in favor of dramatically inferior ones; see the postscript on the linked post “70 Years of Economists’ Failure to Understand the Labor Market.” When the methodology is so bad that it cannot differentiate between good and bad models, and cannot revise models in the face of conflicting observations, it becomes useless to debate whether any particular model is good or bad. After all, even though Solow thought that DSGE models were developed for Mars (see Solow’s testimony), and that Lucas and Sargent were akin to madmen who believed themselves to be Napoleon Bonaparte, DSGE models continue to be used throughout the world, and Lucas and Sargent are extremely respected names in the profession. This is true despite the fact, noted by Olivier Blanchard, that their models make “assumptions profoundly at odds with what we know about consumers and firms.”

Building good models within the current methodology will serve no purpose; one must change the methodology to one which is CAPABLE of distinguishing between good and bad models, and which is CAPABLE of correcting and revising models when they do not match observational evidence. Such a methodology, radically different from what is currently in use, is available in Polanyi’s methodology. It rejects methodological individualism in favor of giving agency to collective action: groups and communities. It rejects the isolation of economics, arguing that all dimensions of human societies (social, political, economic) interact and cannot be understood in isolation. It also asserts, contrary to mainstream views, that economic theories cannot be understood outside of their historical context, and that history cannot be understood without considering the economic theories formulated to understand it, since policies based on these theories shaped the course of history. To take this “entanglement” into account, we must study the co-evolution of theories and history. I have several posts explaining entanglement; for instance, The Entanglement of the Objective and the Subjective, Hunter-Gatherer Societies, and The Three Methodologies.

The main point I am trying to make here is that our problems with current macro and micro models cannot be resolved at the level at which we are seeking solutions, that is, by criticizing models as being bad, contradicted by data, meaningless, nonsensical, or absurd. Providing better models is useless when there is no methodology (other than Solow’s smell test, which is entirely subjective) to determine whether a newly proposed model is better than the previous one. Solutions can only be found at the META-level, where we consider theories about how theories come to be accepted. This point is made in the article “On the Central Importance of a Meta-Theory for Economics.” The main methodological question we need to focus on is: how do we distinguish between good and bad theories? Given theory A and theory B, how do we decide which one should be used? Economists’ methodology is based on rules which make the emergence of good theories impossible. As Mankiw states in the introduction to one of his texts (and Krugman repeats), good economic models are based on optimization and equilibrium. Yet overwhelming empirical evidence shows that human behavior is driven by heuristics. Studies of dynamic systems show that essential aspects of these systems are determined by what happens out of equilibrium; it is impossible to say what will happen just by calculating the equilibria of the system. So once one is committed to optimization and equilibrium, one has put on a blindfold that makes it impossible to see reality. The methodological principle that a theory is good if and only if it is based on maximization/equilibria is what leads to the construction of theories which are profoundly at odds with observed facts.
Furthermore, theories which are aligned with the facts, like Keynesian involuntary unemployment, are rejected only because economists cannot create models which align Keynes with optimization and equilibrium (though this is just due to the inability of economists to understand complexity, which leads them to default to single-agent models). In fact, Keynes CAN be aligned with optimization/equilibrium; this has been the AGENDA of the New Keynesians, to show that the standard methodology need not reject Keynesian theory. But this is the WRONG agenda. The right agenda requires thinking seriously about methodology: HOW can we find a methodology which allows us to discriminate between good and bad models, and allows us to make PROGRESS over time, as we gradually learn to build better models, overcoming the defects of previous, poorer ones? If we had such a methodology, we would not face a situation in which, according to Romer, there have been three decades of intellectual regress, where models have become worse and hard-won knowledge has been lost.
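The out-of-equilibrium point can be made concrete with a minimal sketch (an editorial illustration, not from the original post). The logistic map is a toy dynamic system with a perfectly well-defined fixed point, yet at parameter r = 4 a trajectory started nearby never settles there: computing the equilibrium tells you nothing about what the system actually does.

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

def orbit(x0, n, r=4.0):
    """Iterate the map n times, returning the full trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1], r))
    return xs

# x* = 0.75 is an exact fixed point of the map at r = 4 ...
print(logistic(0.75))        # stays at 0.75 forever

# ... yet a nearby start wanders chaotically over the whole interval
xs = orbit(0.2, 200)
print(min(xs), max(xs))      # trajectory sweeps close to both 0 and 1
```

Knowing the equilibrium (0.75) would lead one to predict a quiescent system; the actual dynamics are dominated by the out-of-equilibrium behavior, which is the commenter-independent version of the point made above about macro models.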

  1. Prof Dr James Beckman, Germany said:

    Asad, as usual from you, excellent. However, you are asking a discipline to re-invent itself. Methodology, a part of philosophy. History, a part of behavioral analysis. Other disciplines, even beyond the social sciences. Wow! Many of us agree with you, but I think it will be far younger economists who will break free of their doctoral advisors to do the job. One Keynes could not, in my mind more due to outside economic forces putting pressure upon the discipline. I know some names, but the real story of how neo-liberalism came to the fore has not been written. It is similar to how the Koch Brothers have attempted to turn the general orientation of government, I expect. More about money than convincing intellectual arguments.

  2. Anonymous said:

    Only possible way is to consider disequilibrium as a normal reality. Once we allow this, we need to allow its persistence as well. Studying historical dynamics of different disequilibria with conventional macroeconomic dynamics along with underlying macroeconomic foundations would open new debates.

  3. David Harold Chester said:

    Surely the trouble with there being so many different unsuitable models for this work is that these models are not made general enough to cover sufficient situations. The fact that so many seem to be necessary is in itself an unnecessary aspect of the problem. It is also a fault of the claimed purpose of the modeling, which should aim to look at the Big Picture in a seamless way.

    The proposal that I have been making (see SSRN 2865571, “Einstein’s Criterion Applied to Logical Macroeconomics Modeling”) for a general-purpose simplified model which is flexible enough to handle changes to all the relevant variables (actually about 19) is the answer. If we are to continue to claim that certain details are missing when they are not strictly necessary for us to understand how our social system behaves as a whole, then we are going to be stuck with this petty argument forever!

  4. Asad, to point to the right answer it is a huge help when someone asks the right question, as you have done here:

    “The main methodological question we need to focus on is: How do we distinguish between good and bad theories?”

    Humans differ from animals in having language, a theory is a linguistic construct, and Claude Shannon’s “A Mathematical Theory of Communication” is all about how to distinguish good from bad messages. His mathematics merely demonstrate the reasonableness of his conclusions, which are quite simple and well known. We can detect whether we have made a typo by the probability of the resultant word being the right one, given what we have previously written. We can detect whether an error has occurred in transmission by using redundant information capacity to add check digits (e.g. parity bits), which can be encoded to point to the error. The relevant papers were first published by the American telephone company, Bell, in 1948, and so successful and complete was this work that it was republished as a book with a philosophical appraisal by Warren Weaver, retitled “THE Mathematical Theory of Communication”. At the time professional scientists praised it as a basis of Information Science as significant as Newton’s laws of motion in Physical Science, but being so complete and having arisen in a technical context, it seems to have gone below the horizon, conflated with Information Technology and misunderstood by Social Scientists – who are still seeing ‘information’ as the message rather than the measure of capacity (cf. Newton’s force) necessary to correct it. The texts are available online, but the references are:

    Shannon, C E : A Mathematical Theory of Communication, Parts 1 and II, The Bell System Technical Journal, Vol XXVII No 3, July, 1948, pp.378-423; Part III, ibid No 4, October 1948, pp.623-656.

    Shannon, C E and Weaver, W: The Mathematical Theory of Communication, 1949, The University of Illinois Press.
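    The parity-bit idea described above can be sketched in a few lines (an editorial illustration, not part of the comment): an even-parity check digit makes the total count of 1s even, so any single flipped bit makes the check fail.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(word):
    """Return True if the word passes the even-parity check."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])   # three 1s -> parity bit 1
print(word)                        # [1, 0, 1, 1, 1]
print(check_parity(word))          # True: transmission looks clean

corrupted = word[:]
corrupted[2] ^= 1                  # flip one bit "in transmission"
print(check_parity(corrupted))     # False: the single-bit error is detected
```

    A single parity bit only detects an error; Shannon-style codes with more redundancy (as the comment notes) can also locate and correct it.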

    This was ‘micro’ error correction. In the same year Norbert Wiener wrote a book on ‘macro’ error correction, suggestively called “Cybernetics” [meaning ‘steersman’], but misleadingly (for Social Scientists) subtitled “Control and Communication in the Animal and the Machine”. Humans are not only animals but they make control systems, while steering (generalised to navigation) is a method of control in which positional errors due to environmental forces accumulate despite directional errors being continuously controlled. On pp. 58-9 Wiener makes some interesting comments on the inadequacy of the Gibbsian statistical mechanics still used by economists.

    Wiener, N: Cybernetics, or Control and Communication in the Animal and the Machine, 1948, Wiley: The Technology Press.

    In interpreting Keynes’s economics the first point to be made is that he wrote his General Theory in 1936, and this is 1948, by which time he was dead. I first read the GT in 1972, having worked with separate-component electronic speed control systems in 1960-1, and my immediate reaction then was that he had anticipated Wiener. Yes, the economy was being steered by prices, but consistent unemployment indicated it had drifted out of position. A crucial point in navigation is that if one avoids a hazard (e.g. changes course to avoid bankruptcy), that too takes one out of position.

    My own line of argument is that “the invisible hand” is a metaphor for economics being a control system. There are, however, several different types of control system, not least “rail-roading” or free aiming, with or without steering or correction for positional drift and approaching hazards. In Shannon’s digital system his computer could control many such processes at once. There is plenty that needs exploring here, by modelling the economy as if it were each of these in turn. This will enable us to learn from or discount the bad (i.e. inadequate) and recommend those forms of control which, in particular circumstances, are appropriate or “good enough”.
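    The steering point above can be illustrated with a minimal sketch (an editorial illustration with made-up parameters, not part of the comment): even when the controller corrects the heading completely at every step, the cross-track position integrates the disturbances and drifts away, exactly the distinction between directional and positional error.

```python
import random

def steer(steps=200, gain=1.0, noise=0.05, seed=0):
    """Steering control: the heading error is corrected at every step,
    but the cross-track position integrates the disturbances and drifts."""
    rng = random.Random(seed)          # fixed seed: reproducible disturbances
    heading, position = 0.0, 0.0
    for _ in range(steps):
        heading += rng.gauss(0.0, noise)   # environmental push on the heading
        position += heading                # position accumulates heading error
        heading -= gain * heading          # controller corrects heading fully
    return heading, position

heading, drift = steer()
print(heading)   # 0.0 -- directional error fully controlled
print(drift)     # nonzero -- positional error has accumulated anyway
```

    The controller here is perfect in the directional sense, yet the vessel ends up out of position, which is why navigation needs position fixes and not just a steady hand on the tiller.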

  5. Tim Gooding said:

    I agree with everything you say here. Just one note: if you put Kalecki’s basic equation-based model into a system dynamics model, it works. If you put his equation-based growth theory into a system dynamics model, it falls apart (per my own experiments).

    • Tim — my last four emails to you have not received any response — did you get them? asad
