Ever since the Global Financial Crisis, a growing number of voices have been calling for change in the economics curriculum. However, even people who are sharply critical of the mainstream (Rodrik, Stiglitz, Krugman) suggest only minor and peripheral changes, and do not question the fundamental methodological basis on which neoclassical economics rests. In fact, a radical paradigm shift is required. According to the current nominalist methodology, any model which produces a match to observables is a good model. Economists have lowered the bar further by not requiring even a good match, and often not comparing model results to reality at all; see “Friedman’s Methodology: A Stake through the Heart of Reason.” When the methodology is this deficient, economists are officially permitted to make crazy assumptions, so long as the model produces a match with reality. For example, Paul Romer observes in “The Trouble with Macroeconomics” that macro models attribute fluctuations to imaginary forces (like phlogiston) instead of to agent behavior. Under such a methodology, a good model can only emerge by random accident, just as the theory of evolution holds that life emerged by accident. I have explained how this seriously mistaken methodology came to be adopted, as a result of the wrong side winning the battle of methodologies; see “Method or Madness?”
Keynesian models remain substantially superior to modern RBC and DSGE models because they can explain involuntary unemployment, which is ruled out by assumption in the latter models. They can also explain how money, banking, and debt have significant impacts on the real economy, unlike modern macro models. Nonetheless, Keynesian models were rejected in favor of dramatically inferior models; see the postscript on the linked post “70 Years of Economists’ Failure to Understand the Labor Market.” When the methodology is so bad that it cannot differentiate between good and bad models, and cannot revise models in the face of conflicting observations, then it becomes useless to debate whether any particular model is good or bad. After all, even though Solow thought that DSGE models were developed for Mars (see Solow’s testimony), and that Lucas and Sargent were akin to madmen who believed themselves to be Napoleon Bonaparte, DSGE models continue to be used throughout the world, and Lucas and Sargent remain extremely respected names in the profession. This is true despite the fact, noted by Olivier Blanchard, that these models make “assumptions profoundly at odds with what we know about consumers and firms.”
Building good models within the current methodology will serve no purpose; one must change the methodology to one which is CAPABLE of distinguishing between good and bad models, and which is CAPABLE of correcting and revising models when they do not match observational evidence. Such a methodology, radically different from what is currently in use, is available in Polanyi’s Methodology. It rejects methodological individualism in favor of giving agency to collective actors: groups and communities. It rejects the isolation of economics, arguing that all dimensions of human societies (social, political, economic) interact and cannot be understood separately. It also asserts, contrary to mainstream views, that economic theories cannot be understood outside of their historical context, and that history cannot be understood without considering the economic theories formulated to understand it, since policies based on these theories shaped the course of history. To take this “entanglement” into account, we must study the co-evolution of theories and history. I have several posts explaining entanglement; for instance, The Entanglement of the Objective and the Subjective, Hunter-Gatherer Societies, and The Three Methodologies.
The main point I am trying to make here is that our problems with current macro and micro models cannot be resolved at the level at which we are seeking solutions, that is, by criticizing models as bad, contradicted by data, meaningless, nonsensical, or absurd. Providing better models is useless when there is no methodology (other than Solow’s smell test, which is infinitely subjective) to determine whether a newly proposed model is better than the previous one. Solutions can only be found at the META-level, where we consider theories about how theories come to be accepted. This point is made in the article “On the Central Importance of a Meta-Theory for Economics.” The main methodological question we need to focus on is: how do we distinguish between good and bad theories? Given theory A and theory B, how do we decide which one should be used?

Economists’ methodology is based on rules which make the emergence of good theories impossible. As Mankiw states in the introduction to one of his texts (and Krugman repeats), good economic models are based on optimization and equilibrium. Yet overwhelming empirical evidence shows that human behavior is driven by heuristics. Likewise, studies of dynamic systems show that essential aspects of these systems are determined by what happens out of equilibrium; it is impossible to say what will happen just by calculating the equilibria of the system (the short sketch below gives a standard illustration). So once one is committed to optimization and equilibrium, one has put on a blindfold that makes it impossible to see reality. The methodological principle that a theory is good if and only if it is based on maximization and equilibrium is what leads to the construction of theories which are profoundly at odds with observed facts. Furthermore, theories which are aligned with the facts, like Keynesian involuntary unemployment, are rejected only because economists cannot create models which align Keynes with optimization and equilibrium (a failure which is itself due to economists’ inability to handle complexity, which leads them to default to single-agent models). In fact Keynes CAN be aligned with optimization and equilibrium; this has been the AGENDA of the New Keynesians, to show that the standard methodology need not reject Keynesian theory. But this is the WRONG agenda. The right agenda requires thinking seriously about methodology: HOW can we find a methodology which allows us to discriminate between good and bad models, and allows us to make PROGRESS over time, as we gradually learn to build better models, overcoming the defects of previous, poorer ones? If we had such a methodology, we would not face a situation in which, according to Romer, there have been three decades of intellectual regress, in which models have become worse and hard-won knowledge has been lost.
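To make the out-of-equilibrium point concrete, here is a minimal sketch using the logistic map, a standard textbook example of a dynamical system (the example and parameter values are illustrative choices, not drawn from any of the posts cited above). The map has a perfectly well-defined equilibrium, but at the parameter value chosen that equilibrium is unstable, so actual trajectories never settle there; knowing the equilibrium alone tells us essentially nothing about how the system behaves.

```python
# Illustrative sketch: a system whose equilibrium reveals almost nothing
# about its actual dynamics. The logistic map x -> r * x * (1 - x) is a
# standard textbook example; r = 4.0 puts it in the chaotic regime.

def logistic(x, r):
    """One step of the logistic map."""
    return r * x * (1 - x)

r = 4.0
x_star = 1 - 1 / r  # the interior equilibrium (fixed point), here 0.75

# Confirm that x_star really is an equilibrium: it maps to itself.
assert abs(logistic(x_star, r) - x_star) < 1e-12

# Now follow an actual trajectory that starts almost exactly at the equilibrium.
x = x_star + 1e-6
trajectory = []
for _ in range(50):
    x = logistic(x, r)
    trajectory.append(x)

# The trajectory wanders over the whole unit interval instead of settling at
# 0.75: the system's behavior is determined out of equilibrium.
print("equilibrium:", x_star)
print("last 5 states:", [round(v, 3) for v in trajectory[-5:]])
```

Running this shows the state bouncing around the unit interval even though it started a hair’s breadth from the fixed point. Calculating the equilibrium is easy; it is also, in this case, uninformative about what the system actually does.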