Archive

Author Archives: Asad Zaman

Talk at PIDE Nurturing Minds Seminar on 29th Nov 2017. Based on “Lessons in Econometric Methodology: Axiom of Correct Specification”, International Econometric Review, Vol 9, Issue 2.

Modern econometrics is based on logical positivist foundations and looks for patterns in the data. This nominalist approach is seriously deficient, as I have pointed out in Methodological Mistakes and Econometric Consequences. These methodological defects are reflected in sloppy practices, which produce huge numbers of misleading and deceptive regression results — nonsense or meaningless regressions. The paper and talk below deal with one very simple issue regarding the choice of regressors which is not explained clearly in textbooks and leads to serious mistakes in applied econometrics papers.

BRIEF SUMMARY OF TALK/PAPER:

Conventional econometric methodology, as taught in textbooks, creates serious misunderstandings about applied econometrics. Econometricians try out various models, select one according to different criteria, and then interpret the results. Textbooks do not highlight the fact that these interpretations are valid only if the model is CORRECT. The result is that everyone presents and interprets their models as if the model were correct. This relaxed assumption – that we may treat as correct any model that we put down on paper, subject to minor checks like a high R-squared and significant t-stats – leads to dramatically defective inferences. In particular, ten different authors may present ten different specifications for the same dependent variable, and each may provide an interpretation based on the assumption that his model is correctly specified. What is not realized is that there is only one correct specification, which must include all the determinants as regressors, and also exclude all irrelevant variables (though the latter is less important). This means that out of the millions of regressions based on different possible choices of regressors (with, say, twenty candidate regressors there are already about a million possible subsets), only one is correct, while all the rest are wrong. Thus all ten authors with ten different specifications cannot be right – at most one of them can be. In this particular case, at least 90% of the authors must be wrong. The same applies generally to models published in journals – the vast majority of the competing specifications must be wrong.

Now the question arises as to how much difference this Axiom of Correct Specification makes. If we could get approximately correct results, then perhaps the current relaxed methodology would be good enough as a starting point. Here the talk/paper demonstrates that if one major variable is omitted from the regression model, then anything can happen. Typically, completely meaningless regressors will appear to be significant. For instance, if we regress the consumption of Australia on the GDP of China, we find a very strong regression relationship, with an R-squared above 90%. Does this mean that China’s GDP determines 90% of the variation in Australian consumption? Absolutely not. This is a nonsense regression, also known as a spurious regression. The nonsense regression is caused by the OMISSION of an important variable – namely Australian GDP, which is the primary determinant of Australian consumption. A major assertion of the paper is that the idea that nonsense regressions are caused by INTEGRATED regressors is wrong. This means that the whole theory of integration and co-integration, developed to resolve the problem of nonsense regressions, is searching for solutions in the wrong direction. If we focus on solving the problem of selecting the right regressors – ensuring inclusion of all major determinants – then we can resolve the problem of nonsense or meaningless regressions.
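The point can be reproduced in a few lines of code. Below is a minimal simulation sketch of my own (not taken from the paper), with artificial trending series standing in for the Australian and Chinese data: regressing on the wrong variable alone gives an impressive fit, and the apparent relationship collapses once the true determinant is included.

```python
# Minimal illustrative sketch (artificial data, not the actual series from the paper):
# a "nonsense regression" appears when the true determinant is omitted, and
# disappears once it is included among the regressors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
t = np.arange(T)

aus_gdp   = 100 + 0.5 * t + rng.normal(0, 2, T)   # stand-in for Australian GDP
aus_cons  = 0.7 * aus_gdp + rng.normal(0, 2, T)   # consumption driven by own GDP
china_gdp = 50 + 0.8 * t + rng.normal(0, 2, T)    # unrelated trending series

# Mis-specified regression: the true determinant (Australian GDP) is omitted.
m1 = sm.OLS(aus_cons, sm.add_constant(china_gdp)).fit()
print("R-squared with the true determinant omitted:", round(m1.rsquared, 3))

# Correctly specified regression: include Australian GDP as well.
X = sm.add_constant(np.column_stack([aus_gdp, china_gdp]))
m2 = sm.OLS(aus_cons, X).fit()
print("t-statistic on China GDP once Australian GDP is included:",
      round(m2.tvalues[2], 2))
```

With series like these, the first regression typically shows an R-squared above 0.9 even though China’s GDP plays no role, while in the second regression the coefficient on China’s GDP is typically insignificant; note that neither series needs to be integrated for the nonsense result to appear.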

Next we discuss how we can ensure the inclusion of all major determinants in the regression equation. Several strategies currently in use are discussed and rejected. One of these is Leamer’s strategy of extreme bounds analysis, and some variants of it. These do not work in terms of finding the right regressors. Bayesian strategies are also discussed. These work very well in the context of forecasting, by using a large collection of models which have high probabilities of being right. This works by diversifying risk – instead of betting on any one model to be correct, we look at a large collection. However, it does not work well for identifying the one true model that we are looking for.
The best strategy currently in existence for finding the right regressors is the General-to-Simple modeling strategy of David Hendry. This is the opposite of the standard simple-to-general strategy advocated and used in conventional econometric methodology. There are several complications which make this strategy difficult to apply; it is because of these complications that it was considered and rejected by econometricians. For one thing, if we include a large number of regressors, as GeTS requires, multicollinearities emerge which make all of our estimates extremely imprecise. Hendry’s methodology has resolved these, and many other, difficulties which arise when estimating very large models. This methodology has been implemented in the Autometrics package within the PC-GIVE econometrics software. This is the state of the art in automatic model selection based purely on statistical properties. However, it is well established that human guidance, where the importance of variables is decided by human judgment about real-world causal factors, can substantially improve upon automatic procedures. It is very possible, and happens often in real-world data sets, that a regressor which is statistically inferior, but is known to be relevant from either empirical or theoretical considerations, will outperform a statistically superior regressor which does not make sense from a theoretical perspective. A 70-minute video lecture on YouTube is linked below. PPT slides for the talk, which provide a convenient outline, are available from SlideShare: Choosing the Right Regressors. The paper itself can be downloaded from “Lessons in Econometric Methodology: The Axiom of Correct Specification”.
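To make the direction of search concrete, here is a deliberately naive sketch of the general-to-simple idea discussed above: start from the largest model containing all candidate regressors and drop the weakest one at each step. Hendry’s Autometrics does far more than this (diagnostic testing, multiple search paths, encompassing comparisons), so treat this only as an illustration of the search direction, not as the actual algorithm.

```python
# Naive general-to-simple selection: begin with every candidate regressor and
# repeatedly drop the least significant one until all survivors pass a t-threshold.
# This is only a sketch of the idea; it is NOT Hendry's Autometrics algorithm.
import statsmodels.api as sm

def general_to_simple(y, X, t_threshold=2.0):
    """y: response; X: pandas DataFrame of candidate regressors.
    Returns the list of retained regressor names."""
    kept = list(X.columns)
    while kept:
        fit = sm.OLS(y, sm.add_constant(X[kept])).fit()
        t_abs = fit.tvalues.drop("const").abs()
        weakest = t_abs.idxmin()
        if t_abs[weakest] >= t_threshold:
            break                     # every remaining regressor is significant
        kept.remove(weakest)          # drop the weakest regressor and refit
    return kept
```

In practice the human modeller would still override the purely statistical ranking when a theoretically important determinant is marginal, in line with the point made above about human judgment.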

 

For this post on my personal website: asadzaman.net, see: http://bit.do/azreg

For my lectures & courses on econometrics, see: https://asadzaman.net/econometrics-2/

For a list of my papers on Econometrics, see: Econometrics Publications

 

 


The title is an inversion of De-Finetti’s famous statement that “Probability does not exist”, with which he opens his treatise on probability. My paper, discussed below, shows that the arguments used to establish the existence of subjective probabilities, offered as a substitute for frequentist probabilities, are flawed.

The existence of subjective probability is established via arguments based on coherent choice over lotteries. Such arguments were made by Ramsey, De-Finetti, Savage and others, and rely on variants of the Dutch Book, which show that incoherent choices are irrational – they lead to certain loss of money. So every rational person must make coherent choices over a certain set of specially constructed lotteries. The subjectivist argument then shows that every coherent set of choices corresponds to a subjective probability on the part of the decision maker. Thus we conclude that rational decision makers must have subjective probabilities. This paper shows that coherent choice over lotteries leads to a weaker conclusion than the one desired by subjectivists. If a person is forced to make coherent choices for the sake of consistency in a specially designed environment, that does not “reveal” his beliefs. The decision maker may arbitrarily choose a “belief”, which he may later renounce. To put this in very simple terms, suppose you are offered a choice between the exotic fruits Ackee and Rambutan, neither of which you have tasted. The choice you make will not “reveal” your preference. But preferences are needed to ensure the stability of this choice, which is what allows us to carry it over into other decision-making environments.

The distinction between “making coherent choices which are consistent with quantifiable subjective probabilities” and actually holding beliefs in these subjective probabilities was ignored in the era of the dominance of logical positivism, when the subjective probability theories were formulated. Logical positivism encouraged the replacement of unobservables in scientific theories by their observational equivalents. Thus unobservable beliefs were replaced by observable actions in accordance with these beliefs. The same positivist impulse led to revealed preference theory in economics, where unobservable preferences of the heart were replaced by observable choices over goods. It also led to the creation of behavioral psychology, where unobservable mental states were replaced by observable behaviors.

Later in the twentieth century, logical positivism collapsed when it was discovered that this equivalence could not be sustained: unobservable entities cannot be replaced by observable equivalents. This should have led to a re-thinking and re-formulation of the foundations of subjective probability, but this has not been done. Many successful critiques have been mounted against subjective probability. One of them (uncertainty aversion) is based on the Ellsberg Paradox, which shows that human behavior does not conform to the coherence axioms which lead to the existence of subjective probability. A second line of attack, due to Kyburg and his followers, derives problematic consequences from the assumption of the existence of subjective probabilities. To the best of my knowledge, no one has directly provided a critique of the foundational Dutch Book arguments of Ramsey, De-Finetti, and Savage. My paper entitled “Subjective Probability Does Not Exist” provides such a critique. A one-hour talk on the subject is linked below. The argument in a nutshell is also given below.

The MAIN ARGUMENT in a NUTSHELL:

Magicians often “force a card” on an unsuspecting victim — he thinks he is making a free choice, when in fact the card chosen is one that has been planted. Similarly, subjectivists force you to create a subjective probability for an uncertain event E, even when you avow lack of knowledge of this probability. The trick is done as follows. I introduce two lotteries: L1 pays $100 if event E happens, while lottery L2 pays $100 if E does not happen. Which one will you choose? If you don’t make a choice, you are a sure loser, and this is irrational. If you choose L1, then you reveal a subjective probability P(E) greater than or equal to 50%. If you choose L2, then you reveal a subjective probability P(E) less than or equal to 50%. Either way, you are trapped. Rational choice over lotteries ensures that you have subjective probabilities. There is something very strange about this argument, since I have not even specified what the event E is. How can I have subjective probabilities about an event E when I don’t even know what the event E is? If you can see through the trick, bravo for you! Otherwise, read the paper or watch the video. The number of people this simple sleight-of-hand has taken in is legion. One very important consequence of the widespread acceptance of this argument was the removal of uncertainty from the world. If rationality allows us to assign subjective probabilities to all uncertain events, then we only face situations of RISK (with quantifiable, probabilistic uncertainty) rather than genuine uncertainty, where we have no idea what might happen. Black Swans were removed from the picture.
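The same trap can be written out in symbols (this is just a restatement of the argument above, assuming the decision maker values the lotteries by expected payoff):

```latex
\[
\mathbb{E}[L_1] = 100\,P(E), \qquad \mathbb{E}[L_2] = 100\,\bigl(1 - P(E)\bigr),
\]
\[
\text{choose } L_1 \iff 100\,P(E) \ge 100\,\bigl(1 - P(E)\bigr) \iff P(E) \ge \tfrac{1}{2}.
\]
```

The single observed choice therefore only locates P(E) somewhere in the interval [1/2, 1] (or [0, 1/2]); it is equally consistent with a decision maker who holds no probability at all and simply picks one lottery because he is forced to choose. This is the gap between coherent choice and genuinely held belief that the paper exploits.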

Lecture 5 of Advanced Microeconomics at PIDE. The basis for this lecture is Chapter 4 of the Hill & Myatt Anti-Textbook, on Consumer Theory.

Hill and Myatt cover three criticisms of conventional microeconomic consumer theory.

  1. Economic theory treats preference formation as exogenous. If the production process also creates preferences via advertising, this is not legitimate.
  2. Consumers are supposed to make informed choices leading to increased welfare. However, deceptive advertising often leads consumers to make choices harmful to themselves. The full-information condition assumed by economics is not valid.
  3. Economic theory is based on methodological individualism, and treats all individuals separately. However, many of our preferences are defined within a social context, which cannot be neglected.

Before discussing modern consumer theory, it is useful to provide some context and historical background.

1      Historical Background:

In a deeply insightful remark, Karl Marx said that Capitalism works not just by enslaving laborers to produce wealth for capitalists, but by making them believe in the necessity and justness of their own enslavement. The physical and observable chains tying the exploited are supplemented by the invisible chains of theories which are designed to sustain and justify existing relationships of power. Modern economic consumer theory is an excellent illustration of these remarks.

The bull charges the red flag being waved by the matador, and is killed because he makes a mistake in recognizing the enemy.  A standard strategy of the ultra-rich throughout the ages has been to convince the masses that their real enemy lies elsewhere. Most recently, Samuel Huntington created a red flag when he painted the civilization of Islam as the new enemy, as no nation was formidable enough to be useful as an imaginary foe to scare the public with. Trillions of dollars have since been spent in fighting this enemy, created to distract attention from the real enemy.

The financial deregulation initiated in the Reagan-Thatcher era in the 1980s was supposed to create prosperity. In fact, it has resulted in a sky-rocketing rise in inequality. The gap between the richest and the poorest has become larger than ever witnessed in history. Countless academic articles and books have been written to document, explain and attempt to provide solutions to the dramatic increase in inequality. The American public does not need these sophisticated data and theories; it experiences the fact, documented in The Wall Street Journal, that the quality of jobs and wage earnings are lower today than they were in the 1970s. Growing public awareness is reflected in several movies about inequality. For instance, Elysium depicts a world where the super-rich have abandoned the ruined surface of the planet Earth to the proles, and live in luxury on a satellite.

The fundamental cause of growing inequality is financial liberalisation. Just before the Great Depression of 1929, private banks gambled wildly with depositors’ money, leading to inflated stocks and real estate prices. Following the collapse of 1929, the government put stringent regulations on banking. In particular, the Glass-Steagall Act prohibited banks from speculating in stocks. As a result, there were few bank failures, and widespread prosperity in Europe and the US in the next 50 years. Statistics show that the wealth shares of the bottom 90 per cent increased, while that of the top 0.1 per cent decreased until 1980. To counteract this decline, the wealthy elite staged a counter-revolution in the 1980s, to remove restrictive banking regulations.

As a first step, Reagan deregulated the Savings and Loan (S&L) industry through the Garn-St Germain Act of 1982. He stated that this was the first step in a comprehensive programme of financial deregulation, which would create more jobs, more housing and new growth in the economy. In fact, what happened was a repeat of the Great Depression. The S&L industry took advantage of the deregulation to gamble wildly with depositors’ money, leading to a crisis which cost taxpayers $130 billion. As usual, the bottom 90 per cent paid the costs, while the top 0.1 per cent enjoyed a free ride. What is even more significant is the way this crisis has been written out of the hagiographies of Reagan, and erased from public memory. This forgetfulness was essential to continuing the programme of financial deregulation, which culminated in the repeal of the Glass-Steagall Act in 1999 and the enactment of the Commodity Futures Modernization Act in 2000. Very predictably, the financial industry took advantage of the deregulation to create highly complex mortgage-based financial instruments worth trillions, but with hidden risks. A compliant ratings industry gave these instruments fraudulent AAA ratings in order to sell them to unsuspecting investors. It did not take long for the whole system to crash in the Global Financial Crisis (GFC) of 2008.

Unlike the Great Depression of 1929, the wealthy elite were fully prepared for the GFC 2008. The aftermath was carefully managed to ensure that restrictive regulations would not be enacted. As part of the preparation, small media firms were bought out, creating a heavily concentrated media industry, limiting diversity and dissent. Media control permitted shaping of public opinion to prevent the natural solution to the mortgage crisis being implemented, which would have been to bail out the delinquent mortgagors. Princeton economists Atif Mian and Amir Sufi have shown that this would have been a far more effective and cheaper solution. Instead, a no-questions-asked trillion dollar bailout was given to the financial institutions which had deliberately caused the disaster. Similarly, all attempts at regulation and reform were blocked in Congress. As a single example, the 300-page Dodd-Frank Act was enacted as a replacement for the 30-page Glass-Steagall Act. As noted by experts, any competent lawyer can drive a truck through the many loopholes deliberately created in this complex document. This is in perfect conformity with the finding of political scientists Martin Gilens and Benjamin Page that in the past few decades, on any issue where the public interest conflicts with that of the super-rich, Congress acts in favour of the tiny minority, and against public interest. Nobel Laureate Robert Shiller, who was unique in predicting the GFC 2008, has said recently that we have not learnt our lesson from the crisis, and new stock market bubbles are building up. A new crash may be on the horizon.

While billions sink ever deeper into poverty, new billionaires are being created at an astonishing rate, all over the globe — in India, China, Brazil, Russia, Nigeria, etc. Nations have become irrelevant as billionaires have renounced national allegiances and decided to live in small comfortable enclaves, like the Elysium. They are now prepared to colonise the bottom 90 per cent even in their own countries. The tool of enslavement is no longer armies, but debt — at both the individual and national levels. Students in the US have acquired more than a trillion dollars of debt to pay for degrees, and will slave lifetimes away, working for the wealthy who extended this debt. Similarly, indebted nations lose control of their policies to the IMF. For example, former Nigerian president Olusegun Obasanjo said that “we had borrowed only about $5 billion up to 1985. Since then we have paid $16 billion, but $28 billion still remains in interest on the original debt.”

Like the gigantic and powerful bull, the bottom 90 per cent is wounded by each pass through a financial crisis, which puts it deeper in debt while strengthening the matador of the top 0.1 per cent. Sometimes the bull can surprise the matador with a sudden shift at the last moment. On this thrilling possibility hangs the outcome of the next financial crisis: either the masses achieve freedom from debt slavery, or the top 0.1 per cent succeeds in its bid to buy the planet, and the rest of us, with its wealth.

Karl Marx said that “The advance of capitalist production develops a working class which by education, tradition and habit looks upon the requirements of that mode of production as self-evident natural laws.” Modern economic theory is a tool of central importance in making the laborers and the poor accept their own exploitation as natural and necessary. As explained in greater detail in the next lecture (AM09), Economic Theory argues that distribution of income is

  • FAIR – everyone gets what they deserve, in proportion to what they contribute (the marginal product)
  • NECESSARY – the laws of economics ensure that this is the only distribution which will prevail in equilibrium
  • EFFICIENT – this distribution creates efficient outcomes, and maximal productivity in the economic system.

In fact, as I have argued elsewhere, neoclassical economic theory should be labeled ET1% (the Economic Theory of the Top 1%), because it represents only their interests, and glosses over issues of central importance and concern to the bottom 90%. Nonetheless, the widespread propagation of this theory through university courses, and popular expositions for the general public, is very important in convincing the bottom 90% that the capitalist economic system is the best possible, and that their misfortunes are due to their own bad luck or personal defects.

1      Classical Economic Theory

According to classical economic theory, free markets automatically eliminate unemployment, guaranteeing jobs for everyone at a fair wage, consonant with the productivity of labor. In particular, payoff to labor and to capital is perfectly symmetric – both factors get what they deserve. If government tries to regulate the labor market to create better outcomes – minimum wages, better working conditions, labor unions, etc. — it will actually end up hurting laborers. Economists argue that unemployment is due to minimum wage laws, labor unions, and search costs, and not due to free markets themselves.

2      Credit Creation By Banks

Although this is denied by conventional textbooks, banks create money when they make loans. Thus, the outstanding credit which banks extend is always greater than their cash reserves (which accounts for the name “fractional reserve” banking). Because bank profits are directly linked to the amount of credit they create, banks are incentivized to maximize credit creation, and hence also to maximize the risk of a crisis when depositors panic and ask for money that the bank does not have in its possession. As detailed in “The Web of Debt” by Ellen Brown, financiers created artificial banking crises to scare the public into creating the Federal Reserve Bank in 1914, with the duty of bailing out banks in trouble by extending them loans to cover their shortfalls. The FRB was created to prevent banking crises, but it actually led to the biggest crisis of the 20th century, the Great Depression of 1929 (GD ’29). With the FRB behind them, banks went on a credit-creation spree, unconstrained by fears of potential crises. Credit creation is only possible when people want loans, and banks invented many different mechanisms to encourage people to borrow. They created “the American Dream” to build a consumer society, and instalment sales to sell loans for all sorts of consumer goods. They went further and encouraged people to borrow in order to invest in stocks and land, so that money could be made through speculation. This was the cause of the roaring 1920s, also known as the gilded age, when those with access to finance got very rich very fast.
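A toy balance-sheet sketch (with made-up numbers) illustrates the point that loans create deposits, so that the credit a bank has extended ends up far exceeding the cash it actually holds:

```python
# Toy single-bank balance sheet (illustrative numbers only).
# Making a loan credits the borrower's deposit account: no cash leaves the vault,
# so outstanding credit ends up far larger than cash reserves.
bank = {"reserves": 100, "loans": 0, "deposits": 100}

def make_loan(bank, amount):
    """Granting a loan simultaneously creates a matching deposit."""
    bank["loans"] += amount
    bank["deposits"] += amount

make_loan(bank, 900)
print(bank)   # {'reserves': 100, 'loans': 900, 'deposits': 1000}
# Depositors now hold claims of 1000 against cash reserves of only 100 --
# profitable for the bank, but a panic ("run") cannot be met from the vault.
```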

3      The Great Depression

Like all artificial booms created by speculation, not backed by any real factor, the financial bubble burst in a stock market crash in 1929. The Great Depression was the worst economic crisis in American history, one that profoundly affected every area of American life and left psychic scars that still affect millions of families. With unemployment insurance nonexistent and public relief inadequate, the loss of a job meant economic catastrophe for workers and their families. By 1930, 4.2 million workers, 9 percent of the labor force, were out of work. Unemployment struck families by destroying the traditional role of the male breadwinner.

4      Two Revolutions

After the Great Depression, two revolutions took place: the first was the regulation of the financial industry, and the second was in economic theory. Among the financial regulations, an important one was the Glass-Steagall Act, which prevented banks from speculating in stocks. Banks were also prohibited from competing with each other, and restricted to operating in one state only. The Chicago Plan, to eliminate fractional reserve banking and move to a 100% reserve system, was also proposed and approved by about a hundred and fifty economists of the time, but the financial lobby successfully blocked its passage.

The second revolution, in economic theory, was launched by Keynes. He said that unemployment is not eliminated by free markets. Thus the idea of classical economic theory that supply and demand automatically eliminate unemployment is wrong, and government needs to intervene to achieve full employment. Keynes also punctured the myths about money propagated by ET1%, namely that money is neutral and has no real effects on the economy. This myth – that money is a veil which must be pushed aside in order to look at the workings of the real economy – is very useful to the 1% for hiding the crucial role that money plays in funneling wealth to the rich and in exploiting the poor.

5      Effect of Twin Revolution

Financial regulations constrained the power of big money, and government policies aimed at full employment helped improve the lot of the bottom 90% substantially. The graph below shows how the share of the top 10% dropped drastically from 1940 to 1980 (the start of the Reagan-Thatcher era). In the roaring twenties, the power of finance had led to a rising share of the top 0.1%, creating the gilded age. Incidentally, it is important to note that the concept of GNP per capita systematically prevents us from looking at inequality, because it takes all of the wealth produced in a country and distributes it equally over the whole population. This too is part of ET1%, the systematic deception required to keep the bottom 90% content with its lot.


Chart from The New Yorker: Piketty in Six Charts

After 1929, once the two revolutions took place, the share of the bottom 90% started to rise and that of the top 0.1% started to go down. The top 0.1% were very unhappy with this state of affairs and plotted a counter-revolution. The master strategist, Milton Friedman, said that change can be created during a period of crisis (see The Shock Doctrine). So the 1% prepared their theories and economic plans, and patiently waited for a shock. The oil crisis of the 1970s led to stagflation, which created the opportunity for Chicago school free-market economists to discredit Keynesian theory. In fact, stagflation was due to cost-push inflation rather than demand-pull inflation, and Keynesian theory can easily be adapted to explain it. However, due to a large number of pre-planned and co-ordinated stratagems on multiple fronts, Chicago School theories of free markets, as well as their policies, became dominant after this crisis (see Ideological Macroeconomics and Increasing Inequality). As the graph shows, the de-regulation of finance and the dis-empowerment of labor led to an increasing wealth share of the rich, and a declining share of the poor.

6      Consequences of Counter-Revolution

Due to the counter-revolution of the 1970s and 80s, the distribution of wealth changed entirely. The top 20% of the USA now holds about 90% of total wealth, the second 20% holds 9.4%, and the third 20% holds only 2.6%, while the bottom 40% holds -0.9% of wealth, which means they are actually in debt, with negative wealth. This is the wealth distribution that currently exists in the USA after the counter-revolution by the free-market propagators.

Gradually, the effects of BOTH revolutions were reversed. The quantity theory of money was re-implanted after its rejection and refutation by Keynes. The standard theory of labor currently being taught does not recognize the possibility of involuntary unemployment that Keynes introduced (see The Keynesian Revolution and the Monetarist Counter-Revolution). Also, the Glass-Steagall Act was repealed in 1999, and the Commodity Futures Modernization Act was passed in 2000. This gave an enormous amount of power to the financial lobby, creating unregulated arenas for their activities and leading to the emergence of a vast “shadow” banking industry. The consequences were exactly the same as before – a spectacular crash less than a decade after the repeal of Glass-Steagall: the Global Financial Crisis of 2008.

So today we have come around full circle, and stand exactly where we did a century ago, prior to GD ’29, with pre-Keynesian economic theories about money and labor markets, and pre-Keynesian unregulated financial markets. However, there are some important differences. The top 1% is MUCH better prepared this time around. They have blocked all attempts at financial reform in Congress (unlike the aftermath of GD ’29). They have also battened down the hatches to prevent a revolution in economic theory, and are using creative strategies to protect neoclassical theory. Even more worrisome are their efforts to create camps within heterodoxy (like INET, MMT, CORE Micro) which will create justifications for wealth even after rejecting neoclassical economics. Things look much worse for the bottom 90% today.

A 22 minute video covering the ideas expressed above is linked below:

Varian starts his intermediate micro text by stating that maximization and equilibrium are the core principles of micro. Krugman recently stated that he is a “maximization and equilibrium” kind of guy. The goal of this lecture is to show that these two principles fail completely to help us understand behavior in a very simple model of a duopoly.

In the last lecture (AM03), we introduced a simple duopoly model. Two ice-cream vendors buy ice-cream wholesale and can sell it at any chosen price in the park. If they charge matching prices, they split the customers. Under perfect competition assumptions, with full information and zero transaction costs, if they charge different prices, then all customers go to the lower-priced vendor. Straightforward analysis of this duopoly model leads to the following conclusions:

  1. There is a huge amount of genuine uncertainty – probability calculations required for expected utility cannot be made. We cannot know how many people will come to the park on any given day. We cannot forecast the weather conditions, which influence the demand for ice-cream, with any degree of reliability. This means that vendors will adopt rules-of-thumb to make decisions, rather than maximize anything. This leads to the use of evolutionary Agent Based Models as the preferred modeling technique.
  2. The strategic calculations we make in the lecture, required by neoclassical theory, are based on EXTREMELY over-simplifying assumptions. In particular, full information eliminates the uncertainty, which in real life, would make estimation of the demand function extremely difficult. Not knowing the demand function, and not knowing the strategy (for price and quantity) that the other vendor will choose, the first vendor cannot possibly calculate profits as a function of his actions. This means that there is no function to “maximize”. The lecture gets over this hurdle by making extremely unrealistic assumptions, to allow us to calculate the demand, so that we can operate in a neoclassical framework. Both sellers know exactly what the other one is doing and exactly how many customers each will get. Furthermore, we show that these are not good approximations, in that when we relax these assumptions, entirely different results emerge.
  3. The relation between my actions and their consequences is mediated by an uncertain environment (weather, number of people) and by a strategic reaction (what the other vendor will do). Economists use a revealed preference argument (due to Ramsey, De-Finetti, and Savage) that “uncertainty” (horse races) can be reduced to “risk” (dice rolls). This allows them to use subjective expected utility theory to create a target function to maximize under conditions of uncertainty. However, I have shown elsewhere that this reduction is not legitimate: the standard Dutch Book argument used to reduce uncertainty to risk is flawed. This material is not covered in the lecture; see my paper “Subjective Probability Does Not Exist”.
  4. By changing the parameters (the fixed costs, variable costs, demand) we can get NO equilibria, unstable equilibria, or multiple equilibria. Knowledge of the equilibria does not tell us anything about what will happen in the real world. What is all-important for understanding the behavior of dynamic systems is the disequilibrium dynamics: how the vendors behave when out of equilibrium. It is this behavior that determines what will happen – convergence to equilibrium, divergence away from equilibrium, or continuous cycling between multiple equilibria.
  5. In particular, if we relax the assumptions of full information and zero transaction costs, we find multiple equilibria, including some at which the two vendors charge different prices. This violates the law of one price – the two sellers are identical and selling the identical good, but they charge different prices for it. This is based on the reasonable assumption that even if one vendor charges a higher price, not all customers will leave him to go find the cheaper seller. Either customers do not have information, or they incur transaction costs by walking to the next stall. This shows the extreme sensitivity of supposedly central conclusions of economic theory to the virtually impossible assumptions of full information and zero transaction costs.
  6. The typical configuration of costs and profits leads to a Prisoner’s Dilemma in the duopoly – both parties can profit by cooperating, agreeing to charge the high monopoly price. However, the individual incentive is to under-cut the price, which captures the whole market at a smaller margin. If both parties under-cut, then they both end up sharing a smaller profit. This is a “social dilemma”, where the pursuit of selfish individual incentives leads to losses for both players. This is exactly the opposite of the “Invisible Hand”, where the pursuit of selfish motives (supposedly) leads to social benefits. Conventional textbooks mention social dilemmas, but do not point out the conflict with the glorious Invisible Hand, since that would go against the ideological theme of free and unregulated markets creating efficiency. (A toy simulation of this under-cutting dynamic is sketched just after this list.)
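The following toy agent-based sketch (my own illustration with made-up numbers, not code from the lecture) puts points 1 and 6 together: demand is random, neither vendor maximizes anything, and a simple "undercut when profits fall" rule of thumb drives prices down toward cost, destroying the cooperative outcome.

```python
# Toy agent-based duopoly: uncertain daily demand and rule-of-thumb pricing.
# All numbers are illustrative; this is a sketch, not the model from the lecture.
import random

WHOLESALE = 25          # cost per ice-cream (PKR)
DAYS = 200
random.seed(1)

def demand_at(price, visitors):
    """Fewer of the day's visitors buy at higher prices (simple linear cap)."""
    return max(0, int(visitors * (1 - (price - WHOLESALE) / 100)))

p = [60, 60]                      # both vendors start at a high, cooperative price
last_profit = [0, 0]
for day in range(DAYS):
    visitors = random.randint(50, 300)       # genuine uncertainty about demand
    # Full information, zero transaction costs: the cheaper vendor gets everyone.
    buyers = [0, 0]
    if p[0] < p[1]:
        buyers[0] = demand_at(p[0], visitors)
    elif p[1] < p[0]:
        buyers[1] = demand_at(p[1], visitors)
    else:
        buyers = [demand_at(p[0], visitors) // 2] * 2
    profit = [(p[i] - WHOLESALE) * buyers[i] for i in range(2)]
    # Rule of thumb (no maximization): if my profit fell, set my price one rupee
    # below the current lowest price, but never below cost.
    for i in range(2):
        if profit[i] < last_profit[i]:
            p[i] = max(WHOLESALE + 1, min(p) - 1)
    last_profit = profit

print("Prices after", DAYS, "days:", p)      # typically driven down close to cost
```

Neither vendor ever computes an optimum, yet the undercutting rule drags both prices from the cooperative level of 60 down toward the wholesale cost, which is the prisoner's-dilemma outcome described in point 6; relaxing the full-information rule for buyers is what produces the price dispersion and multiple equilibria described in point 5.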

To summarize, even in very simple real-world situations, “maximization” is not possible, because there is genuine uncertainty which cannot be reduced to risk (quantifiable uncertainty). We simply do not know, and cannot calculate, the consequences of our actions, because too many other variables determine the outcome. In addition, as studies of dynamic systems reveal, behavior in such systems is governed by the disequilibrium dynamics, and not by the equilibria. In complex systems, studying the equilibria will not reveal any interesting aspects of the behavior, which shows that economists must study what happens out of equilibrium to understand how the economy will behave. Thus “maximization” and “equilibrium” are not useful tools for studying the behavior of even very simple economic systems. Furthermore, the central teaching – that if everyone is free to maximize, this leads to socially optimal outcomes – is directly violated in Prisoner’s Dilemma situations, where the pursuit of individual profits causes harm to society, and even to the individual selfishly pursuing his own profits. Thus the main rhetorical strategy of conventional textbooks is to HIGHLIGHT the polar extreme cases where the theory of perfect competition holds, which supports their ideological stance. The vast range of cases which deviate, even slightly, from this PERFECTION is completely neglected and ignored, because these cases lead to situations where free markets create bad outcomes. Overcoming market failure requires either government intervention, or utilization of the social dimensions of human behavior – humans know how to cooperate, and to sacrifice individual gains for the welfare of society. Both of these ideas go against the core ideology of conventional textbooks and hence are not pointed out.

Link to Video-Lecture and a detailed 3500 word outline/summary is given below

asadzaman.net/am04-duopoly/

Building on the analysis of supply and demand in Chapter 3 of Hill and Myatt’s Anti-Textbook, this lecture constructs a very simple model of monopoly and duopoly, to show that the policy implications in these cases differ dramatically from what conventional textbooks teach. The higher-level goal is to teach students meta-theoretical thinking. This goes beyond the binary logic which lies behind conventional textbooks, which teach students to think in terms of whether theories are true or false, or at best instrumentally useful – enabling you to formulate policy and welfare questions. In meta-theory, we try to step back and ask who created this theory, in what historical context, which groups it helped and which it hurt, and what the effects will be upon us and upon the world if we decide to affirm these theories for use in our personal lives and to shape our societies.
The Hill and Myatt Anti-Text is ideally suited to this goal, since it is directly a meta-analysis of the message contained in conventional textbooks, and brings out the implications hidden beneath the surface of the analysis. In particular, the Anti-Text helps us to understand the rhetorical strategy used by conventional textbooks to convince students of theories which are overwhelmingly contradicted by empirical evidence.
BTW, it is worth pausing here to admire the efficiency with which economists succeed in creating such deep brainwashing that mountains of empirical evidence fail to move the faith of the true believers. For Real-World economists, it is very important to study these rhetorical strategies, as exposing this framework is an important component of the De-Programming techniques which are required to reverse the effects of this brainwashing.
Getting back to AM03, the two central META-Questions that we focus on are the following:
1. What are the rhetorical strategies used by conventional textbooks to create the dramatically false and misleading belief that Supply and Demand framework is universally applicable in terms of understanding how markets work?
2. WHY do textbooks want students to believe in Supply and Demand, when this theory is easily proven false?
The methodology used in the lecture is to create a VERY SIMPLE model of a monopoly, one which can be easily understood directly and intuitively by students. We talk about an ice-cream seller in a public park who is the sole vendor of ice-cream. He purchases his ice-cream wholesale at PKR 25, and can sell it at various prices, but demand falls if he charges higher prices. The rhetorical point in using such models is that conventional textbooks, of necessity, work in IMAGINARY worlds, which cannot be understood intuitively – this is because the theories that they are trying to sell to the students are patently false in any real-world context and scenario. That is why we build on our strengths by using REAL-WORLD examples to oppose the IMAGINARY scenarios of conventional textbooks. Setting up and examining this example leads easily to the following key points, which we try to convey to the students in this lecture:
1. The main issue facing the monopolist is the large amount of uncertainty and random fluctuation in the demand for ice-cream. The idea of MAXIMIZATION, which requires a KNOWN demand function, is simply not available to the ice-cream vendor. How can he possibly know or learn the demand function, which depends on the vagaries of the weather and of the people in the park? INSTEAD, rule- and heuristic-based behavior is the only realistic possibility; this is exactly what Agent Based Models implement. (Striking a blow against the MAXIMIZATION idea.)
2. Nonetheless, we play along with the conventional textbooks and pretend that somehow the demand function is fixed and known. Then it is clear that the monopolist has the power to set prices, and hence does not have a SUPPLY function. Furthermore, the profit-maximizing equilibrium does not maximize social welfare. It is clear that REGULATING prices – setting a ceiling on the price he can charge – will actually improve social welfare, making more ice-cream available to a larger number of customers at cheaper prices, while still allowing fair profits to the monopolist. (A toy numerical sketch of this point follows below.)
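A toy numerical version of this point, under an assumed linear demand curve (only the PKR 25 wholesale cost comes from the example above; the demand numbers are made up for illustration):

```python
# Toy monopoly calculation: assumed linear demand, PKR 25 wholesale cost.
WHOLESALE = 25

def demand(price):
    """Assumed demand: 200 cones sell at cost, none at PKR 125."""
    return max(0, 200 - 2 * (price - WHOLESALE))

def profit(price):
    return (price - WHOLESALE) * demand(price)

# Unregulated monopolist: pick the profit-maximizing price by grid search.
monopoly_price = max(range(WHOLESALE, 126), key=profit)
print("Monopoly price:", monopoly_price,
      "cones sold:", demand(monopoly_price), "profit:", profit(monopoly_price))

# Regulated price ceiling below the monopoly price: more cones are sold at a
# lower price, and the vendor still earns a positive profit.
ceiling = 50
print("Ceiling price:", ceiling,
      "cones sold:", demand(ceiling), "profit:", profit(ceiling))
```

With these illustrative numbers the unregulated price comes out at PKR 75 with 100 cones sold, while a ceiling of PKR 50 sells 150 cones and still leaves the vendor a substantial profit – the opposite of the textbook claim that price ceilings cause shortages.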
The above lessons emerge in an example that is easily understood directly and intuitively by students. Next we focus on the rhetorical aspects.
1. Given that S&D does not hold in monopolies, and more generally in any market where firms set prices, why is the model given such prominence in textbooks? Before proceeding to apply S&D analysis, should we not first ask whether or not firms have market power, so that we can be sure that there is a supply curve to analyze?
2. The policy implication that setting a price ceiling below the observed market equilibrium will lead to shortages is based on S&D; exactly the opposite is true in monopolistic markets. Before dogmatically opposing price ceilings, should we not try to find out the extent to which perfect competition applies in these markets?
Hill and Myatt also examine the rhetorical strategy used by textbooks to convince students of the universal applicability of Supply and Demand. The S&D model is introduced very early, and used throughout the textbook. Problems and exceptions are mentioned very late in the textbook, and the qualifications required to use S&D are never spelled out. Thus students get the impression that S&D is universally applicable, even though the model works only under conditions of perfect competition. The next lecture, AM04, examines a duopoly model in detail and shows that the assumptions of full information and zero transaction costs are ESSENTIAL to supply and demand – slight violations, with less than full information and more than zero transaction costs, lead to a complete breakdown of supply and demand. Thus a very fragile special case is presented as the central model for the analysis of markets. WHY? Because this is the ONLY case in which markets work well without regulation. IN ALL OTHER CASES, markets require regulation, and government interventions improve social welfare. Since the GOAL of textbooks is to prove the efficiency of markets and to prove that government interventions are always harmful, they have no choice but to present S&D as the sole model with universal applicability.

For the 90-minute video lecture, together with a 2500-word outline and summary, see:

AM03: Monopoly