Welcome to the WEA Pedagogy Blog

Founded in May 2011
More than 12,000 members worldwide


WTO, crop prices and global hunger

Current food challenges involve issues ranging from land and food access to commodity price volatility, besides national and international regulation. Although the scope and intensity of these challenges vary according to the different economic and social situations of countries, the debate has been global.

Today, once again, these issues raise deep concerns in light of the 2017 WTO ministerial conference that has just closed in Buenos Aires, Argentina. Indeed, the WTO does not seem to have taken effective action on long-standing proposals. Agriculture negotiations remain among the most important and challenging issues. These negotiations began in 2000 as part of the mandated “built-in agenda” agreed at the end of the 1986-1994 Uruguay Round and were then incorporated into the Doha Round launched at the end of 2001.

The process of globalization of capital in agriculture and food production has shaped a global network of institutions that supplies the worldwide food markets. Contract farming and integrated supply chains are deeply transforming the structure of the agriculture and food industries and, as a result, have put the local farm sector under high pressure. Further, the expansion of big investment projects, led by transnational companies and institutional investors, has exposed small farmers to hunger and food insecurity by expelling them from the land where they live. In addition to these challenges, the biotech revolution and the introduction of genetically improved varieties of seeds have fostered structural changes.

While the systemic changes in agriculture and food are linked to financial and trade flows – mainly profit-driven – international organizations and non-governmental organizations have shaped hunger reduction projects. More recently, for example, poverty and hunger reduction targets have been included in the Millennium Development Goals (MDGs) of the United Nations Development Programme (UNDP). In truth, hunger and poverty are correlated issues. They are primarily linked to land access, income distribution, employment and food prices, among other factors.

In this scenario, even with the global financial crisis, international prices for agricultural commodities remained substantially above historical averages. Several factors contributed to these high prices: growth of the world’s population, growth of Chinese GDP and the urbanization of China. As a result, at the end of the 2000s, the FAO predicted the global challenge of “a decade of high food prices” and pointed out the need to increase food production.

Since 2014, global commodity crop prices have come back to pre-food-crisis levels. Indeed, the pre-crisis rise in food prices drew investment into agriculture, mainly in the U.S., Brazil, Argentina, Ukraine, and other exporters of commodity crops such as corn and soybeans. However, according to the Institute for Agriculture and Trade Policy (IATP), American exports of corn, soybeans, wheat and cotton have been priced with significant “dumping margins”.

What seems relevant to recall is that the financialization of crop prices and their volatility are systemic challenges. As a result of these challenges, there has been a global increase not only in the vulnerability of small farmers but also in the number of chronically hungry people – which now amounts to more than 800 million. Considering this background, after a decade of high prices, current low crop prices and dumped crops – without effective WTO proposals and actions – will drive the most vulnerable people even deeper into hunger and poverty.

References

FAO. The future of food and agriculture – Trends and challenges. 2017. Rome.

Institute for Agriculture and Trade Policy. Excessive Speculation in Agriculture Commodities: Selected Writings from 2008–2011. Ben Lilliston and Andrew Ranallo (eds.). IATP, 2011. Available online at: http://www19.iadb.org/intal/intalcdi/PE/2011/08247.pdf. Accessed 29 July 2016.

United Nations. The Millennium Development Goals Report 2012. Available online at: http://www.un.org/millenniumgoals/pdf/MDG%20Report%202012.pdf. Accessed 20 April 2016.

WTO. 2017 Ministerial Conference. Agriculture. https://www.wto.org/english/thewto_e/minist_e/mc11_e/briefing_notes_e/bfagric_e.htm

Choosing the Right Regressors

Talk at PIDE Nurturing Minds Seminar on 29th Nov 2017. Based on “Lessons in Econometric Methodology: Axiom of Correct Specification”, International Econometric Review, Vol 9, Issue 2.

Modern econometrics is based on logical positivist foundations and looks for patterns in the data. This nominalist approach is seriously deficient, as I have pointed out in Methodological Mistakes and Econometric Consequences. These methodological defects are reflected in sloppy practices, which result in huge numbers of misleading and deceptive regression results — nonsense or meaningless regressions. The paper and talk below deal with one very simple issue regarding the choice of regressors which is not explained clearly in textbooks and leads to serious mistakes in applied econometrics papers.

BRIEF SUMMARY OF TALK/PAPER:

Conventional econometric methodology, as taught in textbooks, creates serious misunderstandings about applied econometrics. Econometricians try out various models, select one according to different criteria, and then interpret the results. The fact that interpretations are only valid if the model is CORRECT is not highlighted in textbooks. The result is that everyone presents and interprets their models as if the model were correct. This relaxed assumption – that we can assume correct any model that we put down on paper, subject to minor checks like a high R-squared and significant t-stats – leads to dramatically defective inferences. In particular, ten different authors may present ten different specifications for the same dependent variable, and each may provide an interpretation based on the assumption that his model is correctly specified. What is not realized is that there is only one correct specification, which must include all the determinants as regressors, and also exclude all irrelevant variables (though this is not so important). This means that out of millions of regressions based on different possible choices of regressors, only one is correct, while all the rest are wrong. Thus all ten authors with ten different specifications cannot be right – at most one of them can be. In this particular case, we can see that at least 90% of the authors are wrong. The same applies generally to models published in journals – the vast majority of the different specifications must be wrong.
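To get a feel for the combinatorics behind “millions of regressions”: with k candidate regressors there are 2^k possible subsets of regressors to choose from. The arithmetic below uses an illustrative k = 20, not a figure from the paper:

\[
\#\{\text{possible specifications}\} \;=\; 2^{k}, \qquad 2^{20} = 1{,}048{,}576,
\]

of which, under the Axiom of Correct Specification, at most one is the correct model.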

Now the question arises as to how much difference this Axiom of Correct Specification makes. If we can get approximately correct results, then perhaps the current relaxed methodology is good enough as a starting point. Here the talk/paper demonstrates that if one major variable is omitted from the regression model, then anything can happen. Typically, completely meaningless regressors will appear to be significant. For instance, if we regress the consumption of Australia on the GDP of China, we find a very strong regression relationship with an R-squared above 90%. Does this mean that China’s GDP determines 90% of the variation in Australian consumption? Absolutely not. This is a nonsense regression, also known as a spurious regression. The nonsense regression is caused by the OMISSION of an important variable – namely Australian GDP, which is the primary determinant of Australian consumption. A major and important assertion of the paper is that the idea that nonsense regressions are caused by INTEGRATED regressors is wrong. This means that the whole theory of integration and co-integration, developed to resolve the problem of nonsense regressions, is searching for solutions in the wrong direction. If we focus on solving the problem of selecting the right regressors – ensuring inclusion of all major determinants – then we can resolve the problem of nonsense or meaningless regressions.
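Here is a minimal simulation sketch of this point, assuming nothing beyond standard numpy/statsmodels; the series and numbers are made up for illustration and are not taken from the paper. Two unrelated trending series produce a high R-squared and a large t-statistic when the true determinant is omitted, and the spurious significance largely disappears once it is included.

```python
# Minimal sketch (hypothetical data, not the paper's): an irrelevant trending
# regressor looks highly "significant" when the true determinant is omitted.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60

# True model: consumption depends only on domestic GDP.
gdp_domestic = 100 + np.cumsum(rng.normal(2.0, 1.0, n))   # trending series
gdp_foreign = 50 + np.cumsum(rng.normal(3.0, 1.0, n))     # unrelated trending series
consumption = 10 + 0.8 * gdp_domestic + rng.normal(0, 2.0, n)

# Misspecified regression: omit the true driver, use the irrelevant regressor.
bad = sm.OLS(consumption, sm.add_constant(gdp_foreign)).fit()
print("nonsense R^2:", round(bad.rsquared, 3))          # typically well above 0.9
print("t-stat, irrelevant regressor:", round(bad.tvalues[1], 1))

# Correctly specified regression: include the true determinant as well.
good = sm.OLS(consumption,
              sm.add_constant(np.column_stack([gdp_domestic, gdp_foreign]))).fit()
print("t-stat, irrelevant regressor given GDP:", round(good.tvalues[2], 1))
# usually small/insignificant once the omitted variable is restored
```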

Next we discuss how we can ensure the inclusion of all major determinants in the regression equation. Several strategies currently in use are discussed and rejected. One of these is Leamer’s strategy of extreme bounds analysis, along with some variants of it. These do not work in terms of finding the right regressors. Bayesian strategies are also discussed. These work very well in the context of forecasting, by using a large collection of models which have high probabilities of being right. This works by diversifying risk – instead of betting on any one model being correct, we look at a large collection. However, it does not work well for identifying the one true model that we are looking for.
The best strategy currently in existence for finding the right regressors is the general-to-simple modeling strategy of David Hendry. This is the opposite of the standard simple-to-general strategy advocated and used in conventional econometric methodology. There are several complications which make this strategy difficult to apply, and it is because of these complications that it was considered and rejected by econometricians. For one thing, if we include a large number of regressors, as general-to-simple requires, multicollinearities emerge which make all of our estimates extremely imprecise. Hendry’s methodology has resolved these, and many other, difficulties which arise upon estimation of very large models. The methodology has been implemented in the Autometrics package within the PC-GIVE econometrics software. This is the state of the art in automatic model selection based purely on statistical properties. However, it is well established that human guidance, where the importance of variables is decided by human judgment about real-world causal factors, can substantially improve upon automatic procedures. It is very possible, and happens often in real-world data sets, that a regressor which is statistically inferior, but is known to be relevant on either empirical or theoretical grounds, will outperform a statistically superior regressor which does not make sense from a theoretical perspective. A 70-minute video lecture on YouTube is linked below. PPT slides for the talk, which provide a convenient outline, are available from SlideShare: Choosing the Right Regressors. The paper itself can be downloaded from “Lessons in Econometric Methodology: The Axiom of Correct Specification”.
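For readers who want to see the general-to-simple idea mechanically, here is a toy backward-elimination sketch in Python. It is emphatically not Hendry’s Autometrics (which handles multicollinearity, diagnostics and multiple search paths); the function name, t-threshold and data below are all illustrative assumptions.

```python
# Toy general-to-simple sketch -- NOT Hendry's Autometrics, just the basic idea:
# start from a model with every candidate regressor and repeatedly drop the
# least significant one until all remaining regressors clear a t-threshold.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def gets_backward(y, X, t_crit=2.0):
    """Backward elimination from the general model; X is a pandas DataFrame."""
    keep = list(X.columns)
    while keep:
        fit = sm.OLS(y, sm.add_constant(X[keep])).fit()
        tvals = fit.tvalues.drop("const").abs()
        if tvals.min() >= t_crit:          # every remaining regressor is significant
            return fit
        keep.remove(tvals.idxmin())        # drop the weakest regressor and refit
    return sm.OLS(y, np.ones(len(y))).fit()    # nothing survived: intercept only

# Illustrative data: y truly depends on x1 and x2; x3-x5 are irrelevant noise.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(200, 5)), columns=[f"x{i}" for i in range(1, 6)])
y = 1.0 + 2.0 * X["x1"] - 1.5 * X["x2"] + rng.normal(size=200)
print(gets_backward(y, X).params)   # typically retains const, x1 and x2 only
```

Of course, the point made above still stands: such purely statistical searches are at best a starting point, and judgment about real-world causal factors can improve on them.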

 

For this post on my personal website: asadzaman.net, see: http://bit.do/azreg

For my lectures & courses on econometrics, see: https://asadzaman.net/econometrics-2/

For a list of my papers on Econometrics, see: Econometrics Publications

 

 

Subjective Probability Does Not Exist

The title is an inversion of De-Finetti’s famous statement that “Probability does not exist”, with which he opens his treatise on probability. My paper, discussed below, shows that the arguments used to establish the existence of subjective probabilities, offered as a substitute for frequentist probabilities, are flawed.

The existence of subjective probability is established via arguments based on coherent choice over lotteries. Such arguments were made by Ramsey, De-Finetti, Savage and others, and rely on variants of the Dutch Book, which show that incoherent choices are irrational – they lead to certain loss of money. So every rational person must make coherent choices over a certain set of specially constructed lotteries. The subjectivist argument then shows that every coherent set of choices corresponds to a subjective probability on the part of the decision maker. Thus we conclude that rational decision makers must have subjective probabilities. This paper shows that coherent choice over lotteries leads to a weaker conclusion than the one desired by subjectivists. If a person is forced to make coherent choices for the sake of consistency in certain specially designed environments, that does not “reveal” his beliefs. The decision maker may arbitrarily choose a “belief”, which he may later renounce. To put this in very simple terms, suppose you are offered a choice between the exotic fruits Ackee and Rambutan, neither of which you have tasted. Then the choice you make will not “reveal” your preference. But preferences are needed to ensure the stability of this choice, which would allow us to carry it over into other decision-making environments.

The distinction between “making coherent choices which are consistent with quantifiable subjective probabilities” and actually holding beliefs in these subjective probabilities was ignored in the era of the dominance of logical positivism, when the subjective probability theories were formulated. Logical positivism encouraged the replacement of unobservables in scientific theories by their observational equivalents. Thus unobservable beliefs were replaced by observable actions taken in accordance with these beliefs. This same positivist impulse led to revealed preference theory in economics, where unobservable preferences of the heart were replaced by observable choices over goods. It also led to the creation of behavioral psychology, where unobservable mental states were replaced by observable behaviors.

Later in the twentieth century, logical positivism collapsed when it was discovered that this equivalence could not be sustained: unobservable entities could not be replaced by observable equivalents. This should have led to a re-thinking and re-formulation of the foundations of subjective probability, but this has not been done. Many successful critiques have been mounted against subjective probability. One of them (uncertainty aversion) is based on the Ellsberg Paradox, which shows that human behavior does not conform to the coherence axioms which lead to the existence of subjective probability. A second line of approach, via Kyburg and his followers, derives flawed consequences from the assumption of the existence of subjective probabilities. To the best of my knowledge, no one has directly provided a critique of the foundational Dutch Book arguments of Ramsey, De-Finetti, and Savage. My paper entitled “Subjective Probability Does Not Exist” provides such a critique. A one-hour talk on the subject is linked below. The argument in a nutshell is also given below.

The MAIN ARGUMENT in a NUTSHELL:

Magicians often “force a card” on an unsuspecting victim — he thinks he is making a free choice, when in fact the card chosen is one that has been planted. Similarly, subjectivists force you to create subjective probabilities for an uncertain event E, even when you avow lack of knowledge of this probability. The trick is done as follows. I introduce two lotteries. L1 pays $100 if event E happens, while lottery L2 pays $100 if E does not happen. Which one will you choose? If you don’t make a choice, you are a sure loser, and this is irrational. If you choose L1, then you reveal a subjective probability P(E) greater than or equal to 50%. If you choose L2, then you reveal a subjective probability P(E) less than or equal to 50%. Either way, you are trapped. Rational choice over lotteries ensures that you have subjective probabilities. There is something very strange about this argument, since I have not even specified what the event E is. How can I have subjective probabilities about an event E when I don’t even know what the event E is? If you can see through the trick, bravo for you! Otherwise, read the paper or watch the video. What is amazing is how many people this simple sleight-of-hand has taken in; the number of people who have been deceived by this defective argument is legion. One very important consequence of the widespread acceptance of this argument was the removal of uncertainty from the world. If rationality allows us to assign subjective probabilities to all uncertain events, then we only face situations of RISK (with quantifiable, probabilistic uncertainty) rather than genuine uncertainty, where we have no idea what might happen. Black Swans were removed from the picture.
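For completeness, here is the expected-payoff arithmetic that the subjectivist reading attaches to the choice (my notation; consult the paper for the full argument):

\[
\mathbb{E}[L_1] = 100\,p, \qquad \mathbb{E}[L_2] = 100\,(1-p), \qquad p = P(E),
\]
\[
\text{choosing } L_1 \;\Rightarrow\; 100\,p \ge 100\,(1-p) \;\Rightarrow\; p \ge \tfrac{1}{2},
\qquad
\text{choosing } L_2 \;\Rightarrow\; p \le \tfrac{1}{2}.
\]

The implication only runs from a presumed probability to a choice; the trick lies in reading it backwards, as if the choice established that such a p exists in the chooser’s head.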

Blanchard and Summers: Back to the future?

Olivier Blanchard and Lawrence Summers have recently called for reflection on the macroeconomic tools required to manage the outcomes of the 2008 global crisis in their paper Rethinking Stabilization Policy: Back to the Future. The relevant question they address is: should the crisis lead to a rethinking of both macroeconomics and macroeconomic policy similar to what we saw in the 1930s or in the 1970s? In other words, should the crisis lead to a Keynesian approach to macroeconomic policy, or will it reinforce the agenda suggested by mainstream macroeconomics since the 1990s?

Since the 1990s, mainstream macroeconomics has largely converged on a view of economic fluctuations that has become the basic paradigm of research and macroeconomic policy. According to this view, fluctuations result from small, unexplained random shocks to components of demand and supply, with linear propagation mechanisms that do not prevent the economy from returning to the potential output trend. Considering a world of regular fluctuations: (1) dynamic stochastic general equilibrium (DSGE) models are used to develop structural interpretations of the observed dynamics; (2) optimal policy is mainly based on monetary feedback rules – such as the interest rate rule, a standard example of which is given below – while fiscal policy is avoided as a stabilization tool; (3) the role of finance is often centered on the yield curve; and (4) macroprudential policies are not considered.
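For concreteness, the “interest rate rule” in point (2) is typically a Taylor-type feedback rule of the following form (standard textbook notation, not taken from Blanchard and Summers’ paper):

\[
i_t \;=\; r^{*} + \pi_t + \phi_{\pi}\,(\pi_t - \pi^{*}) + \phi_{y}\,(y_t - \bar{y}_t),
\]

where \(i_t\) is the policy rate, \(\pi_t\) inflation, \(\pi^{*}\) the inflation target, \(y_t - \bar{y}_t\) the output gap, and Taylor’s original calibration sets \(\phi_{\pi} = \phi_{y} = 0.5\).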

As the real world of financial crises does not fit this representation of fluctuations, Blanchard and Summers – following Romer’s characterization of the regular DSGE shocks as phlogistons – argue that the image of financial crises should be “more of plate tectonics and earthquakes, than of regular random shocks”. And this happens for a number of reasons: (1) financial crises are characterized by non-linearities that amplify shocks (for instance, bank runs); (2) one of the outcomes of financial crises is a long period of depressed output, followed by a permanent decrease in potential output relative to trend, as the propagation mechanisms do not converge to the potential output trend; and (3) financial crises are followed by “hysteresis”, either through higher unemployment or lower productivity.

Almost ten years after the 2008 crisis, among the “non-linearities” that have led to the current deep policy challenges, Blanchard and Summers also highlight:

  • The large and negative output gaps in many advanced economies, in addition to low growth, low inflation, low nominal interest rates and falling nominal wages;
  • The interaction between public debt and the banking system, a mechanism known as the “doom loop”: higher public debt might lead to public debt restructuring, which might in turn reduce banks’ capital and thereby increase concerns about their liquidity and solvency.

Considering the current policy challenges, they suggest avoiding a return to the pre-crisis agenda, and also avoiding the adoption of what they call “more dramatic proposals, from helicopter money, to the nationalization of the financial system”. In their view, there is a need to use macro policy tools to reduce risks and stabilize adverse shocks. As a result, they suggest:

  • A more aggressive monetary policy, providing liquidity when needed.
  • A more active use of fiscal policy as a macroeconomic stabilization tool, besides a more relaxed attitude toward fiscal debt consolidation.
  • More active financial regulation.

It is interesting that Blanchard and Summers mention the importance of Hyman Minsky in warning about the special role of the complexity of finance in contemporary capitalism. However, in defense of their proposal, they should have remembered the Minskyan concern: who will benefit from this policy agenda?

Any policy agenda refers to forms of power: there are tensions between private money, consenting financial practices and national targets that emerge in the context of neoliberal global governance rules.

Indeed, almost ten years after the 2008 global financial crisis, it is time to rethink the contemporary political, social and economic challenges in a broader and longer perspective. Power, finance and global governance are powerful, interrelated issues that shape livelihoods.

AM05 Consumer Theory

Lecture 5 of Advanced Microeconomics at PIDE. The basis for this lecture is Hill & Myatt’s Anti-Textbook, Chapter 4 on Consumer Theory.

Hill and Myatt cover three criticisms of conventional microeconomic consumer theory.

  1. Economic theory treats preference formation as exogenous. If the production process also creates preferences via advertising, this is not legitimate.
  2. Consumers are supposed to make informed choices that increase their welfare. However, deceptive advertising often leads consumers to make choices harmful to themselves. The full-information condition assumed by economics is not valid.
  3. Economic theory is based on methodological individualism, and treats all individuals separately. However, many of our preferences are formed within a social context, which cannot be neglected.

Before discussing modern consumer theory, it is useful to provide some context and historical background.

1      Historical Background:

In a deeply insightful remark, Karl Marx said that capitalism works not just by enslaving laborers to produce wealth for capitalists, but by making them believe in the necessity and justness of their own enslavement. The physical and observable chains tying the exploited are supplemented by the invisible chains of theories which are designed to sustain and justify existing relationships of power. Modern consumer theory in economics is an excellent illustration of these remarks.

The Shifting Battleground

The bull charges the red flag being waved by the matador, and is killed because he makes a mistake in recognizing the enemy.  A standard strategy of the ultra-rich throughout the ages has been to convince the masses that their real enemy lies elsewhere. Most recently, Samuel Huntington created a red flag when he painted the civilization of Islam as the new enemy, as no nation was formidable enough to be useful as an imaginary foe to scare the public with. Trillions of dollars have since been spent in fighting this enemy, created to distract attention from the real enemy.

The financial deregulation initiated in the Reagan-Thatcher era in the 1980s was supposed to create prosperity. In fact, it has resulted in a sky-rocketing rise in inequality. The gap between the richest and the poorest has become larger than ever witnessed in history. Countless academic articles and books have been written to document, explain and attempt to provide solutions to the dramatic increase in inequality. The American public does not need these sophisticated data and theories; it experiences the fact, documented in The Wall Street Journal, that job quality and wage earnings are lower today than they were in the 1970s. Growing public awareness is reflected in several movies about inequality. For instance, Elysium depicts a world where the super-rich have abandoned the ruined surface of the planet Earth to the proles, and live in luxury on a satellite.

The fundamental cause of growing inequality is financial liberalisation. Just before the Great Depression of 1929, private banks gambled wildly with depositors’ money, leading to inflated stocks and real estate prices. Following the collapse of 1929, the government put stringent regulations on banking. In particular, the Glass-Steagall Act prohibited banks from speculating in stocks. As a result, there were few bank failures, and widespread prosperity in Europe and the US in the next 50 years. Statistics show that the wealth shares of the bottom 90 per cent increased, while that of the top 0.1 per cent decreased until 1980. To counteract this decline, the wealthy elite staged a counter-revolution in the 1980s, to remove restrictive banking regulations.

As a first step, Reagan deregulated the Savings and Loan (S&L) Industry in the Garn-St Germain Act of 1982. He stated that this was the first step in a comprehensive programme of financial deregulation, which would create more jobs, more housing and new growth in the economy. In fact, what happened was a repeat of the Great Depression. The S&L industry took advantage of the deregulation to gamble wildly with the depositors’ money, leading to a crisis which cost $130 billion to the taxpayers. As usual, the bottom 90 per cent paid the costs, while the top 0.1 per cent enjoyed a free ride. What is even more significant is the way this crisis has been written out of the hagiographies of Reagan, and erased from public memory. This forgetfulness was essential to continue the programme of financial deregulation which culminated with the repeal of the Glass-Steagall Act, and the enactment of the Financial Modernization Act in 2000. Very predictably, the financial industry took advantage of the deregulation to create highly complex mortgage-based financial instruments worth trillions, but with hidden risks. A compliant ratings industry gave these instruments fraudulent AAA rating, in order to sell them to unsuspecting investors. It did not take long for the whole system to crash in the Global Financial Crisis (GFC) of 2008.

Unlike the Great Depression of 1929, the wealthy elite were fully prepared for the GFC 2008. The aftermath was carefully managed to ensure that restrictive regulations would not be enacted. As part of the preparation, small media firms were bought out, creating a heavily concentrated media industry, limiting diversity and dissent. Media control permitted shaping of public opinion to prevent the natural solution to the mortgage crisis being implemented, which would have been to bail out the delinquent mortgagors. Princeton economists Atif Mian and Amir Sufi have shown that this would have been a far more effective and cheaper solution. Instead, a no-questions-asked trillion dollar bailout was given to the financial institutions which had deliberately caused the disaster. Similarly, all attempts at regulation and reform were blocked in Congress. As a single example, the 300-page Dodd-Frank Act was enacted as a replacement for the 30-page Glass-Steagall Act. As noted by experts, any competent lawyer can drive a truck through the many loopholes deliberately created in this complex document. This is in perfect conformity with the finding of political scientists Martin Gilens and Benjamin Page that in the past few decades, on any issue where the public interest conflicts with that of the super-rich, Congress acts in favour of the tiny minority, and against public interest. Nobel Laureate Robert Shiller, who was unique in predicting the GFC 2008, has said recently that we have not learnt our lesson from the crisis, and new stock market bubbles are building up. A new crash may be on the horizon.

While billions sink ever deeper into poverty, new billionaires are being created at an astonishing rate, all over the globe — in India, China, Brazil, Russia, Nigeria, etc. Nations have become irrelevant as billionaires have renounced national allegiances and decided to live in small comfortable enclaves, like the Elysium. They are now prepared to colonise the bottom 90 per cent even in their own countries. The tool of enslavement is no longer armies, but debt — both at the individual and national levels. Students in the US have acquired trillion-plus dollars of debt to pay for degrees, and will slave lifetimes away, working for the wealthy who extended this debt. Similarly, indebted nations lose control of their policies to the IMF. For example, former Nigerian president Olusegun Obasanjo said that “we had borrowed only about $5 billion up to 1985. Since then we have paid $16 billion, but $28 billion still remains in interest on the original debt.”

Like the gigantic and powerful bull, each pass through a financial crisis wounds the bottom 90 per cent by putting them deeper in debt, while strengthening the matador of the top 0.1 per cent. Sometimes, the bull can surprise the matador by a sudden shift at the last moment. On this thrilling possibility hangs the outcome of the next financial crisis: the masses achieve freedom from debt slavery, or the top 0.1 per cent succeeds in its bid to buy the planet, and the rest of us, with its wealth.

Completing the Circle: From GD ’29 to GFC ’07

Karl Marx said that “The advance of capitalist production develops a working class which by education, tradition and habit looks upon the requirements of that mode of production as self-evident natural laws.” Modern economic theory is a tool of central importance in making the laborers and the poor accept their own exploitation as natural and necessary. As explained in greater detail in the next lecture (AM09), Economic Theory argues that distribution of income is

  • FAIR – everyone gets what they deserve, in proportion to what they contribute (the marginal product)
  • NECESSARY – the laws of economics ensure that this is the only distribution which will prevail in equilibrium
  • EFFICIENT – this distribution creates efficient outcomes, and maximal productivity in the economic system.

In fact, as I have argued elsewhere, neoclassical economic theory should be labeled ET1% (Economic Theory of the Top 1%), because it represents only their interests, and glosses over issues of central importance and concern to the bottom 90%. Nonetheless, the widespread propagation of this theory through university courses, and popular expositions for the general public, is very important in convincing the bottom 90% that the capitalist economic system is the best possible, and that their own misfortunes are due to their own bad luck or other defects.

1      Classical Economic Theory

According to classical economic theory, free markets automatically eliminate unemployment, guaranteeing jobs for everyone at a fair wage, consonant with the productivity of labor. In particular, payoff to labor and to capital is perfectly symmetric – both factors get what they deserve. If government tries to regulate the labor market to create better outcomes – minimum wages, better working conditions, labor unions, etc. — it will actually end up hurting laborers. Economists argue that unemployment is due to minimum wage laws, labor unions, and search costs, and not due to free markets themselves.

2      Credit Creation By Banks

Although this is denied by conventional textbooks, banks create money when they make loans. Thus, the outstanding credit which banks extend is always greater than their cash reserves (which accounts for the name “fractional reserve” banking system). Because bank profits are directly linked to the amount of credit they create, banks are incentivized to maximize credit creation, and hence also maximize the risk of a crisis when depositors panic and ask for money that the bank does not have in its possession. As detailed in The Web of Debt by Ellen Brown, financiers created artificial banking crises to scare the public into creating the Federal Reserve Bank in 1914, with the duty of bailing out banks in trouble by extending them loans to cover their shortfalls. The FRB was created to prevent banking crises, but it actually led to the biggest crisis of the 20th century, the Great Depression of 1929 (GD ’29). With the FRB behind them, banks went on a credit creation spree, unconstrained by fears of potential crises. Credit creation is only possible when people want loans, and banks invented many different mechanisms to encourage people to borrow. They created “the American Dream” to build a consumer society, and instalment sales to sell loans for all sorts of consumer goods. They went further and encouraged people to borrow in order to invest in stocks and land, so that money could be made through speculation. This was the cause of the roaring 1920s, also known as the Gilded Age, when those with access to finance got very rich very fast.
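Purely as arithmetic, the deposit-and-relend sketch below (illustrative numbers, and the stylized textbook “multiplier” story rather than a description of how modern banks actually operate) shows how outstanding credit comes to dwarf the cash reserves actually held:

```python
# Stylized fractional-reserve arithmetic with made-up numbers: a 10% reserve
# ratio and a $1,000 initial deposit. Each round the bank keeps reserves and
# lends out the rest, which is re-deposited and re-lent.
reserve_ratio = 0.10
initial_deposit = 1_000.0

total_deposits = 0.0
new_deposit = initial_deposit
for _ in range(100):                      # iterate the deposit -> loan -> re-deposit cycle
    total_deposits += new_deposit
    new_deposit *= (1 - reserve_ratio)    # the fraction kept as reserves is not re-lent

total_reserves = reserve_ratio * total_deposits
print(round(total_deposits))              # ~10,000: approaches initial_deposit / reserve_ratio
print(round(total_deposits - total_reserves))  # ~9,000 of credit backed by ~1,000 of reserves
```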

3      The Great Depression

Like all artificial booms created by speculation, not backed by any real factor, the financial bubble burst in a stock market crash in 1929. The Great Depression was the worst economic crisis in American history, one that profoundly affected every area of American life and left psychic scars that still affect millions of families. With unemployment insurance nonexistent and public relief inadequate, the loss of a job meant economic catastrophe for workers and their families. By 1930, 4.2 million workers, 9 percent of the labor force, were out of work. Unemployment struck families by destroying the traditional role of the male breadwinner.

4      Two Revolutions

After the Great Depression, two revolutions took place: the first was the regulation of the financial industry, and the second was in economic theory. Among the financial regulations, an important one was the Glass-Steagall Act, which prevented banks from speculating in stocks. Banks were also prohibited from competing and restricted to operating in one state only. The Chicago Plan, to eliminate fractional reserve banking and move to a 100% reserve system, was also proposed and approved by a hundred and fifty economists of the time, but the financial lobby successfully blocked its passage.

The second revolution, in economic theory, was launched by Keynes. He said that unemployment is not eliminated by free markets. So the idea of classical economic theory that supply and demand automatically eliminate unemployment is wrong, and the government needs to intervene to achieve full employment. Keynes also punctured myths about money propagated by ET1%, namely that money is neutral and has no real effects on the economy. This myth – that money is a veil you must push aside in order to look at the workings of the real economy – is very useful to the 1% in hiding the crucial role that money plays in funneling wealth to the rich, and in exploiting the poor.

5      Effect of Twin Revolution

Financial regulations constrained the power of big money, and government policies to achieve full employment helped improve the lot of the bottom 90% substantially. The graph below shows how the share of the top 10% dropped drastically from 1940 to 1980 (the start of the Reagan-Thatcher era). In the roaring twenties, the power of finance led to the rising share of the top 0.1%, creating the gilded age. Incidentally, it is important to note that the concept of GNP per capita systematically prevents us from looking into inequality, because it takes all of the wealth produced in a country and distributes it equally among the whole population. This is also part of ET1%, the systematic deception required to keep the bottom 90% content with its lot.


Chart from The New Yorker: Piketty in Six Charts

After 1929, once the two revolutions took place, the share of the bottom 90% started to rise and that of the top 0.1% started to go down. The top 0.1% were very unhappy with this state of affairs and plotted a counter-revolution. The master strategist, Milton Friedman, said that change can be created during a period of crisis (see The Shock Doctrine). So the 1% prepared their theories and economic plans, and patiently waited for a shock. The oil crisis of the 1970s led to stagflation, which created the opportunity for Chicago school free market economists to discredit Keynesian theory. In fact, stagflation was due to cost-push inflation rather than demand-pull inflation, and Keynesian theory can easily be adapted to explain it. However, due to a large number of pre-planned and coordinated stratagems on multiple fronts, Chicago School theories of free markets, as well as its policies, became dominant after this crisis (see Ideological Macroeconomics and Increasing Inequality). As the graph shows, the deregulation of finance and the disempowerment of labor led to an increasing wealth share for the rich, and a declining share for the poor.

6      Consequences of Counter-Revolution

Due to the counter-revolution of the 1970s and ’80s, the distribution of wealth changed entirely. The top 20% in the USA got 90% of total wealth, the second 20% got 9.4%, the third 20% held only 2.6%, while the bottom 40% held -0.9% of wealth, which means they are actually in debt, with negative wealth. This is the wealth distribution that currently exists in the USA after the counter-revolution by free market propagators.

Gradually, the effects of BOTH revolutions were reversed. The quantity theory of money was reinstated after its rejection and refutation by Keynes. The standard theory of labor currently being taught does not recognize the possibility of involuntary unemployment that Keynes introduced (see The Keynesian Revolution and the Monetarist Counter-Revolution). Also, the Glass-Steagall Act was repealed in 1999, and the Commodity Futures Modernization Act was passed in 2000. This gave an enormous amount of power to the financial lobby, creating unregulated arenas for their activities and leading to the emergence of a vast “shadow” banking industry. The consequences were exactly the same as before – a spectacular crash only eight years after the repeal of Glass-Steagall – the Global Financial Crisis of 2007.

So today we have come full circle, and stand exactly where we did a century ago, prior to GD ’29, with pre-Keynesian economic theories about money and labor markets, and pre-Keynesian unregulated financial markets. However, there are some important differences. The top 1% is MUCH better prepared this time around. They have blocked all attempts at financial reform in Congress (unlike the aftermath of GD ’29). They have also battened down the hatches to prevent a revolution in economic theory, and are using creative strategies to protect neoclassical theory. Even more worrisome are their efforts to create camps within heterodoxy (like INET, MMT, CORE Micro) which will create justifications for wealth even after rejecting neoclassical economics. Things look much worse for the bottom 90% today.

A 22 minute video covering the ideas expressed above is linked below: