
Published in The News on June 5, 2008; in Pakistaniaat: A Journal for Pakistan Studies, Vol. 1, No. 2, 2009; in the Jakarta Post as "From the Rubble of Modernization" on November 11, 2008; and in the Turkish Daily News on November 30, 2007.

 

Pride resulting from global dominance and spectacular scientific and technological developments led Europeans to believe that the West was the most advanced and developed of all societies, and that other societies were primitive and under-developed. As these other societies matured and grew, they would follow the same stages followed by the West, and eventually become like modern Western societies. Early thinkers like Comte described the stages of growth from primitive societies to modern ones in a 'logical' sequence. The enterprise of colonizing the non-European world was painted in bright terms as part of the "White Man's burden" of bringing enlightenment, good government, science, technology and other benefits of Western civilization to the rest of the world. Until the 1960s, modernization theorists like Parsons and Rostow echoed these sentiments, regarding Westernization as a desirable and inevitable process for the rest of the world. The goal of this article is to discuss some of the difficulties which led to substantial reconsideration of these naïve views. Current views (for example, Development as Freedom by Amartya Sen) are much more complex and diverse, and generally more respectful of other ancient civilizations in the world.

The first problem with modernization theories is the deeply racist worldview embedded in them. The Dred Scott decision in the USA declared that blacks were "beings of an inferior order, and altogether unfit to associate with the white race, either in social or political relations, and so far inferior that they had no rights which the white man was bound to respect." Australian aborigines were hunted like animals by the British. Cecil Rhodes declared: "I contend that we are the finest race in the world and that the more of the world we inhabit the better it is for the human race. Just fancy those parts that are at present inhabited by the most despicable specimens of human beings; what an alteration there would be if they were brought under Anglo-Saxon influence …" He became the richest man in the world of his time by fully exploiting those 'despicable specimens of human beings' in the British colonies. Explicit and open racism was largely abandoned, but it is regaining strength and popularity. Additionally, it has morphed into less open but equally poisonous forms known as 'modern racism' or 'symbolic racism'.

A second problem with modernization theories is that it has become abundantly clear that high-sounding moral ideals have served as a cover for very low and despicable purposes. In King Leopold's Ghost, Adam Hochschild documents the extremely cruel, oppressive and exploitative treatment meted out to Africans, which resulted in the deaths of 4 to 8 million people in the Belgian Congo alone. In the name of bringing them the benefits of European civilization, King Leopold's officials used extremely harsh methods to force the locals to collect rubber. To teach the locals Western work ethics, the Belgians took their wives and children hostage and kept them in subhuman conditions until the men fulfilled their quotas. Soldiers would torture, chop off hands, or kill the inhabitants if they faltered in their work. All of these policies were promoted and advertised as Christian charity for the benefit of the natives. Similar policies remain in operation today. According to the testimony of highly placed officials like Paul O'Neill, Alan Greenspan, and Henry Kissinger, the Iraq war was planned for the control of the vast oil resources of Iraq. The White House, however, vehemently denies this view, and claims high motives like the desire to bring democracy to Iraq. While every US soldier killed is counted, no one counts the millions of 'inferior' lives destroyed by the Iraq war. The vast amount of torture, the arbitrary killing of civilians, the destruction of Iraqi infrastructure and entire cities, and the resulting miseries of the populace have surfaced in alternative media, but only occasionally break through to the mainstream media in the USA.

A third problem with modernization theories is that they have failed to deliver results. All across the world, "structural adjustment programs" (SAPs) were designed and implemented by expert economists to improve economic performance. Even proponents at the IMF and World Bank now widely acknowledge that these policies have been failures. Critics, including Nobel Laureate Stiglitz, claim that these SAPs are a major cause of poverty all over the world. Under General Pinochet, the Chilean economy was turned into a laboratory experiment in free market economics by the "Chicago boys." Advice from Nobel Prize-winning economist Milton Friedman, followed strictly for several years, resulted only in lackluster growth and continued high unemployment. Faith in the miracles of the free market led only to disappointment and failure when "shock treatment" was applied to the Russian economy. Pressure by US economists for financial liberalization led directly to the East Asian crisis. Throughout the world, numerous vigorously pursued programs for modernization and development along Western models have led only to chaos, cultural conflicts, and confusion.

The idea that Western models are perfect in all areas, including social, cultural and economic, leads to the dominant role of foreign expert advisors in development. These experts need to know nothing about local conditions, customs, and traditions, because all of these are just obstacles in the path to progress. They come to a country knowing the solutions in advance, and give advice on how to move from existing patterns to Western ones in the shortest possible time. The havoc wreaked by this disregard and ignorance of local issues has been very well documented by Mitchell in Rule of Experts. Studies of successful models of development (post-war Germany, Japan, communist Russia, the East Asian Tigers) show that the strategies used there were often in opposition to those recommended by conventional economics. World Bank economists writing about The East Asian Miracle admit that in most of these economies, the government intervened systematically, through multiple channels, to foster development. Despite these systematic violations of neoclassical prescriptions for development, these countries achieved the highest rates of productivity growth and the fastest development seen at that time in the historical record.

The lessons from studies of successful development strategies are abundantly clear. Each such country developed by disregarding foreign advice and devising its own strategies. Self-reliance, self-confidence, trust, cooperation and methods adapted to local conditions and culture have been crucial to success. Slavish imitation of Western models and an inferiority complex are the biggest obstacles to progress. Cultural conflicts due to modernization, created when one segment of society opts for Western ways while another holds to traditions, have prevented the social harmony and unity necessary for progress.


On a global level, to achieve the 2°C target agreed upon in the Paris Agreement, emissions must decrease by 40-70 percent (relative to 2010) by 2050. According to Bloomberg New Energy Finance, 2017 global green investments exceeded 2016's total of $287.5 billion. With strong government policy support, China has experienced a rapid increase in sustainable investments over the past several years, and it is now the leader in global renewable investment. In addition, as of 2017, giant wind projects were spread across the U.S., Mexico, the U.K., Germany and Australia.

Considering the global market at the beginning of the 21st century, sustainable or green investments have gone through three stages—Envirotech, Cleantech and Sustaintech (2017 White Paper, Tsing Capital Strategy & Research Center).

First, the envirotech stage has been driven by environmental technology in addition to government policy and regulations. Envirotech investments have aimed to address traditional environmental issues, such as solid waste treatment, water treatment and renewable energy. In terms of business models, the envirotech business has been characterized by capital-intensive investments reliant on scaling up for competitive advantage.

Following the envirotech stage, cleantech investments describe green investments driven by technological innovation and cost reduction. Examples include solar photovoltaics, electric vehicles, LEDs, batteries, semiconductors, and energy efficiency-related investments. The long research and development periods required by cleantech business models have created high technical barriers for competitors.

The latest evolution of green investments has been defined as the sustaintech stage, in which digital and cloud-based technologies are applied to accelerate sustainable investments by removing environmental, energy and resource constraints. Venture capital has successfully funded sustaintech companies over the past several years, such as Opower, Nest, SolarCity and Tesla. Google acquired Nest for $3.2 billion in 2014, and Oracle acquired Opower for $532 million in 2016.

Considering the new business models, sustaintech firms have shifted towards less capital-intensive investments and the proliferation of disruptive technologies. Disruptive technologies such as the Internet of Things, Artificial Intelligence, Augmented Reality/Virtual Reality, Big Data, 3D Printing and Advanced Materials are playing a key role in sustainable development:

  • Internet of Things sensor technology is enhancing sustainability with regard to energy efficiency, water resources and transportation.
  • Artificial Intelligence, satellite imagery and computational methods are being used to improve predictions and enhance agricultural sustainability.
  • Augmented Reality and Virtual Reality (AR/VR) technologies have shown potential to transform business processes in a wide range of industries.
  • Big Data is being used to optimize energy efficiency and to reduce the cost of clean technologies related to solar panels and electric vehicles, for instance.
  • 3D Printing technology can improve resource efficiency in manufacturing and increase the use of green materials.
  • Advanced Materials technology can substitute recyclable materials for non-renewable resources and can also improve the efficiency of power devices.

Indeed, $13.5 trillion in investment in energy efficiency and low-carbon technologies is needed by 2030 to meet the Paris Agreement's targets. Considering the investment landscape, and despite the new investment frontier, we are still far from this target. The case for climate action has never been stronger.

 

 

 

 

 

Talk at PIDE Nurturing Minds Seminar on 29th Nov 2017. Based on “Lessons in Econometric Methodology: Axiom of Correct Specification”, International Econometric Review, Vol 9, Issue 2.

Modern econometrics is based on logical positivist foundations, and looks for patterns in the data. This nominalist approach is seriously deficient, as I have pointed out in Methodological Mistakes and Econometric Consequences. These methodological defects are reflected in sloppy practices, which result in huge numbers of misleading and deceptive regression results: nonsense or meaningless regressions. The paper and talk below deal with one very simple issue regarding the choice of regressors which is not explained clearly in textbooks and leads to serious mistakes in applied econometrics papers.

BRIEF SUMMARY OF TALK/PAPER:

Conventional econometric methodology, as taught in textbooks, creates serious misunderstandings about applied econometrics. Econometricians try out various models, select one according to different criteria, and then interpret the results. The significance of the fact that interpretations are only valid if the model is CORRECT is not highlighted in textbooks. The result is that everyone presents and interprets their models as if the model were correct. This relaxed assumption – that we can assume correct any model that we put down on paper, subject to minor checks like a high R-squared and significant t-stats – leads to dramatically defective inferences. In particular, ten different authors may present ten different specifications for the same dependent variable, and each may provide an interpretation based on the assumption that his model is correctly specified. What is not realized is that there is only one correct specification, which must include all the determinants as regressors, and also exclude all irrelevant variables (though this exclusion is less important). This means that out of the millions of regressions based on different possible choices of regressors, only one is correct, while all the rest are wrong. Thus all ten authors with ten different specifications cannot be right – at most one of them can be. In this case, at least nine of the ten authors must be wrong. The same applies generally to models published in journals – the vast majority of the differing specifications must be wrong.

Now the question arises as to how much difference this Axiom of Correct Specification makes. If we could get approximately correct results, then perhaps the current relaxed methodology would be good enough as a starting point. Here the talk/paper demonstrates that if one major variable is omitted from the regression model, then anything can happen. Typically, completely meaningless regressors will appear to be significant. For instance, if we regress the consumption of Australia on the GDP of China, we find a very strong regression relationship with an R-squared above 90%. Does this mean that China's GDP determines 90% of the variation in Australian consumption? Absolutely not. This is a nonsense regression, also known as a spurious regression. The nonsense regression is caused by the OMISSION of an important variable – namely Australian GDP, which is the primary determinant of Australian consumption. A major and important assertion of the paper is that the idea that nonsense regressions are caused by INTEGRATED regressors is wrong. This means that the whole theory of integration and co-integration, developed to resolve the problem of nonsense regressions, is searching for solutions in the wrong direction. If we focus on solving the problem of selecting the right regressors – ensuring inclusion of all major determinants – then we can resolve the problem of nonsense or meaningless regressions.
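To see the point concretely, here is a minimal simulation sketch in Python (hypothetical data generated with numpy and statsmodels, not the actual Australian and Chinese series): consumption is generated from domestic GDP alone, yet regressing it on an unrelated trending series produces a very high R-squared; once the true determinant is included, the irrelevant regressor generally loses its apparent significance.

```python
# Minimal sketch: a nonsense regression caused by omitting the true determinant.
# All data are simulated; variable names only mimic the example in the text.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=1)
T = 200
domestic_gdp = np.cumsum(rng.normal(1.0, 1.0, T))          # trending "domestic GDP"
foreign_gdp = np.cumsum(rng.normal(1.0, 1.0, T))           # unrelated trending series
consumption = 0.8 * domestic_gdp + rng.normal(0, 1.0, T)   # true model: domestic GDP only

# Misspecified regression: the true determinant (domestic GDP) is omitted.
misspecified = sm.OLS(consumption, sm.add_constant(foreign_gdp)).fit()
print("R-squared with omitted variable:", round(misspecified.rsquared, 2))  # typically very high

# Correctly specified regression: include the true determinant as well.
X = sm.add_constant(np.column_stack([domestic_gdp, foreign_gdp]))
correct = sm.OLS(consumption, X).fit()
print(correct.params)   # the foreign-GDP coefficient should be close to zero
print(correct.pvalues)  # and should generally lose statistical significance
```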

Next we discuss how we can ensure the inclusion of all major determinants in the regression equation. Several strategies currently in use are discussed and rejected. One of these is Leamer's strategy of extreme bounds analysis, along with some variants of it; these do not work in terms of finding the right regressors. Bayesian strategies are also discussed. These work very well in the context of forecasting, by using a large collection of models which have high probabilities of being right. This works by diversifying risk – instead of betting on any one model being correct, we look at a large collection. However, they do not work well for identifying the one true model that we are looking for.
The best strategy currently in existence for finding the right regressors is the General-to-Simple modeling strategy of David Hendry. This is the opposite of the standard simple-to-general strategy advocated and used in conventional econometric methodology. There are several complications which make this strategy difficult to apply, and it is because of these complications that it was considered and rejected by earlier econometricians. For one thing, if we include a large number of regressors, as GeTS requires, multicollinearities emerge which make all of our estimates extremely imprecise. Hendry's methodology has resolved these, and many other difficulties which arise upon estimation of very large models. The methodology has been implemented in the Autometrics package within the PcGive econometrics software. This is the state of the art in automatic model selection based purely on statistical properties. However, it is well established that human guidance, where the importance of variables is decided by human judgment about real-world causal factors, can substantially improve upon automatic procedures. It is very possible, and happens often in real-world data sets, that a regressor which is statistically inferior, but is known to be relevant on empirical or theoretical grounds, will outperform a statistically superior regressor which does not make sense from a theoretical perspective. A 70-minute video lecture on YouTube is linked below. PPT slides for the talk, which provide a convenient outline, are available from SlideShare: Choosing the Right Regressors. The paper itself can be downloaded from "Lessons in Econometric Methodology: The Axiom of Correct Specification".
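For illustration only, the following toy Python function sketches a general-to-simple search by backward elimination of the least significant regressor. It is a drastic simplification under my own assumptions, not Hendry's Autometrics algorithm, which additionally relies on diagnostic testing, multi-path searches and encompassing comparisons.

```python
# Toy general-to-simple (GeTS-style) search: start from all candidate regressors
# and repeatedly drop the least significant one until all survivors are significant.
import statsmodels.api as sm

def general_to_simple(y, X, alpha=0.05):
    """y: target series; X: pandas DataFrame of candidate regressors.
    Returns the list of retained regressor names."""
    retained = list(X.columns)
    while retained:
        fit = sm.OLS(y, sm.add_constant(X[retained])).fit()
        pvalues = fit.pvalues.drop("const")   # p-values of the regressors only
        weakest = pvalues.idxmax()            # least significant regressor
        if pvalues[weakest] <= alpha:         # everything left is significant: stop
            break
        retained.remove(weakest)
    return retained
```

A call like general_to_simple(df["consumption"], df[candidate_columns]), where df and candidate_columns are hypothetical placeholders for your own data, would then return the names of the regressors that survive the elimination.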

 

For this post on my personal website: asadzaman.net, see: http://bit.do/azreg

For my lectures & courses on econometrics, see: https://asadzaman.net/econometrics-2/

For a list of my papers on Econometrics, see: Econometrics Publications

 

 

The title is an inversion of de Finetti's famous statement that "Probability does not exist", with which he opens his treatise on probability. My paper, discussed below, shows that the arguments used to establish the existence of subjective probabilities, offered as a substitute for frequentist probabilities, are flawed.

The existence of subjective probability is established via arguments based on coherent choice over lotteries. Such arguments were made by Ramsey, de Finetti, Savage and others, and rely on variants of the Dutch Book, which show that incoherent choices are irrational – they lead to a certain loss of money. So every rational person must make coherent choices over a certain set of specially constructed lotteries. The subjectivist argument then shows that every coherent set of choices corresponds to a subjective probability on the part of the decision maker. Thus we conclude that rational decision makers must have subjective probabilities. This paper shows that coherent choice over lotteries leads to a weaker conclusion than the one desired by subjectivists. If a person is forced to make coherent choices for the sake of consistency in a specially designed environment, that does not "reveal" his beliefs. The decision maker may arbitrarily choose a "belief", which he may later renounce. To put this in very simple terms, suppose you are offered a choice between the exotic fruits Ackee and Rambutan, neither of which you have tasted. The choice you make will not "reveal" your preference. Yet stable preferences are exactly what is needed to carry this choice over into other decision-making environments.

The distinction between "making coherent choices which are consistent with quantifiable subjective probabilities" and actually holding beliefs in these subjective probabilities was ignored in the era of dominance of logical positivism, when the subjective probability theories were formulated. Logical positivism encouraged the replacement of unobservables in scientific theories by their observational equivalents. Thus unobservable beliefs were replaced by observable actions in accordance with these beliefs. This same positivist impulse led to revealed preference theory in economics, where unobservable preferences of the heart were replaced by observable choices over goods. It also led to the creation of behavioral psychology, where unobservable mental states were replaced by observable behaviors.

Later in the twentieth century, logical positivism collapsed when it was discovered that this equivalence could not be sustained: unobservable entities could not be replaced by observable equivalents. This should have led to a re-thinking and re-formulation of the foundations of subjective probability, but this has not been done. Many successful critiques have been mounted against subjective probability. One of them (uncertainty aversion) is based on the Ellsberg Paradox, which shows that human behavior does not conform to the coherence axioms which lead to the existence of subjective probability. A second line of approach, via Kyburg and his followers, derives flawed consequences from the assumption of the existence of subjective probabilities. To the best of my knowledge, no one has directly provided a critique of the foundational Dutch Book arguments of Ramsey, de Finetti, and Savage. My paper entitled "Subjective Probability Does Not Exist" provides such a critique. A one-hour talk on the subject is linked below. The argument in a nutshell is also given below.

The MAIN ARGUMENT in a NUTSHELL:

Magicians often "force a card" on an unsuspecting victim: he thinks he is making a free choice, when in fact the card chosen is one that has been planted. Similarly, subjectivists force you to create subjective probabilities for an uncertain event E, even when you avow a lack of knowledge of this probability. The trick is done as follows. I introduce two lotteries. L1 pays $100 if event E happens, while lottery L2 pays $100 if E does not happen. Which one will you choose? If you don't make a choice, you are a sure loser, and this is irrational. If you choose L1, then you reveal a subjective probability P(E) greater than or equal to 50%. If you choose L2, then you reveal a subjective probability P(E) less than or equal to 50%. Either way, you are trapped: rational choice over lotteries ensures that you have subjective probabilities. There is something very strange about this argument, since I have not even specified what the event E is. How can I have subjective probabilities about an event E when I don't even know what the event E is? If you can see through the trick, bravo for you! Otherwise, read the paper or watch the video. What is amazing is how many people this simple sleight-of-hand has taken in; the number of people deceived by this defective argument is legion. One very important consequence of the widespread acceptance of this argument was the removal of uncertainty from the world. If rationality allows us to assign subjective probabilities to all uncertain events, then we only face situations of RISK (with quantifiable and probabilistic uncertainty) rather than genuine uncertainty, where we have no idea what might happen. Black Swans were removed from the picture.
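To spell out the inference the subjectivist draws (a minimal reconstruction assuming the standard risk-neutral, expected-payoff comparison of the two lotteries; the paper's own setup may differ in its details), choosing L1 over L2 is read as

\[
100 \cdot P(E) \;\ge\; 100 \cdot \bigl(1 - P(E)\bigr) \quad\Longrightarrow\quad P(E) \ge \tfrac{1}{2},
\]

and choosing L2 yields the reverse inequality. The critique above is precisely that this forced comparison manufactures a number rather than revealing a belief.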

Olivier Blanchard and Lawrence Summers have recently called for reflection on the macroeconomic tools required to manage the outcomes of the 2008 global crisis in their paper Rethinking Stabilization Policy: Back to the Future. The relevant question they address is: should the crisis lead to a rethinking of both macroeconomics and macroeconomic policy similar to what we saw in the 1930s or in the 1970s? In other words, should the crisis lead to a Keynesian approach to macroeconomic policy, or will it reinforce the agenda suggested by mainstream macroeconomics since the 1990s?

Since the 1990s, mainstream macroeconomics has largely converged on a view of economic fluctuations that has become the basic paradigm of research and macroeconomic policy. According to this view, fluctuations result from small, unexplained random shocks to components of demand and supply, with linear propagation mechanisms which do not prevent the economy from returning to the potential output trend. In such a world of regular fluctuations: (1) dynamic stochastic general equilibrium (DSGE) models are used to develop structural interpretations of the observed dynamics, (2) optimal policy is based mainly on monetary feedback rules, such as the interest rate rule, while fiscal policy is avoided as a stabilization tool, (3) the role of finance is often centered on the yield curve, and (4) macroprudential policies are not considered.
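As a concrete illustration of the kind of monetary feedback rule referred to in point (2), a standard Taylor-type interest rate rule (a generic textbook formulation, not necessarily the specific rule discussed by Blanchard and Summers) sets the policy rate as

\[
i_t \;=\; r^{*} + \pi_t + \phi_{\pi}\,(\pi_t - \pi^{*}) + \phi_{y}\,(y_t - \bar{y}_t),
\]

where \(r^{*}\) is the equilibrium real rate, \(\pi^{*}\) the inflation target, \(y_t - \bar{y}_t\) the output gap, and \(\phi_{\pi}, \phi_{y}\) (often set around 0.5) the feedback coefficients.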

As the real world of financial crises does not fit this representation of fluctuations, Blanchard and Summers, following Romer's characterization of the DSGE regular shocks as phlogistons, argue that the image of financial crises should be "more of plate tectonics and earthquakes, than of regular random shocks". And this happens for a number of reasons: (1) financial crises are characterized by non-linearities that amplify shocks (for instance, bank runs), (2) one of the outcomes of financial crises is a long period of depressed output followed by a permanent decrease in potential output relative to trend, as the propagation mechanisms do not converge to the potential output trend, and (3) financial crises are followed by "hysteresis", either through higher unemployment or lower productivity.

Almost ten years after the 2008 crisis, among the "non-linearities" that led to today's deep policy challenges, Blanchard and Summers also highlight:

  • The large negative output gaps in many advanced economies, in addition to low growth, low inflation, low nominal interest rates and falling nominal wages.
  • The interaction between public debt and the banking system, a mechanism known as the "doom loop": higher public debt might lead to public debt restructuring, which might reduce the level of banks' capital and thereby increase concerns about their liquidity and solvency.

Considering the current policy challenges, they suggest avoiding a return to the pre-crisis agenda, and also avoiding the adoption of what they call "more dramatic proposals, from helicopter money, to the nationalization of the financial system". In their view, there is a need to use macro policy tools to reduce risks and stabilize adverse shocks. As a result, they suggest:

  • A more aggressive monetary policy, providing liquidity when needed.
  • A more active use of fiscal policy as a macroeconomic stabilization tool, together with a more relaxed approach to fiscal debt consolidation.
  • More active financial regulation.

It is interesting that Blanchard and Summers mention the importance of Hyman Minsky in warning of the special role of the complexity of finance in contemporary capitalism. However, in defense of their proposal, they should have remembered the Minskyan concern: who will benefit from this policy agenda?

Any policy agenda refers to forms of power: there are tensions between private money, consenting financial practices and national targets that emerge in the context of the neoliberal global governance rules.

Indeed, almost ten years after the 2008 global financial crisis, it is time to rethink the contemporary political, social and economic challenges in a broader and longer perspective. Power, finance and global governance are powerful interrelated issues that shape livelihoods.

Lecture 5 of Advanced Microeconomics at PIDE. The base for this lecture is Hill & Myatt's Anti-Textbook, Chapter 4, on Consumer Theory.

Hill and Myatt cover three criticisms of conventional microeconomic consumer theory.

  1. Economic theory treats preference formation as exogenous. If the production process also creates preferences via advertising, this assumption is not legitimate.
  2. Consumers are supposed to make informed choices that increase their welfare. However, deceptive advertising often leads consumers to make choices harmful to themselves. The full-information condition assumed by economics is not valid.
  3. Economic theory is based on methodological individualism, and treats all individuals separately. However, many of our preferences are defined within a social context, which cannot be neglected.

Before discussing modern consumer theory, it is useful to provide some context.

1. Historical Background

In a deeply insightful remark, Karl Marx said that Capitalism works not just by enslaving laborers to produce wealth for capitalists, but by making them believe in the necessity and justness of their own enslavement. The physical and observable chains tying the exploited are supplemented by the invisible chains of theories which are designed to sustain and justify existing relationships of power. Modern economic consumer theory is an excellent illustration of these remarks.

The bull charges the red flag being waved by the matador, and is killed because he makes a mistake in recognizing the enemy.  A standard strategy of the ultra-rich throughout the ages has been to convince the masses that their real enemy lies elsewhere. Most recently, Samuel Huntington created a red flag when he painted the civilization of Islam as the new enemy, as no nation was formidable enough to be useful as an imaginary foe to scare the public with. Trillions of dollars have since been spent in fighting this enemy, created to distract attention from the real enemy.

The financial deregulation initiated in the Reagan-Thatcher era in the 1980s was supposed to create prosperity. In fact, it has resulted in a sky-rocketing rise in inequality. The gap between the richest and the poorest has become larger than ever witnessed in history. Countless academic articles and books have been written to document, explain and attempt to provide solutions to the dramatic increase in inequality. The American public does not need these sophisticated data and theories; it experiences the fact, documented in The Wall Street Journal, that the quality of jobs and wage earnings are lower today than they were in the 1970s. Growing public awareness is reflected in several movies about inequality. For instance, Elysium depicts a world where the super-rich have abandoned the ruined surface of the planet Earth to the proles, and live in luxury on a satellite.

The fundamental cause of growing inequality is financial liberalisation. Just before the Great Depression of 1929, private banks gambled wildly with depositors' money, leading to inflated stock and real estate prices. Following the collapse of 1929, the government put stringent regulations on banking. In particular, the Glass-Steagall Act prohibited banks from speculating in stocks. As a result, there were few bank failures, and there was widespread prosperity in Europe and the US over the next 50 years. Statistics show that the wealth share of the bottom 90 per cent increased, while that of the top 0.1 per cent decreased, until 1980. To counteract this decline, the wealthy elite staged a counter-revolution in the 1980s to remove restrictive banking regulations.

As a first step, Reagan deregulated the Savings and Loan (S&L) industry through the Garn-St Germain Act of 1982. He stated that this was the first step in a comprehensive programme of financial deregulation which would create more jobs, more housing and new growth in the economy. In fact, what happened was a repeat of the Great Depression. The S&L industry took advantage of the deregulation to gamble wildly with depositors' money, leading to a crisis which cost taxpayers $130 billion. As usual, the bottom 90 per cent paid the costs, while the top 0.1 per cent enjoyed a free ride. What is even more significant is the way this crisis has been written out of the hagiographies of Reagan, and erased from public memory. This forgetfulness was essential to continuing the programme of financial deregulation, which culminated in the repeal of the Glass-Steagall Act and the enactment of the Financial Modernization Act in 2000. Very predictably, the financial industry took advantage of the deregulation to create highly complex mortgage-based financial instruments worth trillions, but with hidden risks. A compliant ratings industry gave these instruments fraudulent AAA ratings in order to sell them to unsuspecting investors. It did not take long for the whole system to crash in the Global Financial Crisis (GFC) of 2008.

Unlike the Great Depression of 1929, the wealthy elite were fully prepared for the GFC of 2008. The aftermath was carefully managed to ensure that restrictive regulations would not be enacted. As part of the preparation, small media firms were bought out, creating a heavily concentrated media industry and limiting diversity and dissent. Media control permitted the shaping of public opinion to prevent the implementation of the natural solution to the mortgage crisis, which would have been to bail out the delinquent mortgagors. Princeton economists Atif Mian and Amir Sufi have shown that this would have been a far more effective and cheaper solution. Instead, a no-questions-asked trillion-dollar bailout was given to the financial institutions which had deliberately caused the disaster. Similarly, all attempts at regulation and reform were blocked in Congress. As a single example, the 300-page Dodd-Frank Act was enacted as a replacement for the 30-page Glass-Steagall Act. As noted by experts, any competent lawyer can drive a truck through the many loopholes deliberately created in this complex document. This is in perfect conformity with the finding of political scientists Martin Gilens and Benjamin Page that, in the past few decades, on any issue where the public interest conflicts with that of the super-rich, Congress acts in favour of the tiny minority and against the public interest. Nobel Laureate Robert Shiller, who was unique in predicting the GFC of 2008, has said recently that we have not learnt our lesson from the crisis, and new stock market bubbles are building up. A new crash may be on the horizon.

While billions sink ever deeper into poverty, new billionaires are being created at an astonishing rate all over the globe — in India, China, Brazil, Russia, Nigeria, etc. Nations have become irrelevant as billionaires have renounced national allegiances and decided to live in small comfortable enclaves, like the Elysium. They are now prepared to colonise the bottom 90 per cent even in their own countries. The tool of enslavement is no longer armies, but debt — both at the individual and national levels. Students in the US have acquired more than a trillion dollars of debt to pay for degrees, and will slave their lifetimes away working for the wealthy who extended this debt. Similarly, indebted nations lose control of their policies to the IMF. For example, former Nigerian president Olusegun Obasanjo said that "we had borrowed only about $5 billion up to 1985. Since then we have paid $16 billion, but $28 billion still remains in interest on the original debt."

Like the matador's passes that wound the gigantic and powerful bull, each financial crisis wounds the bottom 90 per cent by putting them deeper in debt, while strengthening the matador of the top 0.1 per cent. Sometimes the bull can surprise the matador by a sudden shift at the last moment. On this thrilling possibility hangs the outcome of the next financial crisis: either the masses achieve freedom from debt slavery, or the top 0.1 per cent succeeds in its bid to buy the planet, and the rest of us, with its wealth.