Continued from the previous post on Subjectivity Concealed in Index Numbers. Because modern epistemology rejects values as mere opinions, and only accepts facts as knowledge, values have to be disguised in the shape of facts. What better way to do this than by embodying them in cold, hard, and indisputable numbers? This post discusses how the GDP embodies the values of a market society.


From the sixteenth to the eighteenth century, the values of European societies were radically transformed by a complex combination of forces. Traditional social values, originating from Christianity, can be roughly summarized as follows:

  1. Community: All members are part of a common body, striving together for common goals.
  2. Social Responsibility: All members must take care of each other.
  3. Duties: Duty to society takes precedence over individual rights.

The transition to a market society led to a new creed, described by Tawney (1926) as: “The Industrial Revolution was merely the beginning of a revolution as extreme and radical as ever inflamed the minds of sectarians, but the new creed was utterly materialistic and believed that all human problems could be resolved given an unlimited amount of material commodities.” As Polanyi (1944) has explained, a central characteristic of market society is the creation of three artificial commodities: money, labor, and land. Labor is the stuff of human lives, and it is widely accepted that our lives cannot be bought or sold for money. But market societies require labor markets, where human lives are bought and sold. Similarly, land is our habitat, and the natural relationship is to protect, preserve, enhance, and enrich our environment, known metaphorically as “Mother Earth”. However, market societies convert land and natural resources into saleable commodities. Finally, money is a third commodity, which derives its value from our social consensus as a means of artificially storing market value across time. The creation of these commodities leads to a set of values which are antithetical to traditional values. Operating a labor market, where lives are for sale, requires reducing or removing social responsibility, so that the threat of hunger forces people to work. When human lives are bought and sold, community ties automatically weaken. Market societies create individualism and hedonism, where pleasure is derived from the possession and consumption of material goods, instead of from social relationships. For a brief summary of Polanyi’s arguments, see Summary of the Great Transformation by Polanyi.

GDP as a measure of wealth captures the values of a market society. Conceptually at least, the GDP aims to measure the total market value of the final products which are purchased by consumers. Intermediate goods, purchased by firms as inputs towards the production of final consumer goods, are not counted. This expresses perfectly the values embedded in a market society. Only goods and services which are traded on the marketplace are given value. Services done out of love, or social responsibility, have no value. This excludes the vast majority of what matters in traditional societies – the most valuable things are the ones which are not available for sale. Furthermore, environmental resources – forests, lakes, plants, and animals – have no value until they are traded on the market. The Amazon forest, millions of years in the making and irreproducible at any price, is evaluated in the GDP by the price of the furniture made by cutting down the trees for timber. For more extensive discussion, see Consumer Theory.
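To make the accounting convention concrete, here is a minimal sketch in Python, using entirely invented names and numbers: only market sales of final goods enter the total, intermediate goods are netted out to avoid double counting, and unpaid services and environmental losses never appear at all.

```python
# Hypothetical illustration of the GDP accounting convention.
# All item names and numbers below are invented for illustration only.

final_goods_sales = {
    "furniture made from Amazon timber": 1_000_000,  # counted at its market price
    "luxury briefcases": 20_000,
    "basic food purchased by the poor": 5_000,        # valued only at what the poor can pay
}

intermediate_goods = {
    "raw timber sold to the furniture factory": 300_000,  # excluded: input to a final good
}

non_market_values = {
    "unpaid care work by families": 0,        # not traded, so counted at zero
    "loss of irreplaceable rainforest": 0,    # depletion is never subtracted
}

gdp = sum(final_goods_sales.values())   # intermediates and non-market values never enter
print(f"Measured GDP: {gdp:,}")         # Measured GDP: 1,025,000
```

Everything that is not sold, however valuable, contributes exactly zero to the total.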

Statistics are the eyes of the state. Things are measured when they matter, and what is measured comes to matter. Throughout the world, policy and political decisions are guided by the numbers produced by statistics departments. The GDP is value-laden in terms of the choices it makes about which factors are included, which factors are excluded, and the weights assigned to them. The factors which are included differ in radical ways from those valued by traditional societies:

  1. Wasteful, ostentatious, and luxurious products are included, and valued at their sale prices. Such products, for example alligator-skin briefcases selling for $20,000, would be regarded as having negative value in a traditional society.
  2. Basic needs for the poor – food, housing, health, education – are valued very low, because the poor cannot afford to pay much for these services. All traditional societies would value the provision of these needs very highly, instead of at market value.
  3. Intangibles, such as community, and the social services and support provided by family and friends, are not sold in the market, and hence are excluded from the GDP. Similarly, the skills, experiences, knowledge, and capabilities of human beings, which are not for sale in the market, are valued at zero.
  4. Environment, natural resources, plant and animal species, and all the wonders of the planet that make our lives worth living, are assigned zero weights in the GDP measure.

The damage done by the use of GDP as a measure of wealth is deadly because it is hidden. Modern rhetoric is especially effective because it uses numbers to persuade, without any mention of the values that went into the manufacture of these numbers. The devastating effects of the market values promoted by GDP are only now becoming apparent. We list some of these harmful consequences below:

  1. Loss of Meaning in Lives: When the value of lives is measured in money, making money becomes the goal of life. This is an inherently meaningless activity, as money is only useful as a means to the pursuit of higher goals. The Quran teaches us that human lives are infinitely precious, and cannot be evaluated in monetary terms. This message is aligned with the capabilities approach to development, which aims to enable human beings to develop their unique capabilities, and lead rich and fulfilling lives.
  2. Destruction of community and societies: The fabric of human lives is woven from our social relationships. However, a market society values human lives only for what they can produce and sell on the marketplace. This leads to destruction of communities, and loss of happiness, due to the illusion that happiness can be created by material possessions.
  3. Environmental Collapse: When natural resources are sold, the GDP records an increase, because the cost of depletion and destruction of the environment is not taken into account. As many authors have noted, if we take these costs into account, the enormous growth recorded in the last century would be converted into an enormous loss. This is because the value of what has been produced is very small compared to the value of what has been destroyed in order to produce these goods. The strong drive for making short-term monetary gains by destroying planetary resources has led to the looming climate catastrophe.

If the market values embodied in the GDP were out in the open, and available for discussion, most human beings would disagree with the idea that these represent useful goals to strive for. But because they are concealed behind the rhetoric of objectivity, nearly all nations in the world emphasize the goal of increasing GDP growth, and governments rise and fall according to their ability to achieve growth targets. One of the reasons for this blind obsession with the wrong numbers, which embody anti-social values, is the specialization and fragmentation of knowledge that characterizes our times. Specialization leads to the separation of theory and practice. Statisticians specialize in manipulating the numbers, without knowledge of the real-world origins of these numbers. It is the field specialist who understands the meaning of these numbers and uses the results from the statistical analysis. The statistician is supposed to do an objective analysis based purely on the numbers. Massively wrong analyses and policies result when everyone is doing his small piece of work, and no one has a global perspective.

This continues from the previous post on Lies, Damned Lies, and Statistics.

How to Lie with Statistics – more than 1.5 million copies sold, more than all other statistics textbooks combined. (Online copy)

The vast majority of our life experience is built upon knowledge which cannot be reduced to numbers and facts. Our hopes, dreams, struggles, sacrifices, what we live for, and what we are ready to die for – none of these things can be quantified. However, as we have discussed, the logical positivists said that what cannot be observed by our senses cannot be part of a scientific theory. As a result of this false idea, later disproven by philosophers, the attempt was made to measure everything – numbers were assigned to intelligence, trust, integrity, corruption, preferences, etc. – even though a long-standing tradition, as well as common intuition, tells us that these things are qualitative, and not measurable. Scientific progress was deeply and dramatically influenced by what I have called Lord Kelvin’s Blunder (2019): “When you can measure what you are speaking about, and express it in numbers, you know something about it; when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.” This mindset led to the drive to assign numbers to, and attempt to measure, the unmeasurable. The harms that this has caused have been discussed in Beyond numbers & material rewards and Corruption: Measuring the Unmeasurable.

In this note, we would like to focus on just one aspect of this attempt to measure what cannot be measured. The idea that we can take multiple measures of performance and reduce them to a single number is known as the index number problem. Very few people realize that this is inherently impossible – all such attempts must inevitably involve making subjective decisions regarding how the different measures should be combined. What most often happens in practice is that the subjective decisions hidden in the choice of measures, and the weights associated with them, are presented as objective. Because of this illusion of objectivity created by standard practice, most people are unaware that there are no objective solutions to the index number problem.

Impossibility of Combining Indicators: There is no objective way to combine two or more measures of performance to come up with a single number which measures overall performance.

We will explain and illustrate this with a few examples. We start with a familiar case where scores from two exams must be merged to create a single score for the course grade. Suppose that instructor Orhan has four students who received the following scores on the midterm and final in the course:

Student Midterm Final
Anil 50% 100%
Bera 100% 50%
Javed 80% 80%
Dawood 65% 90%

 

The teacher has interacted with all four students throughout the semester and has a good idea of their capabilities, over and above what the scores on the exams show. Suppose he thinks that Anil is the best among the four, and would like the course grades to reflect this opinion. As long as the weight given to the Final is greater than 60%, Anil will have the highest score. On the other hand, he may know that Bera is a brilliant student who just had a bad day on the Final. Weights of more than 60% for the Midterm would make Bera the top student. Equal weights would make Javed come out on top. A 40% Midterm / 60% Final split brings Dawood level with Anil and Javed at the top. So, depending on his subjective decision, the teacher can choose weights that make Anil, Bera, or Javed the top student, or that bring Dawood up to share the top score. Furthermore, by assigning weights and calculating the scores, the subjective opinion will look like an objective and impartial decision. The teacher could give apparently objective justifications for any choice of weights, by referring to the length, difficulty, and scope of coverage of the exams.
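A minimal sketch of this weight-sensitivity, using the scores in the table above (the three weights tried are arbitrary illustrative choices):

```python
# Weighted course scores for the four hypothetical students above.
scores = {"Anil": (50, 100), "Bera": (100, 50), "Javed": (80, 80), "Dawood": (65, 90)}

def ranking(midterm_weight):
    """Return students ordered by weighted score for a given midterm weight."""
    final_weight = 1 - midterm_weight
    totals = {s: midterm_weight * mt + final_weight * fin for s, (mt, fin) in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for w in (0.3, 0.5, 0.7):   # three equally "reasonable" subjective choices
    print(f"midterm weight {w:.0%}: {ranking(w)}")
# 30% midterm puts Anil first, 50% puts Javed first, 70% puts Bera first.
```

Each of these weightings could be defended on pedagogical grounds, yet each crowns a different student.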

Which one of the four is the best student? What surprises most students is that there is no objective answer to this question. As a mathematical fact, it is impossible to summarize the information contained in two numbers by one number. Two dimensions cannot be reduced to one. Whenever we carry out such a reduction, we lose a large part of the information contained in the two numbers. When there are multiple indicators, we lose even more information. Because there is no objective answer, the choice must be made on subjective grounds. This subjective element was traditionally the province of rhetoric – the persuasive tactics used to argue that the final should carry more weight, or that the midterm should carry more weight, or even that some factor not considered, like attendance, should be taken into account. However, the philosophy of positivism teaches us that subjective judgments and personal opinions are of no value, and only the ‘facts’ should be considered. As a result, the subjective process of assigning weights, and choosing factors, must be concealed under an appearance of objectivity. This is the “hidden rhetoric” of statistics. Unlike pre-positivist rhetoric, this form is deadly because the unsuspecting victim only sees the numbers, and is told that you cannot argue with the facts. He does not even get to see the subjective elements which have gone into the manufacture of these numbers. Before proceeding to our main topic of GDP, we give one more common example of how statistics are used to create a false impression of objectivity, in the context of rankings of universities.

One implication of the impossibility of objectively combining multiple indicators is that it is impossible to objectively rank products which have multiple dimensions of performance. This point is made very clearly, accurately, and forcefully by Gladwell (2011). We consider only one of his examples as an illustration; the interested reader is strongly encouraged to read the original article. We consider popular methods for coming up with a single number to rank universities. This is done by making numerical judgments according to several criteria and then combining them using subjectively chosen weights. As a specific example, suppose that Criterion A is the financial resources available to the university per student, indicated by money spent on faculty salaries, libraries, and other academic infrastructure. Criterion B is the percentage of admitted students who graduate. Criterion C is selectivity: the percentage of applicants who are admitted. Hypothetical numbers for the three criteria are given below.

University   A: Resources per student   B: Graduation rate   C: Admit rate (selectivity)
Chicago      500                        80%                  10%
Stanford     1000                       95%                  5%
Penn State   100                        90%                  50%

 

Which of the three universities is the “best”? Malcolm Gladwell (2011, The Order of Things) says that the question does not make sense, and cannot be answered. The numbers and names used for illustration here are hypothetical, but plausible. Chicago is a private university which charges high fees and admits a fairly selective pool. It encourages fierce competition among students and selects the survivors, leading to a high dropout ratio. It invests substantial financial resources in faculty salaries and institutional overheads, providing high-quality facilities. Stanford is an exclusive, elitist university, where only a few students, the cream of the crop, are admitted. The university is well equipped financially, and invests a huge amount in faculty and academic resources. Because all admitted students are extremely good, nearly all complete their studies. Penn State is a large public-sector university which aims to provide education to the masses. It has an easy enrollment policy, and helps and encourages all students to graduate, resulting in a low dropout rate. It invests less in resources to make education affordable for the masses, and has a high student/faculty ratio for this reason. Each of the universities has a different goal, and when evaluated with respect to its own goals, each of them is the best of the three. By choosing different weights for the different criteria, we can make the combined index favor any one of the three universities as the best. There is no objective way to choose weights. In fact, it could be argued that each of the factors can be considered a virtue or a defect – it can receive a negative or positive score – depending on our subjective point of view. The standard rankings assign a positive weight to financial resources, ranking a university higher if it spends more. However, this factor is negatively correlated with affordability, which may be much more important to students, and would reverse the ranking on this factor. Similarly, there is an argument that we should try to carry along and educate all students, so that high dropout rates are bad. However, we could also argue that rigorous competition leads to selection of the best students, and poor students are eliminated, resulting in the best graduates. Selectivity is good for the students who get in, but bad for the ones excluded by the process. How much weight to give each factor, which factors to consider, and whether a factor counts as a plus or a minus – all of these are subjective decisions.
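The same point can be made computationally. The sketch below (hypothetical, using the numbers in the table above) normalizes each criterion to a 0-1 scale and then combines the criteria with subjectively chosen weights and signs; each choice of weights and signs produces a different “best” university.

```python
# Hypothetical ranking exercise for the three universities above.
# The sign and the weight of each criterion are subjective choices,
# and each choice crowns a different "winner".
data = {  # (resources per student, graduation rate, admit rate)
    "Chicago":    (500, 0.80, 0.10),
    "Stanford":   (1000, 0.95, 0.05),
    "Penn State": (100, 0.90, 0.50),
}

def rank(weights, signs):
    """Combine min-max normalized criteria with subjective weights and signs (+1 good, -1 bad)."""
    cols = list(zip(*data.values()))
    lows, highs = [min(c) for c in cols], [max(c) for c in cols]
    def score(row):
        return sum(w * s * (x - lo) / (hi - lo)
                   for x, w, s, lo, hi in zip(row, weights, signs, lows, highs))
    return max(data, key=lambda u: score(data[u]))

print(rank(weights=(0.5, 0.3, 0.2), signs=(+1, +1, -1)))  # spending, graduation, selectivity prized -> Stanford
print(rank(weights=(0.5, 0.3, 0.2), signs=(-1, +1, +1)))  # affordability and open access prized -> Penn State
print(rank(weights=(0.2, 0.6, 0.2), signs=(+1, -1, -1)))  # high dropout read as rigorous weeding-out -> Chicago
```

Nothing in the data dictates the weights or the signs; they encode the evaluator's values.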

When evaluation is carried out in multiple dimensions, the choice of dimensions, weights attached to them, and whether they count as positive or negative factors are all subjective choices. However, because of the positivist philosophy of knowledge which is the basis of modern statistics, this subjectivity is concealed, so as to create an appearance of objectivity. In the rest of this article, we explore this subjectivity in the context of one of the most important and widespread measures of economic performance, namely the GDP per capita.

To be continued.


From Ancient Greece to the late 19th century, rhetoric played a central role in Western education, training orators, lawyers, counsellors, historians, statesmen, and poets. However, the rise of empiricist and positivist thinking marginalized the role of rhetoric in 20th-century university education. Julie Reuben, in “The Making of the Modern University: Intellectual Transformation and the Marginalization of Morality”, writes about this change as follows:

“In the late nineteenth century intellectuals assumed that truth had spiritual, moral, and cognitive dimensions. By 1930, however, intellectuals had abandoned this broad conception of truth. They embraced, instead, a view of knowledge that drew a sharp distinction between “facts” and “values.” They associated cognitive truth with empirically verified knowledge and maintained that by this standard, moral values could not be validated as “true.” In the nomenclature of the twentieth century, only “science” constituted true knowledge.”

Once the positivist idea that knowledge consisted purely of facts and logic became dominant, persuasion became unnecessary. Anyone who knew the facts and applied logic would automatically come to the same conclusion. “Rhetoric” or persuasion was considered to be a means of deception by positivists – we could persuade people only by misrepresenting the facts or by abuse of logic. The foundations of statistics were constructed on the basis of positivist philosophy in the early twentieth century. Great emphasis was put on facts – represented by the numbers. Rhetoric (and values), represented by how the numbers are to be interpreted, was de-emphasized. This led to a tremendous rise in the importance of numbers, and their use as tools of persuasion. The rhetoric of the 20th Century was based on statistics, and data were used to present the facts, without any apparent subjectivity. As the popular saying goes, “you can’t argue with the numbers”.

By the middle of the 20th century, logical positivism had collapsed spectacularly. The idea that the objective and the subjective can be sharply separated was proven to be wrong. For a recent discussion of this, see Hilary Putnam’s “The Collapse of the Fact/Value Dichotomy”. Unfortunately, these developments in the philosophy of science have not yet reached the domains of data analysis, which continue to be based on positivist foundations. Rejecting positivism requires re-thinking the disciplines related to data analysis from the foundations. In this paper, we consider just one of the foundational concepts of statistics. The question we will explore is: what is the relationship between the numbers we use (the data) and external reality? The standard conception promoted in statistics is that numbers are FACTS. These are objective measures of external reality, which are the same for all observers. About these numbers there can be no dispute, since all people who go out and measure would come up with the same number. In particular, there is no element of subjectivity, and there are no value judgments built into the numbers we use. Our main goal in this paper is to show that this is not true. Most of the numbers we use in statistical analysis are based on hidden value judgments, as well as subjective decisions about the relative importance of different factors. It would be better to express these judgments openly, so that there could be discussion and debate. However, the positivist philosophy prohibits the use of values, so current statistical methodology HIDES these subjective elements. As a result, students of statistics get the impression that statistical methods are entirely objective and data-based. We will show that this is not true, and explain how to uncover the value judgments built into apparently objective forms of data analysis.

It is useful to understand statistics as a modern and deadly form of rhetoric. When values are hidden in numbers, it is hard for the audience to extract, analyze, discuss, and dispute them. This is why it has been correctly noted that “there are lies, damned lies, and statistics”. The most popular statistics text of the 20th century bears the title “How to Lie with Statistics”. In this sequence of posts, we will analyze some aspects of how values are hidden inside apparently objective-looking numbers.

Since the 2008 global financial crisis, the financial regulation scenario has faced new drivers and challenges. Bank transactions via internet and mobile banking have sharply increased. In this digital environment, new technologies – such as advanced analytics and big data, in addition to the use of robotics, artificial intelligence, new forms of encryption, and biometrics – have been enabling changes in the provision of financial products and services. The current wave of financial innovations is increasingly oriented towards friendlier digital channels through apps, in the context of mobile banking strategies that privilege the development of open banking and further interactions with social media.

Indeed, the increasing digitalization of financial transactions is also related to changes in the banks’ competitive environment, where the intense growth of the start-ups called fintechs, especially since 2010, has revealed a new articulation between finance and technology. Such fintechs are companies organized as digital platforms, with business models focused on customer relationships in the areas of payment systems, insurance, financial consultancy and management, as well as virtual currencies. Among other initiatives, we can highlight crypto fintech products such as the consumer app for Bakkt, the Bitcoin futures exchange run by the Intercontinental Exchange.

These new technologies – advanced analytics, blockchain, big data, robotics, artificial intelligence, new forms of encryption, and biometrics – are challenging central banks’ current patterns of policy and regulation.

The transformations provoked by these start-ups in the financial markets have raised a relevant discussion about the impacts of recent technological innovations on the financial regulation agenda – mainly focused on the Basel Accords – and on the future of the world system of currencies. These intense changes are raising new questions for regulators, such as:

  • How to regulate fintechs’ financial-management activities, which collect, process, and hold custody of user information?
  • How to regulate the credit markets, now that start-ups (non-banks) are developing electronic platforms to sell loans?

Taking into account the global changes in the provision of financial products and services, central banks have also closely followed the recent expansion of cryptocurrencies (Tapscott and Tapscott, 2018). Moreover, the World Economic Forum (WEF) launched a project on central bank digital currencies led by Ashley Lannquist. Over a dozen central banks, financial institutions, academics, and other international organizations have been consulted to create a WEF toolkit that includes worksheets, information guides, and analysis projects. Indeed, central banks are already interested in state-backed digital currencies, and some already exist, such as Senegal’s digital CFA franc and the Venezuelan Petro. On the horizon there are also other initiatives, like China’s own digital yuan. Moreover, the Bank of Thailand announced it had completed trials with Hong Kong for a prototype digital currency.

So far, it seems that the competitive landscape of cryptocurrencies already includes central banks. Banks, fintechs, and central banks are issuing different currencies, and this leaves the way open to a market competition process in which different digital currencies would be traded at variable exchange rates. In this respect, it is interesting to remember that Hayek, in his book The Denationalisation of Money (1976), highlighted that, in the context of free competition in the currency market, the marginal cost of producing and issuing a currency would be close to zero, and the nominal rate of interest would also be driven (close) to zero.

The global monetary and financial scenario is getting more complex. Considering the evolution of this free-market monetary regime, some questions should be put to students of economics:

  • What should be the scope of central banks and of other financial regulators, considering the growth of cryptocurrencies and fintechs?
  • What would be the main consequences of free competition and profit maximisation in both monetary and financial markets?
  • Which currencies (digital or not) would survive? Would only those currencies with stable purchasing power survive?
  • Do central banks need this policy-maker toolkit for state-backed digital currencies? Why?

Moreover, what is at stake? In short, central banks turn out to be competitors. The current frontier of financialisation is leaving the way open for a global and comprehensive privatisation of money. Indeed, the conceptualisation of money as a public good is being challenged.

 

 


I quote a passage from Judea Pearl’s The Book of Why, which provides a gentle introduction to the newly developed field (largely his own creation) of causal inference via path diagrams:

In 1950, Alan Turing asked what it would mean for a computer to think like a human. He suggested a practical test, which he called “the imitation game,” but every AI researcher since then has called it the “Turing test.” For all practical purposes, a computer could be called a thinking machine if an ordinary human, communicating with the computer by typewriter, could not tell whether he was talking with a human or a computer. Turing was very confident that this was within the realm of feasibility. “I believe that in about fifty years’ time it will be possible to program computers,” he wrote, “to make them play the imitation game so well that an average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning.”

Turing’s prediction was slightly off. Every year the Loebner Prize competition identifies the most humanlike “chatbot” in the world, with a gold medal and $100,000 offered to any program that succeeds in fooling all four judges into thinking it is human. As of 2015, in twenty-five years of competition, not a single program has fooled all the judges or even half of them.

Note that this is a very POSITIVIST idea – if the surface appearances match, that is all that matters. The hidden and unobservable reality – the structures within the computer – does not matter. This is like other major mistakes in the theory of knowledge made by childless philosophers, who did not experience and observe how children acquire knowledge. Turing came up with this ridiculous test and idea because he was just another childless philosopher. The Book of Why continues the passage above as follows, confirming that Turing was clueless about children:

Turing didn’t just suggest the “imitation game”; he also proposed a strategy to pass it. “Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child’s?” he asked. If you could do that, then you could just teach it the same way you would teach a child, and presto, twenty years later (or less, given a computer’s greater speed), you would have an artificial intelligence. “Presumably the child brain is something like a notebook as one buys it from the stationer’s,” he wrote. “Rather little mechanism, and lots of blank sheets.” He was wrong about that: the child’s brain is rich in mechanisms and prestored templates.

Pearl does not cite a source for the last sentence, that the child’s brain is rich in mechanisms. However, by now a large body of studies of child development shows that children are born with a great deal of knowledge about the world they are coming into. Childless philosophers are prone to such gross mistakes about the nature of human knowledge, and also about how human beings learn about the world we live in. Unfortunately, their failures have had catastrophic real effects on our world – see The Knowledge of Childless Philosophers and Beyond Kant for more discussion of this.

This is the continuation of a sequence of posts on the methodology of economics and econometrics (for previous posts, see: Mistaken Methodologies of Science 1, Models and Realities 2, Thinking about Thinking 3, Errors of Empiricism 4, Three Types of Models 5, Unrealistic Mental Models 6, The WHY of Crazy Models 7, The Knowledge of Childless Philosophers 8, Beyond Kant 9). In this (10th) post, we consider the methodology of econometrics, which is based on Baconian or observational models. That is, econometric models tend to look only at what is available on the surface, as measured by observations, without attempting to discover the underlying reality which generates these observations. This is an over-simplified description, and we will provide some additional details about econometric methodology later.

The methodology of econometrics is rarely discussed in econometrics textbooks. Instead, students are taught how to DO econometrics in an apprentice-like fashion. All textbooks mention the “assumptions of the regression model” in an introductory chapter. In later chapters, they proceed to do regression analysis on data, without any discussion of whether or not these assumptions hold, and what difference it could make to the analysis. The student is taught – by example, not by words – that these assumptions do not matter, and that we can always assume them to be true. In fact, as I learned later, these assumptions are all-important, and they actually drive all of the analysis. By ignoring them, we create the misleading impression that we are learning from the data, when in fact the results of the analysis come out of the hidden assumptions we make about it. Once one realizes this clearly, it becomes EASY to carry out a regression analysis which makes any data produce any result at all, by varying the underlying assumptions about the functional form and the nature of the unobservable errors; a small illustration follows the three ideas below. This issue is discussed and explained in greater detail in my paper and video lecture on Choosing the Right Regressors. The fundamental underlying positivist principles of econometric methodology, which are never discussed and explained, can be summarized as the following three ideas:

Discovery of Scientific Laws: The GOAL of data analysis is to find patterns in the data. By changing functional forms, and adding random errors, the range of patterns that we can find in any finite amount of data is massively expanded from what we can see in scatter plots of the data. Any pattern we can find, subject to rules students are taught, is a candidate for a new scientific law.

Verification of Scientific Laws: By running regressions until we get a good fit, we discover scientific laws. By now it is well established that some of these good fits are “spurious” – they are ‘accidental’ patterns in the observations, with no actual law driving them. So how do we assess a potential scientific law, to see if it is valid? The standard positivist answer is “forecasting”. If a law forecasts well, then it is valid – it is being tested OUTSIDE the range of data on which it was estimated, so if it continues to hold good, this is taken as evidence of validity.

Explanation by Scientific Laws: It is thought that learning scientific laws deepens our understanding of reality. But there is an active quarrel among philosophers as to what it means to have deeper understanding. According to the positivists, the patterns that we see in the observations ARE the understanding. There is no more (deeper) understanding to be had. This has been formalized in the Deductive-Nomological (D-N) model of explanation. To “explain” a particular event is to show that it is a particular case of a general law. If a particular data point fits a regression (law), then it is explained by the regression.
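Before turning to the realist alternative, here is the small illustration promised above of how good fits can reflect assumptions rather than evidence: two series generated with no connection whatsoever (independent random walks) routinely produce a regression with an impressive fit. The sketch is illustrative only, and uses numpy for the least-squares algebra.

```python
# Spurious regression: two independent random walks, no relationship at all,
# yet ordinary least squares typically reports a strong "fit".
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.cumsum(rng.normal(size=n))   # one random walk
y = np.cumsum(rng.normal(size=n))   # an independent random walk

# OLS fit of y on x, computed by hand.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r_squared = 1 - resid.var() / y.var()
print(f"slope = {beta[1]:.2f}, R-squared = {r_squared:.2f}")
# Across repeated draws, the R-squared is frequently large even though
# x has nothing to do with y: a pattern with no law behind it.
```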

All three of these ideas, on which modern econometric methodology is based, are challenged and contradicted by a “Realist Approach” to Econometrics.

Discovery of Unobservable Objects and Effects: The object of data analysis is NOT to find patterns (good fits, high R-squared). Rather, we look for patterns which reveal hidden, unobservable, real world objects and effects which manifest themselves in the patterns that we see. For example, we observe the pattern (opium ==> sleep): opium puts people to sleep. We ask “why” – perhaps it is some chemical contained within opium which has this property – if so, all compounds which contain the same chemical will have this property. We look at chemical constituents of opium to search for possible explanations – this is an example of going beyond the observations to search for deeper hidden causes of the patterns that we observe.

Verification by Experimentation: A large number of observed chemical phenomena could be explained by the hypothesis that chemicals were composed of molecules. Different experiments, designed to discover or confirm properties of these hypothesized objects, produced results conforming to the presumed existence of molecules. Hypotheses about hidden objects, and causal effects, are confirmed (but not proven) by experiments or observations which are designed to highlight the presence or absence of such objects and effects, by screening out elements which would interfere with detection of their presence. If the same object or effect succeeds in explaining a variety of observed phenomena, then we get strong indirect confirmation of its existence. “Prediction” or “forecasting”, on the other hand, is NOT a good method for confirming scientific laws, because we live in a complex world where a huge number of different laws operate at the same time. A valid law may fail to forecast well because of other factors in operation. Similarly, an invalid law may forecast well by chance. Given a sufficiently large number of models, one of them will automatically hit the right forecast without being correct. Success in prediction requires that there should be only one law in operation – this is what experiments try to achieve. They do so by creating artificial environments which screen out all other effects so as to highlight the effect they are looking for. Weak effects will fail to forecast well, because they will be overwhelmed by other, stronger factors operating in real-world situations.

Causal Explanation: Opposed to the idea of explanation by patterns is the idea of a real explanation, which goes beyond observed patterns to seek the hidden and unobserved real causes which create the pattern. The examples given in Simpson’s Paradox can be used to provide an illustration. Suppose we observe that the admit ratio of males is higher than the admit ratio of females at Berkeley. This is, by itself, a pattern, which can be converted into a law: if a female applies to Berkeley, her chances of getting admission are lower than those of a male who applies to Berkeley. However, causal explanation requires a deeper search for the reasons behind this discrepancy. The obvious hypothesis which suggests itself is that Berkeley discriminates against women. We then search for additional evidence to confirm whether or not this is true. This might lead us to look at the admissions data separately for each department. Examining these ratios leads to the reverse conclusion: each department discriminates in favor of women and against men. Looking at this breakdown led Bickel, Hammel, and O’Connell (1975), “Sex Bias in Graduate Admissions: Data from Berkeley”, to a rather different conclusion. The low admit rates for women arose because more women applied to departments which were more difficult to get into. As discussed in much greater detail in 2-Simpson’s Paradox, there is a wide variety of different causal structures, with radically different implications, all of which lead to the same set of observable data. So explanation by patterns is not really possible – patterns do not have any direct meaning, and must be interpreted in the context of a causal hypothesis about the underlying causes of the pattern.
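A minimal numerical sketch of this reversal, with invented admissions figures (not the actual Berkeley data): each department admits women at a higher rate, yet the aggregate rate favors men, because women apply disproportionately to the department that is harder to get into.

```python
# Simpson's paradox with invented admissions numbers (not the actual Berkeley data).
# Format: department -> {"men": (applicants, admitted), "women": (applicants, admitted)}
admissions = {
    "Easy Dept": {"men": (100, 80), "women": (20, 18)},   # 80% vs 90%: women favored
    "Hard Dept": {"men": (20, 2),   "women": (100, 20)},  # 10% vs 20%: women favored
}

def rate(applicants, admitted):
    return admitted / applicants

for dept, groups in admissions.items():
    m, w = rate(*groups["men"]), rate(*groups["women"])
    print(f"{dept}: men {m:.0%}, women {w:.0%}")

# Aggregating over departments reverses the direction of the comparison.
totals = {g: tuple(map(sum, zip(*(d[g] for d in admissions.values())))) for g in ("men", "women")}
print("Overall:", {g: f"{rate(*t):.0%}" for g, t in totals.items()})
# Each department favors women, yet overall men are admitted at about 68% and women at about 32%.
```

The aggregate pattern alone cannot tell us which of the rival causal stories is true; that requires looking beneath it.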

For the reasons discussed above, the positivist methodology of econometrics naturally leads to models which are over-fitted to the data, and which routinely fail to work outside the data sets on which they were fitted. This is because econometricians look for strong fits, instead of surprising patterns which require examination and explanation. This is explained in greater depth and detail in my paper on “Methodological Mistakes and Econometric Consequences” and in my video lecture on this topic.

 

 

I became an economist by mistake. The malicious will say that you can deduce it from the quality of my writings. I like to believe in the bizarre paths of Destiny on which the flights of human liberty stumble along.

Here I would like to link my personal experience – of little interest to the reader – to the far more interesting subject of the ongoing debate in economic science. Indeed, as is well known, particularly since the crisis began in 2007, a certain disillusionment has been growing about economists’ ability to foresee the course of events. While asking economists to foresee something perhaps pushes them into the sphere of magic, to which they do not belong, there is strong discontent with their ability to explain even events in progress. The beautiful and highly formal mathematical models developed over the course of decades do not serve to predict the future – and it astonishes me that someone might believe they could – and they also lack ex-post usefulness in interpretation. In short, they are not very useful.

Here I want to focus on the moment when I understood that something was wrong with the economics I was learning as a student. I enrolled in the economics faculty of the University of Verona in 1999, after two unsuccessful years spent working on a degree in computer science (I compensated for that lack in 2012 by marrying an Indonesian girl with a computer science degree). That choice was something of a fallback, a sort of last resort that reconnected me to my high school studies in accounting. In the spring of 2000, having to choose which exam to take, I focused on “history of economic thought”, which seemed to me to be useful for other exams. That year the department chairperson was on sabbatical, and the course was taught by Professor Sergio Noto, who still works at the University. Long story short, that course – taught as it was by Prof. Noto – was the beginning of a passion; I was struck in particular by Joseph Schumpeter, the economist to whom I dedicated my best years, and who has still not abandoned me.

Noto and Schumpeter (Austrian by birth but not of that school of thought) were my keys to entering the so-called Austrian school of economics; I will return to that later.

In addition to devouring Schumpeter, I began to stock up on books by and about the Austrian school. The experience that truly and radically changed the course of my studies was reading The Economics of Time and Ignorance, by Gerald O’Driscoll and Mario J. Rizzo. Only after many years did I discover that it was one of the foundational books of the youngest generation of Austrian economists, particularly those who considered themselves students of Ludwig Lachmann, but at the time I was instinctively struck by it even without being able to place it within its context.

One example in particular captured me, and I have repeated it for years to my students and in my seminars. As anyone who has studied economics knows, the textbook definition of “perfect competition” is an economic system in which the number of buyers and sellers is so high that no one is able to influence prices; everyone produces the same thing with the same characteristics, and the technology is given. The authors commented on that more or less like this: “Excuse me, but a system in which no one can influence prices, the products are all just alike, and there are no technological differences – isn’t that socialism? Doesn’t the word ‘competition’ suggest something more dynamic, as in sports, in which someone wins by virtue of a difference, whether it be in price, quality, technology, marketing, luck, etc.?”

For me that example was an epiphany. My microeconomics textbook – like basically all of them on the market – gave a definition of perfect competition that better described the exact opposite of competition (socialism). From then on I began to study more critically, and I tried to build up an alternative understanding based on the teachings of the Austrian school of economics. Moreover, that critical approach later allowed me to construct my own personal vision within the Austrian school, and today I find myself an unorthodox person within an unorthodox school.

The important lesson I drew from that epiphany was not only to approach my studies more critically; above all, I remain convinced that economics is useful only if it helps us understand reality. Of course some level of abstraction is necessary, but not to the detriment of its explanatory power.

In short, I am convinced that the economics in vogue, which today is primarily econometrics, reasons more or less like this: let’s take reality, empty it of the human element (that is, creativity, unpredictability, and non-determinism) and the flow of time (which is what brings novelty), and let’s build very elegant formal models where everything comes out right, because what we want to explain is already included in the hypotheses of a static model.

But what can we do with an economics without time and without people – that is, without ignorance? Very little, or nothing at all.

To be continued…