What works? Policy design without theory is useless

In contrast to a rationalist, top-down approach to policy making, evidence-informed policy and practice has evolved rapidly over the last two decades.

In this line of research, a new book, What Works Now? Evidence-informed Policy and Practice, has been edited by Annette Boaz, Huw Davies, Alec Fraser and Sandra Nutley. It offers not only a synthesis of the role of evidence in policy making but also an analysis of its use in recent economic models and practices in the UK, Australia, New Zealand, Scandinavia, Canada and the United States. Beyond the diversity of policy and practice settings where evidence is sought and applied, the book considers policy examples related to healthcare, social care, criminal justice, education, the environment and international development. At the core of the argument regarding the actual relevance of ‘know-about’, ‘know-what works’, ‘know-how’, ‘know-who’ and ‘know-why’ is the belief that evidence matters.

Considering this policy scenario, the relevant question at stake is: what are the implications of new policy design practices that rely mainly on the belief that evidence matters?

What is important to note is that behind the belief that evidence matters lies a deep transformation of the public policy approach towards a more experimental and empirical one.

In this respect, David Halpern, the leader of the UK Nudge Unit, recently proposed the concept of ‘experimental government’ to characterize the new evidence-based approach to policy making. The relevance of the potential outcomes of systematic testing is clear in Halpern’s words:

Governments, public bodies and businesses regularly make changes to what they do. Sometimes these changes are very extensive, such as when welfare systems are reformed, school curricula are overhauled, or professional guidelines are changed. No doubt those behind the changes think they are for the best. But without systematic testing, this is often little more than an educated guess. To me, this preparedness to make a change affecting millions of people, without testing it, is potentially far more unacceptable than the alternative of running trials that affect a small number of people before imposing the change on everyone.

At the heart of his proposal about “what works best” in public policy is the use of evidence as a regular practice to select the measures that actually operate most efficiently. Moreover, no ethical considerations about the efficiency of methods and goals in policy making are added to his explanation.

Taking into account the methodologies that support policy practices favouring inductive reasoning and randomized controlled trials (RCTs) for impact evaluation, there is controversy around the use of these attempts to build experimental programmes or policy interventions. For instance, Deaton and Cartwright (2016) pointed out that there are misunderstandings around what RCTs can really do. For them, the inductive techniques used in research do not guarantee that the relevant causal factors are taken into account across sample groups in any given RCT. Therefore, the results of the inference process might be wrong. Indeed, the outcomes of RCTs can be challenged ex post, after examining the composition of the control group and the factors considered in the experimental setting. Moreover, Deaton and Cartwright also rejected the transportation of RCT outcomes to other contexts, since the relations of causality between variables are always context-dependent.

As the policy decision-making process in the real world relies on institutional factors that may differ elsewhere, a methodology based on RCTs does not provide a credible basis for policy making. In short, the outcomes of inductive investigation can never be completely transported across time and space.
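The transportability problem can be made concrete with a minimal simulation. The sketch below is illustrative only: the `treatment_effect` function, the support-factor shares and all numbers are hypothetical assumptions, not anything estimated by Deaton and Cartwright. It shows how an RCT can measure a large average effect in a trial population where a complementary contextual factor is common, while the same causal mechanism yields almost no effect in a target population where that factor is rare.

```python
import random

random.seed(42)

def treatment_effect(has_support):
    # Hypothetical data-generating process: the intervention helps
    # only individuals who also enjoy a contextual support factor
    # (e.g. a complementary institution). All numbers are illustrative.
    return 2.0 if has_support else 0.0

def run_rct(p_support, n=10_000):
    # Simulate the treated arm of a trial in a population where a
    # share `p_support` of participants has the support factor, and
    # return the average treatment effect the trial would measure.
    total = 0.0
    for _ in range(n):
        total += treatment_effect(random.random() < p_support)
    return total / n

# Trial context: the support factor is common, so the policy "works".
ate_trial = run_rct(p_support=0.9)

# Target context: the same factor is rare, so the measured effect
# largely disappears even though the causal mechanism is unchanged.
ate_target = run_rct(p_support=0.1)
```

The point is that the average effect is a property of the population's composition, not of the intervention alone, which is why a trial estimate from one context does not automatically travel to another.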

Moreover, the economists Steven D. Levitt and John A. List (2007) highlighted that human behaviour in RCTs can be affected by the selection of the individuals, the evaluation of their actions by others, and ethical issues. Consequently, findings in a laboratory setting may overestimate or underestimate the effectiveness of policy interventions in real-life interactions. In other words, if a policy intervention “works” and makes people better off in a laboratory, there is no guarantee that it will actually do so in the real world.

In fact, the methodology of RCTs runs the risk of treating spurious relationships as relevant causal ones in the attempt to develop policy recommendations. In short, the use of RCT outcomes as normative guidance for policy making should be called into question.

“What works” in the “sterile” environment of a laboratory does not necessarily work in a real world where social interactions and the dynamics of institutions are overwhelmed by power relations. Therefore, ethical considerations should inform any attempt to build policy proposals.

Indeed, the transformation of the economic policy approach has evidently been a remarkable one. It is worth recalling the words of Lars Syll on the current sad state of economics as a science:

A science that doesn’t self-reflect and ask important methodological and science-theoretical questions about its own activity is a science in dire straits. The main reason why mainstream economics has increasingly become more and more useless as a public policy instrument is to be found in its perverted view on the value of methodology.

References
Boaz, A., Davies, H., Fraser, A. and Nutley, S. (eds) (2019). What Works Now? Evidence-informed Policy and Practice. Policy Press.
Deaton, A. and Cartwright, N. (2016). Understanding and misunderstanding randomized controlled trials. NBER Working Paper No. 22595.
Halpern, D. (2015). Inside the Nudge Unit: How Small Changes Can Make a Big Difference. London: WH Allen.
Levitt, S. D. and List, J. A. (2007). What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives, 21 (2): 153–174.
Madi, M.A.C (2019). The Dark side of Nudges. London: Routledge.
Syll, L. (2019). Economics becomes more precise and rigorous — and totally useless. April 4. https://rwer.wordpress.com/2019/04/04/economics-becomes-more-precise-and-rigorous-and-totally-useless/

12 thoughts on “What works? Policy design without theory is useless”

  1. You may want to consider System Dynamics modeling. One can examine infinite combinations of what-ifs and feedbacks in terms of human behavior and decision making. It works extremely well in predicting the impact of various monetary systems on economic stability.

    1. Hi Paul, Thanks for your comment. I am concerned with the lack of a realistic theory that can support the evidence-based predictions. What is the theoretical background of the what-ifs? Are there infinite combinations?

      Maria

  2. Maria, excellent summary of the situation we face. But in my view the focus of your post, and particularly its title, misses the main issue at hand. Let me give you some examples from one of my areas of work, anthropology. Participatory rural appraisal (PRA) is one of a family of participatory research methods widely used to plan or assess development projects and programs. PRA is an important part of the toolkit of the contemporary development professional that enables “rural people to share, enhance, and analyze their knowledge of life and conditions, to plan, and to act” (Chambers 1994: 953). Through PRA and with the cooperation of community members, development professionals can discover and document local conditions that are relevant to planning programs and projects that are culturally appropriate and consistent with local needs and priorities. As a method, PRA is consistent with the historic pattern of anthropological research practice. Many anthropologists have contributed to the development of the method. PRA utilizes many data collection and documentation techniques within the context of a strong commitment to participation. The specialized techniques performed by community members include mapping the institutions or physical setting of a community and constructing seasonal calendars, health and wealth rankings, and rating community preferences (Rietbergen-McCracken and Narayan 1998). These data collection practices are richly innovative and useful in many contexts. PRA can be used in a wide range of sectors such as natural resource utilization and management, poverty assessment, agriculture, health and nutrition, urban needs assessment, and food security (Chambers 1994). PRA and similar approaches can also be utilized in building projects, decision making about community design, education projects, etc. In this context, I’ll go one step further than your assessment of inductive reasoning and randomized controlled trials (RCTs) for impact evaluation.
We must reject any approach that is led by or defined by “scientific” researchers, in or out of the laboratory, seeing people in the community only as informants and the scientists, with their “superior” methods and theories, as the sources of the evidence, as well as of its interpretation and application. Only through mutual and equal co-participation of scientists and people rooted in the community can we plan, develop, set in motion, and assess projects and programs for that community. And lest there be a misunderstanding, the community involved can be a small or large town, a region, an entire nation, a business corporation, an ecclesiastical organization, a government, etc. Here scientific theories are superfluous. The community provides the initial theories of its needs and desires, and of the culturally acceptable paths available. These may change during the planning process, but only via partnership. Anthropologists and other scientists plan and design projects within these parameters, which are then put in motion and assessed by community members and scientists together. This approach is sometimes hard on the egos of scientists, but the results have been positive in terms of the needs and cultural foundations of the community.

    1. Dear Ken,

      We certainly need theories to think about reality and the process of change. Indeed, realistic scientific beliefs should be built considering everyday-life experiences from a historical and institutional perspective.

      That is why my concern lies in the role of the “laboratory experiments” that have spread in policy making (mainly associated with behavioural economics), and in the rationalist top-down approach that reinforces the disembeddedness of policy making.

      Many thanks for your interesting comment and for clarifying what is relevant for further developing my ideas regarding the partnership between community members and scientists.

      Maria

      PS. Thanks for the reading suggestions

      1. Maria, your job as a social scientist is to study and describe humans’ beliefs and actions and the theories these humans make up to explain their beliefs and actions. It’s not your job as a social scientist to replace people’s theories with your own, any more than it’s your job to replace their beliefs and actions with your own. Sure, mull over some hypotheses and possible scenarios for doing this job. More than this and you’re failing your obligations as a social scientist. Stimulating talk.

  3. Sure! I agree.

    Other stimulating topic: the relationship between theory and ideology

    Thanks for sharing your ideas on this blog

    Maria

  4. MMT is a reality-based system. There is plenty of evidence that it responds rationally to examination. It is ready-made to be accepted and to work vastly better than the fairy tales of classical economics. Vastly improved results will accrue instead of the evils of the neoliberal, even predatory, capitalism that holds sway today.

    1. John, “MMT is a reality based system.” Which reality? It sounds something like the reality of oil field workers I studied 4 years ago. But nothing like the reality of the south Texas banks that financed those oil fields.
