The experimental government: what works best?

To highlight the significance of a more experimental and empirical approach to public policy, David Halpern, head of the UK's Behavioural Insights Team (the "Nudge Unit"), proposed the notion of experimental government in 2015. The relevance of experimental government is clear in Halpern's own words:

Governments, public bodies and businesses regularly make changes to what they do. Sometimes these changes are very extensive, such as when welfare systems are reformed, school curricula are overhauled, or professional guidelines are changed. No doubt those behind the changes think they are for the best. But without systematic testing, this is often little more than an educated guess. To me, this preparedness to make a change affecting millions of people, without testing it, is potentially far more unacceptable than the alternative of running trials that affect a small number of people before imposing the change on everyone.

Randomized controlled trials (RCTs) are at the heart of figuring out "what works best" once testing becomes routine policy and practice. By running trials, governments could gradually identify the measures that actually work, and policymakers could rank interventions as more or less effective on the basis of the results.
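The core logic of an RCT can be sketched in a few lines. In this minimal, purely illustrative simulation (all numbers are invented for the example), each person is assigned to treatment or control by a coin flip, and the policy's effect is estimated as the difference between the two groups' average outcomes; randomization is what makes that difference an unbiased estimate.

```python
import random

random.seed(42)

def run_rct(true_effect, n=10_000):
    """Simulate a two-arm trial: randomly assign each person to treatment
    or control, then estimate the effect as the difference in group means."""
    treated, control = [], []
    for _ in range(n):
        outcome = random.gauss(50, 10)   # hypothetical baseline outcome
        if random.random() < 0.5:        # coin flip = random assignment
            treated.append(outcome + true_effect)
        else:
            control.append(outcome)
    return sum(treated) / len(treated) - sum(control) / len(control)

# With enough participants, the estimate lands near the true effect of 2.0.
print(round(run_rct(true_effect=2.0), 1))
```

The estimate is close to the true effect only on average and in large samples, which is exactly where the critiques discussed below take hold.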

Considering the controversies around the institutional sterility of RCTs and the laboratory environment, one of the main questions at stake is: why can the results of RCTs not simply be transported to policy contexts?

Deaton and Cartwright (2016) pointed out that there are misunderstandings around what RCTs can really do. For them, the induction technique does not guarantee that the relevant causal factors are balanced across sample groups in any given RCT. The results of the inference process might therefore be wrong. Indeed, the results of RCTs can be challenged ex post, after examining the composition of the control group and the factors considered in the experimental setting.

Deaton and Cartwright also dismissed the transportation of RCT results to other contexts, since the causality behind the results is always context-dependent. Decision-making in experimental environments relies on contextual factors that may be different elsewhere. Therefore, empirical economics does not provide a credible basis for economic theory and policy when it relies on inductive investigation techniques whose results can never be completely transported across time and space.
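The context-dependence argument can be made concrete with a toy calculation. In this hypothetical sketch (the effect sizes and population shares are invented for illustration), an intervention only helps "responders", and the share of responders is the contextual factor that differs between the trial site and the site where the policy is later imposed. The trial's average effect is then a fact about the trial population, not about the intervention as such.

```python
import random

random.seed(0)

def average_effect(share_responders, n=10_000):
    """Average treatment effect in a population where only 'responders'
    benefit (effect 5.0) and everyone else sees no effect (0.0).
    The responder share is the context-dependent causal factor."""
    total = 0.0
    for _ in range(n):
        responder = random.random() < share_responders
        total += 5.0 if responder else 0.0
    return total / n

print(round(average_effect(0.8), 1))  # trial site (80% responders): ~4.0
print(round(average_effect(0.2), 1))  # target site (20% responders): ~1.0
```

The same intervention "works" in one place and largely fails in the other, even though both numbers are correct for their own populations, which is precisely why transporting an RCT result requires knowing why it worked.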

Moreover, economists Steven D. Levitt and John A. List (2007) highlighted that human behaviour in RCTs can be affected by the selection of the individuals, the context, the scrutiny of actions by others, and ethical concerns. Consequently, the findings in a laboratory setting may overestimate or underestimate the outcomes of real-life interactions.

In other words, if an intervention "works" and makes people better off in the laboratory, there is no guarantee that it will actually do so in the real world. As a matter of fact, RCTs run the risk of mistaking spurious correlations for causal relationships in the attempt to theorize about economic issues. In short, without understanding why the effects operate in society, the results of RCTs cannot be transferred, and the normative conclusions of economic studies are open to challenge.

From a critical point of view, Michel Foucault (1981) emphasized that human beings are trapped in practices of domination that shape their subjectivities within historically situated social relations, practices and institutions. His philosophical contribution calls for reflection on both history and the mechanisms of power when building economic theories and policies.

Indeed, in economics, considered as a social science, what "works" in the laboratory does not necessarily work in the real world.



Deaton, A. and Cartwright, N. (2016). Understanding and misunderstanding randomized controlled trials. NBER Working Paper No. 22595.

Foucault, M. (1981). As Palavras e as Coisas (The Order of Things). Coleção Tópicos. São Paulo: Martins Fontes.

Halpern, D. (2015). Inside the Nudge Unit: How Small Changes Can Make a Big Difference. London: WH Allen.

Levitt, S. D. and List, J. A. (2007). What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives, 21 (2): 153–174.

Madi, M. A. C. (2019). The Dark Side of Nudges. London: Routledge.

