Democracy of Evidence

Richard Hahn

March 27, 2024

Liberating economics from the academy

Making a causal claim — the central goal of economic evaluation — means isolating the measurable effect of a certain cause by minimizing the chance of alternative explanations. Therein lies the majesty of economics. Correctly applied, economic techniques of measurement — econometrics — paint highly accurate portraits of the relationships between specific causes and effects. But the paintings are necessarily miniatures. Expanding their canvases to comprehend the breadth of human nature leaves the pictures patchy and pixelated.

Econometrics helps us get to the truths of very small matters. Any attempt to apply these truths to very large matters requires a strong draught of assumptions. But we need not drink this brew — at least not all of us.

The solution is not to abandon the economic tools that make causal research possible, but instead to put them in more hands and apply them to smaller problems — a democratization of knowledge generation that could itself be a big social change.

The academics who dominate policy research are in the business of adding to general knowledge, and the quickest and most prestigious way to do this in the social sciences is to ask big questions about human nature. To what degree does ambiguous information encourage people to take chances? What is the return on welfare spending in terms of health and happiness? Do two-parent households lead to better outcomes among children? These are all questions that could lead to journal publications, invited talks and book deals.

This incentive to tell stories about human behavior is not inherently bad, and most good scholars are transparent about their assumptions and uncertainty. It causes harm only when we rely almost exclusively on academics to identify and test policy solutions. In my experience as a program evaluator and policy analyst for governments at all levels, I have seen how the econometric tools now nearly monopolized by academic social scientists could be liberated and shared with the very people who make and implement social policies.

Government offices, nonprofit organizations and other agents of social change make specific policies for specific populations, and they do so on a daily basis, yet they rarely measure whether their policies have the intended effects. More often, they rely on intuition or politics to guide their decisions. When they do conduct evaluations, they are cumbersome affairs involving protracted timelines and external evaluators, usually from academia. That burden is unnecessary, particularly for the workaday policies that directly affect most people’s lives.

Estimating the size of the effect of a specific policy on a specific population is often relatively simple. Agencies and organizations collect all sorts of data on the people they serve and manage. All it takes to get an accurate estimate — that is, to adequately eliminate the chance that any change in the data is due to a cause other than the policy in question — is to randomly assign some people to conditions specified by the policy while leaving the rest with the status quo. When random assignment would be unethical or politically infeasible, other methods often exist to mimic random assignment. Some policies are too big, too vague or otherwise inappropriate for this sort of measurement, but most are not.
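As a concrete sketch, here is what that random assignment might look like in code. The roster, group labels and sample size below are illustrative assumptions, not a recipe; the point is that the mechanics fit in a dozen lines.

```python
import random

# Hypothetical roster of client IDs an agency already has on file.
clients = [f"client_{i}" for i in range(200)]

random.seed(42)          # fix the seed so the assignment is reproducible and auditable
random.shuffle(clients)  # shuffle, then split down the middle: a simple randomization

midpoint = len(clients) // 2
treatment = clients[:midpoint]  # these clients receive the new policy
control = clients[midpoint:]    # these clients keep the status quo

assignment = {c: "treatment" for c in treatment} | {c: "control" for c in control}
```

Because the split is random, the two groups should differ, on average, only in whether they experienced the policy, so a later difference in their outcomes can be credited to the policy itself.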

The hard parts are a willingness to phase in policies gradually, a little planning ahead, careful supervision of how agents implement the policies and accurate data collection. Next to these, the calculations necessary to measure changes in the data are relatively simple. Many agencies and other public interest entities lack the inclination or leadership to do this work, but to assume they lack the ability to use experimental tools is preposterous. After all, few policies are made in haste, most public organizations have hierarchical structures and can easily capture data, and all employ people who can do basic math.
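As a sketch of how basic that math is, the snippet below estimates a policy’s effect from made-up outcome data (the numbers, and the outcome itself, are illustrative): a difference in group means plus a rough confidence interval, which any spreadsheet could also produce.

```python
from math import sqrt
from statistics import mean, stdev

# Illustrative outcomes (say, months until job placement) for each group.
treatment_outcomes = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.2, 3.6]
control_outcomes = [5.2, 4.9, 5.8, 5.1, 4.6, 5.5, 5.0, 4.8]

# The estimated effect is just the difference in group means.
effect = mean(treatment_outcomes) - mean(control_outcomes)

# Standard error of the difference, treating the groups as independent.
n_t, n_c = len(treatment_outcomes), len(control_outcomes)
se = sqrt(stdev(treatment_outcomes) ** 2 / n_t + stdev(control_outcomes) ** 2 / n_c)

# Rough 95% confidence interval (normal approximation; small samples deserve a t-table).
print(f"effect: {effect:.2f}, 95% CI: [{effect - 1.96 * se:.2f}, {effect + 1.96 * se:.2f}]")
```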

In some ways, the people on the ground are actually at an advantage. Academics who evaluate policies from the outside have to make strong assumptions about how policies are implemented and how data are collected, but practitioners don’t have to make the same leaps, because they are doing the work. Practitioner-led research is rarely perfect, or even publishable, but the very act of questioning and systematically measuring the outcomes of policies is itself a huge step forward.

Megan Stevenson compares social scientists to engineers who attempt to manipulate social processes from on high, but the tools of econometric research are more akin to those of a carpenter than those of an engineer. While the engineer has to plan complex processes in minute detail, a carpenter merely measures and responds to the needs of the moment, one room at a time. Government agencies and other social institutions are like carpenters. They rarely have the mandate or the resources to make sweeping structural changes to how we manage social problems. Instead, they work incrementally, trying to shore up one side, then another.

My main critique of Stevenson’s argument is that, in cataloging the ways in which causal science fails to provoke or measure big changes in how society behaves, she overlooks perhaps the most important potential social change of them all: a growing curiosity among institutions that work toward the public good about the effectiveness of their own operations. Stevenson’s paper should not dampen that flame. Good public servants will continue to light their torches from it.