Cause and defect
From The Economist print edition
Instrumental variables help to isolate causal relationships. But they can be taken too far
“LIKE elaborately plumed birds…we preen and strut and display our t-values.” That was Edward Leamer’s uncharitable description of his profession in 1983. Mr Leamer, an economist at the University of California in Los Angeles, was frustrated by empirical economists’ emphasis on measures of correlation over underlying questions of cause and effect, such as whether people who spend more years in school go on to earn more in later life. Hardly anyone, he wrote gloomily, “takes anyone else’s data analyses seriously”. To make his point, Mr Leamer showed how different (but apparently reasonable) choices about which variables to include in an analysis of the effect of capital punishment on murder rates could lead to the conclusion that the death penalty led to more murders, fewer murders, or had no effect at all.
In the years since, economists have focused much more explicitly on improving the analysis of cause and effect, giving rise to what Guido Imbens of Harvard University calls “the causal literature”. The techniques at the heart of this literature—in particular, the use of so-called “instrumental variables”—have yielded insights into everything from the link between abortion and crime to the economic return from education. But these methods are themselves now coming under attack.
Instrumental variables have become popular in part because they allow economists to deal with one of the main obstacles to the accurate estimation of causal effects—the impossibility of controlling for every last influence. Mr Leamer’s work on capital punishment demonstrated that the choice of controls matters hugely. Putting too many variables into a model ends up degrading the results. Worst of all, some relevant variables may simply not be observable. For example, the time someone stays in school is probably influenced by his innate scholastic ability, but this is very hard to measure. Leaving such variables out can easily lead econometricians astray. What is more, the direction of causation is not always clear. Working out whether deploying more policemen reduces crime, for example, is confounded by the fact that more policemen are allocated to areas with higher crime rates.
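To see the first of these problems concretely, consider a minimal simulation (a sketch of the logic only; the variables and magnitudes are invented, not drawn from any of the studies discussed here). When unobserved ability raises both schooling and wages, a naive regression of wages on schooling overstates the return to education:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ability = rng.normal(size=n)                       # the unobservable confounder
schooling = 12 + 2 * ability + rng.normal(size=n)  # the able stay in school longer
wages = 1.0 * schooling + 3 * ability + rng.normal(size=n)  # true return per year: 1.0

# Naive OLS slope of wages on schooling: cov(schooling, wages) / var(schooling).
ols_slope = np.cov(schooling, wages)[0, 1] / np.var(schooling, ddof=1)
print(f"OLS estimate of the return to schooling: {ols_slope:.2f}")  # roughly 2.2
```

The regression attributes to schooling the extra pay that in fact flows from ability, because the two move together.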
Instrumental variables are helpful in all these situations. Often derived from a quirk in the environment or in public policy, they affect the outcome (a person’s earnings, say, to return to the original example) only through their influence on the input variable (in this case, the number of years of schooling) while at the same time being uncorrelated with what is left out (scholastic ability). The job of instrumental variables is to ensure that the omission of factors from an analysis—in this example, the influence of scholastic ability on both schooling and earnings—does not end up producing inaccurate results.
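Continuing the simulated example above, here is a hedged sketch of the standard two-stage least-squares recipe (the instrument z stands in for whatever policy quirk a researcher might find; by construction it shifts schooling but is independent of ability):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ability = rng.normal(size=n)          # still unobserved
z = rng.binomial(1, 0.5, size=n)      # instrument: moves schooling, not ability
schooling = 12 + 2 * ability + 0.5 * z + rng.normal(size=n)
wages = 1.0 * schooling + 3 * ability + rng.normal(size=n)  # true return: 1.0

# Stage 1: regress schooling on the instrument and keep the fitted values,
# i.e. only the variation in schooling that z induces.
X1 = np.column_stack([np.ones(n), z])
beta1, *_ = np.linalg.lstsq(X1, schooling, rcond=None)
schooling_hat = X1 @ beta1

# Stage 2: regress wages on the fitted (exogenous) part of schooling.
X2 = np.column_stack([np.ones(n), schooling_hat])
beta2, *_ = np.linalg.lstsq(X2, wages, rcond=None)
print(f"2SLS estimate of the return to schooling: {beta2[1]:.2f}")  # close to 1.0
```

Because the fitted values carry none of ability’s influence, the second-stage slope lands near the true return rather than the inflated OLS figure.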
In an influential early example of this sort of study, Joshua Angrist of the Massachusetts Institute of Technology (MIT) and Alan Krueger of Princeton University used America’s education laws to create an instrumental variable for years of schooling based on a child’s quarter of birth. These laws mean that children born earlier in the year are older when they start school than those born later in the year, which means they have received less schooling by the time they reach the legal leaving-age. Since a child’s birth date is unrelated to intrinsic ability, it is a good instrument for teasing out schooling’s true effect on wages. Over time, such instrumental variables have become a standard part of economists’ toolkit. Freakonomics, the 2005 bestseller by Steven Levitt and Stephen Dubner, provides a popular treatment of many of the techniques. Mr Levitt’s analysis of police and crime, which exploits American election cycles, when police numbers rise for reasons unconnected to crime rates, is a celebrated example of an instrumental variable at work.
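With a binary instrument of this sort, the IV estimate reduces to what is known as the Wald ratio: the gap in average wages between the two instrument groups divided by the gap in average schooling. A brief sketch (the born_early flag and all magnitudes are hypothetical, loosely mimicking the quarter-of-birth design):

```python
import numpy as np

def wald_estimate(z, schooling, wages):
    """IV estimate with a binary instrument: a ratio of differences in means."""
    high, low = z == 1, z == 0
    return (wages[high].mean() - wages[low].mean()) / (
        schooling[high].mean() - schooling[low].mean()
    )

rng = np.random.default_rng(2)
n = 100_000
ability = rng.normal(size=n)
born_early = rng.binomial(1, 0.25, size=n)  # stand-in for "born in the first quarter"
schooling = 12 + 2 * ability - 0.3 * born_early + rng.normal(size=n)
wages = 1.0 * schooling + 3 * ability + rng.normal(size=n)
print(f"Wald/IV estimate: {wald_estimate(born_early, schooling, wages):.2f}")  # near 1.0
```

Note that the ratio is driven entirely by people whose schooling actually responds to the instrument, which is the nub of the criticism described next.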
Two recent papers—one by James Heckman of the University of Chicago and Sergio Urzua of Northwestern University, and another by Angus Deaton of Princeton—are sharply critical of this approach. The authors argue that the causal effects that instrumental-variable strategies identify are uninteresting because such techniques often give answers to narrow questions. The results from the quarter-of-birth study, for example, do not say much about the returns from education for college graduates, whose choices were unlikely to have been affected by when they were legally eligible to drop out of school. According to Mr Deaton, using such instruments to estimate causal parameters is like choosing to let light “fall where it may, and then proclaim[ing] that whatever it illuminates is what we were looking for all along.”
This is too harsh. It is no doubt possible to use instrumental variables to estimate effects on uninteresting subgroups of the population. But the quarter-of-birth study, for example, shone light on something that was both interesting and significant. The instrumental variable in this instance allows a clear, credible estimate of the return from extra schooling for those most inclined to drop out of school early. These are precisely the people whom a policy that sought to prolong schooling would target. Proponents of instrumental variables also argue that accurate answers to narrower questions are more useful than unreliable answers to wider questions.
A more legitimate fear is that important questions for which no good instrumental variables can be found are getting short shrift because of economists’ obsession with solving statistical problems. Mr Deaton says that instrumental variables encourage economists to avoid “thinking about how and why things work”. Striking a balance between accuracy of result and importance of issue is tricky. If economists end up going too far in emphasising accuracy, they may succeed in taking “the con out of econometrics”, as Mr Leamer urged them to—only to leave more pressing questions on the shelf.