I've bashed laboratory experiments in a few posts. Now seems a good time to bash field experiments. With that in mind, below is a scientific diagram of the modal modern empirical paper in a good economics or political science journal.
Here, X is the independent variable. c.e. is the causal effect of the independent variable on Y, the dependent variable. T is a tree.
As you can see, the causal effect is a lightning flash. It is blindingly sharp and crisp. A randomized controlled trial has been implemented at great expense; or, there is a field experiment in which different values of X are randomized across the relevant units; or, the paper argues, at length and convincingly, that the independent variable X was assigned in a way uncorrelated with any other factors affecting the dependent variable – this is a true natural experiment. Or at the very least we have a great instrumental variable. The ingenuity and effort spent in getting at real causality in modern social science is inspiring, and it is a very good thing.
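To see why randomization earns that sharpness, here is a toy simulation – my own invention, with made-up numbers, not from any paper discussed here. A confounder biases the naive comparison of groups, while coin-flip assignment of X recovers the true effect:

```python
# Purely illustrative: an unobserved confounder U drives both X and Y.
# Comparing X=1 to X=0 in observational data is badly biased;
# randomizing X recovers the true effect (set here, arbitrarily, to 2.0).
import numpy as np

rng = np.random.default_rng(0)
n, true_effect = 100_000, 2.0
u = rng.normal(size=n)  # unobserved confounder

# Observational world: X is correlated with U.
x_obs = (u + rng.normal(size=n) > 0).astype(float)
y_obs = true_effect * x_obs + 3.0 * u + rng.normal(size=n)
naive = y_obs[x_obs == 1].mean() - y_obs[x_obs == 0].mean()

# Experimental world: X assigned by coin flip, independent of U.
x_rct = rng.integers(0, 2, size=n).astype(float)
y_rct = true_effect * x_rct + 3.0 * u + rng.normal(size=n)
rct = y_rct[x_rct == 1].mean() - y_rct[x_rct == 0].mean()

print(f"naive estimate: {naive:.2f}")  # biased well above 2.0
print(f"RCT estimate:   {rct:.2f}")    # close to 2.0
```

Randomization does all the work here: because the coin flip is independent of U, the difference in means is an unbiased estimate of the effect. That is the lightning flash.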
When we turn to X and Y, the picture is less encouraging.
X in particular is a mysterious grey cloud. What are its edges, where does it end and where begin? What is in the cloud? How does it link to other clouds, or are they all part of a single cloud system? It is hard to tell.
Y is rather the same. It's certainly something – after all, it has been measured – but it also looks rather blurry and indistinct. Is that grey bit part of it, or its shadow?
Here are some examples:
- I recently saw a presentation examining the effect of church attendance on social preferences and ingroup bias, by running experiments with people before and after church. Much could be, and was, said about the details of this design. (Maybe people after church were hungrier? Do you need a control with people who didn't go to church?) Less discussed was exactly what in church attendance was having the effects. The results were presented as "the effect of church attendance".
In economic theory, the world is traditionally composed of preferences, beliefs and choice sets. That is pretty minimal, and experimental work has shown that things other than these three affect behaviour - say, framing. But to make church attendance a primitive in our social theory seems over the top.
Of course, even if you do want to do this, and think there is such a thing as the effect of church attendance, the experiment was run in a particular year, in two cities of a particular country.
- Economic history is an obvious target. A well-known paper looks at the effect of the mita, a form of forced labour imposed by the Spanish Empire, on contemporary Peru. The effects are identified by exogenous geographical variation and persist to the present day in the form of lower consumption. There is no way this is not an interesting finding. But what is it a finding of? In one sense, it is the effect of the mita. But now we are back in a world of proper nouns: how do we generalize from this to make future predictions? Alternatively, we could talk about "extractive institutions", leaving us with an awkward identification problem: how do we know when an institution is extractive? (When it thwarts development?)
The same point holds for other papers. Work on the effect of the printing press seems somewhat more immune to this critique, because it is more natural to think of the printing press as in some sense a natural kind – a very specific technology, which could have been invented elsewhere than it was. Then there are papers on the effect of Protestantism. The effect?
- True experiments with randomized treatments are not immune to the problem. An admirable political science paper played a radio soap in Rwandan villages to examine the effect of media on intergroup behaviour and attitudes (there is an ungated psychology article on the same project). There are effects, but what is the treatment? Would it generalize to another soap? Another type of media? Another country? The author is aware of these issues.
Are structural models the answer? Well, these models explicitly, formally specify how we think our results generalize. They may be wrong, but it is better to be clearly wrong than "not even wrong".
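A cartoon of the difference, using a functional form I have simply made up for illustration: a structural model commits to Y = theta * log(1 + X), so its estimate of theta carries an explicit claim about values of X it has never seen, while a reduced-form slope carries no such claim and extrapolates badly.

```python
# Toy contrast (my own example): the structural model's functional-form
# assumption is what lets it generalize beyond the observed range of X.
import numpy as np

rng = np.random.default_rng(1)
theta = 5.0
x_a = rng.uniform(0, 1, 1000)  # context A: only small values of X
y_a = theta * np.log1p(x_a) + rng.normal(0, 0.1, 1000)

# Reduced form: best linear fit in context A.
slope, intercept = np.polyfit(x_a, y_a, 1)

# Structural: estimate theta directly from the assumed form
# (least squares through the origin on log(1+X)).
theta_hat = (y_a * np.log1p(x_a)).sum() / (np.log1p(x_a) ** 2).sum()

# Extrapolate both to context B, where X = 10.
x_b = 10.0
print("linear extrapolation:    ", slope * x_b + intercept)    # far off
print("structural extrapolation:", theta_hat * np.log1p(x_b))  # near truth
print("truth:                   ", theta * np.log1p(x_b))
```

Of course, if the assumed functional form is wrong, the structural extrapolation is wrong too – but it is wrong in a way you can state and test.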
Laboratory experiments actually come off reasonably well. A laboratory is, by definition, a controlled environment. So at least we know what treatment we applied and in what context. We still don't know how our Xs or Ys generalize, but the cheapness of the lab means it may be easier to explore the boundaries of these variables.
One of the questions we are facing is: do we want to do "social science"? That is, can we reduce the social world to a small set of natural kinds and forces (preferences, beliefs...) and generalize effectively from one situation to others which are "similar" in our theoretical framework? As I argued before, we have no reason to believe that lab results using the framework of traditional micro generalize in this way. But maybe an expanded framework will do better.
Alternatively, you can take the Popperian perspective. There is no grand social science, just "piecemeal social engineering". The Rwanda radio experiment, for example, doesn't prove any big claim about "the effect of media on intergroup prejudice". It just proves that it might be worthwhile funding some more radio soaps like the one the experimenters used.
This is a less appealing vision of social research for most scientists. It is about feeling our way blindly instead of seeing the light. That may be the best we can do. If so, then careful estimation of causality is worthwhile research in itself. On the other hand, if we are still ambitious to find general frameworks and ways of seeing the world, then we need to pay as much attention to "identifying" X and Y as to identifying causal effects.
Lastly, the tree T plays no part in the analysis. It is just there to add realism.