Got round to this last week. It's refreshingly short, and timely, because right now applied game theorists like me are kind of hunting for justifications for our lives.
The key ideas are, first, that models do many things beyond making predictions, such as organizing data or working out the logic of a particular world view. This is not new, but worth reiterating. Second, we are testing things wrong. Nowadays most articles with formal models test their implications (for example, a comparative static) and treat a confirmed implication as evidence for the model. That is the classic logical error of affirming the consequent: if model X implies prediction Y, and prediction Y is true, it does not follow that model X is true. Switching to probabilistic logic does not help, either. If model X implies Y with high probability, and Y is true, it does not follow that model X has a high probability of being true. It does not even follow that I should put higher probability on model X than before I observed Y.
Example. There is a large animal in this box. It could be a dog, a horse, or a unicorn. The ears are sticking out of the box, and they are brown and equine. The unicorn theory scores well on this evidence, because unicorn ears are equine. But the horse theory scores even better, because horses are often brown, whereas unicorns are white. (Right?) Formal proof at the bottom.
The book's claim is that many political science articles are deducing unicorns from horse ears, without considering horses.
This would not be such a problem if we knew all the alternative theories. We could just apply Bayes' rule. Unfortunately, in the complex world of politics, the set of potential theories is unimaginably vast, and we cannot possibly think of them all. (Thinking of new theories is our prime job, after all.)
I am not sure what the solution is. My instinct is that testing many different implications of the theory is a good idea, including specific causal mechanisms. Political economy is especially bad at this: there is a tendency to test complex, highly specific theories with a single macroeconomic implication (e.g., "spending on public goods is higher under proportional representation").
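Here is a minimal sketch of why this helps, with entirely invented numbers (the likelihoods, and the assumption that the implications are conditionally independent given the theory, are mine, purely for illustration): if a rival theory predicts the same macro result, a single confirmed implication barely moves the posterior, but adding tests of the mechanism and other distinct implications separates the theories quickly.

# Sketch: why testing several implications beats testing one.
# Two rival theories; invented likelihoods for three distinct observed
# implications, assumed conditionally independent given the theory.
priors = {"theory_X": 0.5, "rival": 0.5}

# Probability that each observed implication holds under each theory:
# [macro comparative static, causal mechanism, timing of the effect]
likelihoods = {
    "theory_X": [0.9, 0.8, 0.9],
    "rival":    [0.8, 0.3, 0.4],  # the rival also predicts the macro result
}

def posterior_x(n_tests):
    """Posterior on theory_X after the first n_tests implications all hold."""
    joint = {t: priors[t] for t in priors}
    for i in range(n_tests):
        for t in joint:
            joint[t] *= likelihoods[t][i]
    return joint["theory_X"] / sum(joint.values())

for n in range(4):
    print(f"after {n} implication(s): P(theory_X) = {posterior_x(n):.2f}")
# 0.50 -> 0.53 -> 0.75 -> 0.87: the macro implication alone barely
# separates the theories; the mechanism and timing tests do the work.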
This idea was new to me and inspired some thought. My one quibble with the book is that the current emphasis on model testing does not come just from an abstract philosophical debate (e.g., Green and Shapiro's book); it is also a reaction to an overemphasis on formal theorizing a few years back. The pendulum naturally swings between theory and empirics; at the moment, looking at the social sciences broadly, we have probably swung too far the other way: too many empirical results, and not enough theory connecting them. Whether the next wave of theories will be formal is an interesting question.
[Proof about the unicorn:
By Bayes' rule:
Prob(unicorn given ears) = Prob(unicorn) × Prob(ears given unicorn) / Prob(ears)
so my posterior is only higher than my prior, Prob(unicorn), if
Prob(ears given unicorn) > Prob(ears)
and we can expand the right-hand side by the law of total probability:
Prob(ears) = Prob(unicorn) × Prob(ears given unicorn) +
             Prob(horse) × Prob(ears given horse) +
             Prob(dog) × Prob(ears given dog)
which can very easily be greater than Prob(ears given unicorn), if Prob(ears given horse) is high enough.]
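To make the proof concrete, here is a quick numeric check; every prior and likelihood below is a number I invented for illustration.

# Numeric check of the unicorn example (all numbers made up).
priors = {"unicorn": 0.01, "horse": 0.49, "dog": 0.50}

# Prob(brown equine ears | theory): unicorn ears are equine,
# but unicorns are white, so brown ears are unlikely under that theory.
likelihood = {"unicorn": 0.05, "horse": 0.60, "dog": 0.01}

# Law of total probability: Prob(ears)
p_ears = sum(priors[t] * likelihood[t] for t in priors)

# Bayes' rule for each theory
posterior = {t: priors[t] * likelihood[t] / p_ears for t in priors}

for t in priors:
    print(f"{t}: prior {priors[t]:.3f} -> posterior {posterior[t]:.3f}")
# unicorn: 0.010 -> 0.002 (the equine ears *lowered* the unicorn's probability)
# horse:   0.490 -> 0.982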