Tuesday, 22 September 2015
Experimental economics after the demolition job on theory
The front cover of Richard Thaler's autobiography Misbehaving shows a single bird, looking quizzically at a huge flock of birds who are all flying in the same direction. It's a metaphor for behavioural economics. The single bird is the lone behaviourist; the flock is the economic theorists who all think the same way. And indeed, the book is a story of conflicts in which standard economic theory and presuppositions turned out to be wrong.
Economic experiments started as critique. A typical experiment takes real humans and puts them into a situation designed to be as much as possible like an economic model. Payoffs are well-defined. Subjects know all about the experiment in advance. Play is anonymous - so there's no chance to "change the game" by, say, taking your partner out for a pint afterwards. None of this is anything like the real world. But, if economic theory doesn't even get it right in these situations, what hope does it have in the real world?
I happen to think that the critique has been successful. Sensible economists no longer take (e.g.) rational maximization of monetary payoffs, or Bayesian updating, as Gospel which somehow must be true. They accept that other things might happen.
Having performed this negative task, experimentalists have gone further. They want to build their own "behavioural" theories of how people do act. But now there is a problem.
Standard theory is a function from game forms to behaviour. Given such-and-such a set of players, payoffs, information sets and action spaces, Nash equilibrium predicts such and such behaviour from each player. If standard theory is right, then people in a lab experiment which implements a "game" in this way will behave exactly as the theory predicts. They don't, so standard theory is falsified. Fine.
But it does not follow that there is another theory, which also takes game forms as the input, and which will always correctly spit out people's behaviour. Indeed, we know that this does not work. For example, people behave very differently if you call a public goods game "the community game" or "the Wall Street game". But the game-theoretic description is unaltered by this label. There is no correct theory of behaviour which depends only on game forms.
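To put the point in code: here is a toy sketch of my own in R (nothing to do with any particular published theory). A theory of this type is a function whose only argument is the game form, so it is forced to make the same prediction whatever label the experimenter announces.

# The game form: players, actions, payoff parameters - but no field for the label.
game_form <- list(players = 2, actions = c("contribute", "keep"),
                  endowment = 10, multiplier = 1.5)

# A theory of this type can only look at the game form...
predict_behaviour <- function(game_form) "keep"   # say, the selfish prediction

# ...so it necessarily predicts the same behaviour under either framing:
predict_behaviour(game_form)   # announced as "the community game"
predict_behaviour(game_form)   # announced as "the Wall Street game"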
Unfortunately, lab experimenters have blithely continued to write down game forms that correspond to real world situations; implement them in the lab; observe behaviour, perhaps building a model to predict it; and assume that this is what will happen in the real world. There is now a huge accumulation of such experiments. You can see them at any experimental conference.
There is no reason to believe this approach will work. Other things than actions, information and incentives matter. So mimicking actions, information and incentives in the lab does not guarantee people will behave the same way as in the real world.
This whole research programme has to stop. Then we can start designing experiments that will tell us about the real world. In particular, a first step is to think more carefully about the theory that makes any given laboratory experiment informative about the social phenomenon of interest. Such a theory is always needed. Psychology can be helpful here, because it opens the black box of the mind and tries to find how the subject perceives a given situation.
But my main point is: stop the mindless empiricism in economic lab experiments.
Thursday, 14 May 2015
Rules for writing software to run experiments
Most economic lab experiments are programmed in zTree, a language/server designed for the purpose by the formidable Urs Fischbacher. But zTree requires a special client, so it can't run over the web, and in other ways it is showing its age a bit. As a result, many people are trying to write new software for running experiments, both in the lab and on the web.
This list of rules comes from my experience writing betr, the behavioural economics toolkit for R. If you know R and want to try betr out, there is an introduction here. Some of these rules are things I got right... others I have learned by bitter experience.
Allow your users to write experiments in a natural idiom
Here is how I think about an ultimatum game.
1. When all subjects are ready in front of their computers,
2. the instructions are read out.
3. Then subjects are divided randomly into groups of 2.
4. In each group:
5. One person is randomly chosen to be the proposer. The other is the responder.
6. The proposer chooses an offer x between 0 and 10 pounds.
7. The responder sees this offer x and accepts or rejects it.
8. When all groups are done, profit is calculated (0 for both if the offer was rejected; otherwise the responder gets x and the proposer gets 10 minus x).
9. All subjects are shown their profit.
10. After a questionnaire and payment, subjects leave.
This is a clear sequence of instructions. Some are performed in parallel for different groups of subjects. There are places where people have to wait (When all subjects are ready... When all groups are done...).
When I program an ultimatum game, I want to write something like this. If you ask me to write a set of web pages, or create an experiment by object-oriented inheritance (you wot?) then it will be hard for me to understand how my program relates to my design.
This is the most important point. If you haven't done this, you haven't helped me – after all, given enough expertise I could write my program in any language out there. Make it easy for the experimenter!
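To make the shape concrete, here is that sequence as a plain R script with simulated subjects and random stand-ins for the choices. This is not betr code - in a real session, steps 6 and 7 would be real decisions and steps 1 and 8 would be waiting points - but it shows the top-to-bottom idiom I want to be able to write.

# Plain R simulation of the ten steps above (choices are random stand-ins).
set.seed(1)
subjects <- paste0("S", 1:8)

groups <- split(sample(subjects), rep(1:4, each = 2))     # step 3: random groups of 2
results <- lapply(groups, function(g) {                   # step 4: in each group...
  proposer  <- sample(g, 1)                               # step 5
  responder <- setdiff(g, proposer)
  x      <- sample(0:10, 1)                               # step 6: the proposer's offer
  accept <- sample(c(TRUE, FALSE), 1)                     # step 7: the responder's decision
  profit <- if (accept) c(10 - x, x) else c(0, 0)         # step 8
  setNames(profit, c(proposer, responder))
})
unlist(results)                                           # step 9: everyone's profit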
zTree gets this right ✔: experiments are sets of stages which run programs before displaying some forms to the user. betr gets it fairly right too, I think ✔.
Make the server persistent. Don't use a scripting language that exits after each request
Lots of people think "Oh, an experiment is just a website! I'll write it in PHP." This is a big mistake.
An experiment is not a collection of web pages which subjects can visit at will. It is a defined sequence of events, controlled by the experimenter, which the subjects must go through.
A typical setup for a website is: a web server such as Apache accepts HTTP requests from the clients. It responds to each request by starting a script, which gets info from the client - e.g. a form submission - passes some HTML back to the server, and exits. The server passes the HTML back to the client.
Doing things this way is fine for most websites, but it will cause you two problems.
First, all of your files will start like this (pseudocode):
// PAGE TWO
// ok where are we?
s = get_session_from_cookie()
// have we done page 1 yet?
if (s.page_one_complete == FALSE) redirect_to("pageone")
// great! let's go... wait, what if they've used the back button from page 3?
if (s.page_three_complete == TRUE) {
error = "You mustn't use the back button, you cheater!"
redirect_to("pagethree")
}
// ready now... wait, what if the other subject hasn't finished?
other_subject = s.other_subject()
if (other_subject.page_one_complete == FALSE) {
come_back_to = "pagetwo"
redirect_to("waiting_page")
}

Et cetera. When you have many stages in your experiment, this rapidly becomes an unreadable mess.
This problem can be mitigated by using a nice library. Unfortunately, the next problem is much nastier. Here's some pseudocode for a Dutch auction. The price goes down and the first person to click "buy" gets the object.
price = starting_price - (now() - start_time) * step
if (user_clicked_buy()) {
if (object_bought == FALSE) {
object_bought = TRUE;
user.profit = object_value - price;
}
}

Looks fine?
HOOONK. Race condition.
A race condition happens when two different bits of code execute simultaneously. Sometimes two users will click buy at almost the same time. These scripts will then execute in parallel. The first one will get to object_bought and check that it is FALSE. Shortly afterwards, the second one will get to object_bought and it will still be FALSE. Then, the first script will buy the object, setting object_bought to TRUE, but too late: the second script has bought it as well!
Now you have two users who bought the object; the rest of your code, which assumes only one user has bought the object, is broken; you're going to have to pay twice; and you've misled your participants – bye bye, publication.
The good news: this will happen only rarely. The bad news: rarely enough that you never catch it in testing.
You want a single server process that decides where your subjects are and what they can see, and that is running from the start to the end of the session.
zTree and betr both get this right. ✔
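To illustrate, here is a sketch in plain R (not betr's internals; the "inbox" below stands in for the network). Because one long-lived process handles messages strictly one at a time, the Dutch auction race above cannot happen.

# One long-lived process, handling one message at a time: no race.
start_time     <- Sys.time()
starting_price <- 10
step           <- 0.5      # price drop per second
object_value   <- 8
object_bought  <- FALSE
profit         <- list()

handle <- function(subject, action) {
  if (action == "buy" && !object_bought) {
    price <- starting_price - as.numeric(Sys.time() - start_time, units = "secs") * step
    object_bought     <<- TRUE
    profit[[subject]] <<- object_value - price
  }
}

# Two subjects click "buy" almost simultaneously, but their messages are still
# processed in sequence, so only the first one gets the object.
inbox <- list(list(subject = "S1", action = "buy"),
              list(subject = "S2", action = "buy"))
for (m in inbox) handle(m$subject, m$action)
profit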
Use the web browser as a client
I think this is a no-brainer. If you are running online experiments, it's the obvious choice – every device has a web browser. Even if you are in the lab, you can still run web browsers on the subject computers.
Web browsers display HTML. HTML is incredibly powerful and versatile. Video chat. UI toolkits. Angry Birds. You get all these possibilities for free, often in easy-to-use components that can be copied and pasted. You can even build HTML pages with a GUI.
Most experiments are simple: some instructions, an input form, a result. But sometimes experimenters want to do more, and if you are designing for participants recruited from the web, you want your UI to be as easy as possible because, unlike in the lab, you do not have a captive audience. So, make the power of HTML available. Of course, it should also be easy to whip up a quick questionnaire or form without knowing HTML.
betr ✔ zTree ✘.
Don't use HTTP for communication
Web browsers display pages using HTML, the HyperText Markup Language. They communicate with web servers using HTTP, the HyperText Transfer Protocol. You should not do the same, however!
The reason is: HTTP is driven by requests from the client, to which the server responds. But often, experiments need to be pushed forward from the server. For example, think of a public goods game with a group size of four. Until there are four people, subjects see a waiting page. When the fourth subject joins, you want all four subjects to move forward into the first round. The server must push clients 1, 2 and 3 forward.
If you try to do this with HTTP, you will have some sort of polling where clients regularly send requests to the server, which responds "keep waiting" until it is ready. This is a horrible kludge. You are reliant on your clients to keep polling (but maybe some guy closed the browser window and you don't know that!) You have to manually work out when everyone has moved on, which means keeping track of state. Argh.
Another example: suppose you have a market. The server takes bids and offers and calculates prices. Whenever the price changes, you want all clients to update. With HTTP, this is going to be impossible.
Luckily, modern browsers have a new technology called websockets which allow for two way communication between client and server. This is probably the way to go. Load a basic HTML page which connects to your server using a websocket. Then send new HTML pages and other updates via the websocket.
zTree obviously gets this right by not using HTTP ✔. Sadly, betr gets it wrong ✘. Oliver Kirchcamp pointed this out to me. I'm thinking about how to fix it in future versions.
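For what it's worth, here is a rough sketch of what the websocket approach could look like in R, using the httpuv package. The package is real, but treat the details as an assumption and check its current API; this is not betr code.

# Sketch: one httpuv server serves a bare page, keeps a websocket open to each
# client, and can push new content to everyone whenever it likes.
library(httpuv)

clients <- list()

page <- "<html><body><div id='main'>Please wait...</div>
<script>
  var ws = new WebSocket('ws://' + location.host);
  ws.onmessage = function (e) { document.getElementById('main').innerHTML = e.data; };
</script></body></html>"

app <- list(
  call = function(req) {
    list(status = 200L, headers = list("Content-Type" = "text/html"), body = page)
  },
  onWSOpen = function(ws) {
    clients[[length(clients) + 1]] <<- ws
    ws$onMessage(function(binary, message) {
      # a subject's form submission arrives here
    })
    if (length(clients) == 4) {
      # the fourth subject has arrived: push everyone into round 1
      for (cl in clients) cl$send("<h1>Round 1</h1>")
    }
  }
)

runServer("0.0.0.0", 8000, app)   # blocks; the whole session runs inside this process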
Make testing and debugging easy
Testing and debugging zTree is not much fun. It involves manually opening one window for each client, and running through the experiment. Then doing it again. And again. And again.
But testing is really important. If you screw up in a real session, you instantly waste maybe £500 in experimental payouts. Worse still, if your experiment didn't do what you said it was going to, your subjects may not believe experimental instructions in future.
And, you typically need to test a lot, because you want to test distributions of outcomes. For example, suppose in your design, subjects are randomly rematched. You want to make sure that subjects are never rematched with the same partners. This requires testing not once, but many times!
Failure to test leads to problems. Here is a zTree code snippet that is around on the internet, e.g. here. It is supposed to pick a random whole number uniformly between 1 and max.

rand = round(random()*(max-1), 1) + 1

Spot the bug? If I read zTree's round() right, the draw is rounded to the nearest integer, so the endpoints 1 and max come up only about half as often as the numbers in between: the distribution is not uniform at all. That is exactly the kind of error that slips through without systematic testing.
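You can convince yourself by translating the same formula into R (assuming round-to-nearest, as above):

# 1 and max should be as likely as everything else - but they are not.
max   <- 5
draws <- round(runif(100000) * (max - 1)) + 1
table(draws)   # 1 and 5 come up roughly half as often as 2, 3 and 4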
A good way to test is to create "robots" that can play your experiment automatically. This requires some clever design. Nice features would include:
- printing out the HTML pages sent to subjects
- being able to mix robots and real users (so you can change one user's actions while keeping the others' the same)
- creating robots easily from records of previous play
betr gets a partial ✔. There is a nice replay() function which can do a lot of this stuff. zTree ✘.
Experiment sessions should be idempotent
This is related to the previous point. It also helps for crash recovery. Idempotency means that, given the same inputs, an experiment should produce the same results. Wait, don't we want to do a lot of randomization? Well, yes, but we also want to be able to recover from crashes, and to replay experiments exactly, by replaying commands from the clients. Among other things, this means that everything your clients do should be stored on disk so you can replay it. You should also take care to store the seeds used for random numbers.
Both betr ✔ and zTree ✔ encourage this. zTree gets a bigger tick because (I believe) it enforces it.
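A minimal sketch of the idea in R (my own sketch, not betr's actual mechanism): seed the random number generator from a stored seed and append every client command to a log, so the whole session can be replayed deterministically.

# Sketch only: deterministic replay from a stored seed plus a command log.
seed     <- 20150514
log_file <- "session_log.csv"
set.seed(seed)

record <- function(subject, command) {
  new_file <- !file.exists(log_file)
  write.table(data.frame(subject = subject, command = command),
              log_file, append = !new_file, sep = ",",
              col.names = new_file, row.names = FALSE)
}

replay <- function(run_command) {       # run_command: whatever processes a client command
  set.seed(seed)                        # same seed => same random draws
  log <- read.csv(log_file, stringsAsFactors = FALSE)
  for (i in seq_len(nrow(log))) run_command(log$subject[i], log$command[i])
}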
Let experiments be written in an existing language, don't create your own
This is the same principle as "use the web browser as a client". It's about giving your experimenters tools. zTree can do a lot, but you can't (for example) define your own functions. So, if you want to repeat a bit of code... you have to copy and paste it. A good general purpose language gives the user access to many many libraries that do useful things.
Here betr ✔ wins against zTree ✘. On the face of it, R is a strange choice of language to write a web platform in! But it has a lot of power, and many academics use it already. Python would be another good choice. (PHP would not.)
It must be easy to share data with clients
An awesome feature of zTree is that if you write a piece of user interface code which uses a variable X, and X changes on the server, that is automatically reflected on all clients' screens. Doing this is hard, but very useful: think about displaying prices in a market experiment.
zTree ✔ betr ✘.
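One way a new platform might provide this, sketched in R (my sketch, not how zTree does it): route every change to shared state through a setter that broadcasts the new value to all connected clients, for instance over the websockets from the sketch above.

# Sketch: every change to shared state is pushed to every client.
library(jsonlite)

shared <- new.env()

set_shared <- function(name, value) {
  assign(name, value, envir = shared)
  msg <- toJSON(list(variable = name, value = value), auto_unbox = TRUE)
  for (cl in clients) cl$send(msg)   # 'clients' as in the websocket sketch above
}

# set_shared("price", 42)   # every subject's screen can now show the new price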
There are many alternatives to zTree out there. (Some of the most interesting: oTree, moblab, sophie, boxs). I expect that soon, some of them will start to get traction and be used more widely. I look forward to a wider choice of powerful, easy software to run experiments!
Monday, 27 April 2015
A field experiment at LHR
At Heathrow Terminal 2, there is an escalator and a lift to take you from the Underground up to departures. The authorities have put a sign up:
TIME ON ESCALATOR 3 MINUTES
TIME ON LIFT 58 SECONDS
TAKE THE QUICK ROUTE – TAKE THE LIFT
Why has this happened?
- Standard economics: the sign was a mistake. People already choose the optimal route. (Public choice theory: the sign was not a mistake but a deliberate conspiracy by the elevator company to wear out the lift and make money from replacements.)
- Social preferences: the sign is a nudge to counter travellers' "lift aversion".
- Social norms: there is a norm of taking the escalator. People really want to take the lift, but they are afraid of what others will think of them.
- Social heuristics: people mistakenly assume the escalator is faster, as it usually is in their experience. The sign corrects this.
I first read the term "social heuristics" in this paper.
Sunday, 26 April 2015
An experiment on overconfidence
A nice experiment was presented at NIBS last week – Zahra Murad, "Confidence Snowballing in Tournaments". (No paper available yet.)
The talk started from a psychological idea: people are sometimes overconfident about their own performance, and get more overconfident after a few successes. This might explain, say, the overweening confidence of CEOs and "Masters of the Universe" bankers.
In the experiment, subjects competed, in pairs, on a task which could be either easy or difficult. Then winners were matched with other winners and played again – like a football tournament. (Losers were matched with other losers.) The winners of the second round played each other again, and so on.
Before each round, subjects had to bet on their own performance. From this, we can learn what chance they gave themselves of winning the round.
The beauty of this design is that having won a round tells you nothing about your chance of winning the next one, because you will be playing someone else who has also just won! So, a reasonable person would not get more confident after winning a round.*
In fact, on easy tasks, winners did get more confident. On difficult tasks, losers got less confident, also wrongly and for much the same reason.
What's good about this experiment?
- It is real "behavioural economics"
- It uses the lab to create an elegant simplification
In experiments, just as in formal models, "it's realistic" is a terrible reason to add a feature. These guys put in only what was needed.
- It makes a nice parable...
Greek parables, like that of Icarus flying too close to the sun, have survived the centuries because they tell us about recurring patterns, helping us to recognize them in life and history. The Icarus myth's pattern is: hubris, insane arrogance, leads to nemesis, divine revenge. Hubris and nemesis are still all around us (hullo neo-cons! hullo Eurozone!) You would not expect them always to happen – imagine a foolish political scientist estimating the per cent prevalence of hubris in international relations – but it is useful to know that they can.
Experiments can be modern parables. Zimbardo's prison guards and Milgram's torturers have entered the folklore. They don't always apply, but they are things that can happen. (Hence "existence proof": an experiment shows, irrespective of external validity, that something has happened at least once.)
- ... and fleshes it out
I see a lot of experiments and think either "this was obvious", or "I don't know what behaviour here tells us about the real world". This work passes these hurdles.
* Nerd note: there could be some set of priors for which a Bayesian updater would get more confident. So, increasing confidence is not necessarily irrational in the technical sense, just "unreasonable" in a common-sense way.
Friday, 10 April 2015
I hate your stupid paper: Al Roth
Came across this gem while I was doing the prediction market experiment for replications - a cool idea by the way.
Organ allocation policy and the decision to donate
Abstract
Organ donations from deceased donors provide the majority of transplanted organs in the United States, and one deceased donor can save numerous lives by providing multiple organs.... We study in the laboratory an experimental game modeled on the decision to register as an organ donor and investigate how changes in the management of organ waiting lists might impact donations.
From the paper:
This paper investigates incentives to donate by means of an experimental game that models the decision to register as an organ donor. The main manipulation is the introduction of a priority rule, inspired by the Singapore and Israeli legislation, that assigns available organs first to those who had also registered to be organ donors. ...
Results from our laboratory study suggest that providing priority on waiting lists for registered donors has a significant positive impact on donation. ...
The instructions to subjects were stated in abstract terms, not in terms of organs. Subjects started each round with one “A unit” (which can be thought of as a brain) and two “B units” (representing kidneys). ...
Whenever a subject’s A unit failed, he lost $1 and the round ended for him (representing brain death)...

At this point, I wished fervently for my A unit to fail, representing brain death.
For any non-specialists out there who don't see the problem... fuck it: for the tiny proportion of non-specialists who aren't already laughing at us like baboons.
Organ donation is a complex and unique decision. It involves the choice to have part of your own body cut out, when you die, in the hope of saving someone else's life.
Now it is perfectly reasonable, though counter-intuitive, to model this as just another cost-benefit decision (perhaps including some "altruistic utility"). The sainted Gary Becker did this for crime and the family - both areas not previously thought of as amenable to cost-benefit analysis - and spawned two whole new fields.
And it is also perfectly reasonable to say "No! Organ donation is different. Cost-benefit analysis just won't apply. I don't trust this economic model."
Here's what is not reasonable: to distrust the economic model; and to try to learn what will really happen, by running a laboratory experiment ... which implements the economic model.
Analogy: suppose I have a simple billiard-ball theory of planetary motion. To predict how planets interact, I build a big billiards table with a lot of billiard balls on strings representing the sun, the earth, Mars and so on. I spin the balls, take measurements and write down my predictions. Now you decide my theory is all wrong. In fact, it doesn't even work for the billiard table! You whack the red ball round on its string: it ends up totally not where my theory predicts! Falsification! Karl Popper's ghost applauds.
"Yes," you tell me, "and now just measure the position of that red ball. I want to know where Mars will be next week."
You see the problem? My billiard-ball theory is wrong. But that theory gave the only reason to think that the billiard table could predict the planets. Without the theory, what are we left with? That's right, Perky: balls. A load of useless balls.
Now there are many lab experiments on decision-making that would be relevant to organ donation. We can test theoretical models of, say, altruism and upstream reciprocity. Then, if we reckoned that the theory had captured all the relevant aspects of behaviour, we could apply it to organ donation; make some predictions; maybe try out a policy experiment. The social science lab is useful for this, because you can get "altruism" and "reciprocity" into the lab in a meaningful way. But there is no meaningful way to get "organ donation" into the lab, short of a supply of Romanian orphans and a surprisingly relaxed ethics committee. Just having options with analogous payoffs does not cut it.
The authors of course know this. From the conclusion:
Care must always be taken in extrapolating experimental results to complex environments outside the lab, and caution is particularly called for when the lab setting abstracts away from important but intangible issues, as we do here.

And perhaps the paper's results can in fact tell us something deep about how institutions can tap upstream reciprocity - but that's not what they talk about. Nor do they deal with this head on. (For example, by adding: "It follows that this very interesting experiment tells us nothing about actual organ donation. We were kidding about the title!") Instead, the introduction uses that weasel word, "suggest".
Roll up folks, for the new experimental methodology! Finally, unbiased causal identification in the social sciences! Drumroll. Spotlight. "Results suggest..." Parturient montes, nascetur ridiculus mus.* If I want suggestiveness, I'll read ethnography.
Here is why this gets my goat. A graduate student once proposed an experiment on global warming. The next century would be a game with 100 rounds. In each round there was a small chance of a "climate catastrophe" if the players didn't implement "mitigation". Mitigation cost a few cents, climate catastrophe cost about twenty Euros. From this experiment it was hoped to make behavioural predictions about, uuuuh, the future of the planet. Under different policy regimes.
(And - quickly, in one breath - because it was in the lab, the policy regimes were randomly and exogenously assigned. Yeah, thank God there's no endogeneity! That was such a problem with STUDYING THE REAL WORLD.**)
So I stuck my hand up and said that this was nuts. But now, some other young researcher, planning such an absurdity, can say: "Well, Al Roth did it for brain transplants!"
[S]ubjects started each round with one “A unit” (which can be thought of as a brain) ...

Seriously, how the fuck can people write this shit with a straight face?
* Translated from the Latin, this means "Fuck you and Google it yourself."
** As our authors put it:
The difficulty of performing comparable experiments or comparisons outside of the lab, however, makes it sensible to look to simple experiments to generate hypotheses about organ donation policies.
Tuesday, 9 April 2013
Good experimental designs
Habyarimana, Humphreys, Posner and Weinstein wrote a great article with the title “Why Does Ethnic Diversity Undermine Public Goods Provision?”, which they turned into a great book, Coethnicity. The research they report was a set of experiments in a slum of Kampala in Uganda.
The standard way experimentalists investigate public goods – say, schooling or sanitation – is with, guess what, a public goods game. A public goods game goes like this: there are four of you, and you each have, say, £10. You can each put some or all of your money into a common pot. Money in the pot is multiplied by 1.5 and then shared out equally. Selfish people wouldn't put money in the pot, but if everyone does so, then you all do better. This is a bare bones representation of a public good. Why do experimenters use this paradigm? Well, it's obvious. We're interested in public goods, and a public goods game is like a miniature public good.
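To make the incentives concrete, here is the arithmetic in R (the numbers just follow the description above):

# 4 players, £10 each; the pot is multiplied by 1.5 and shared equally.
n <- 4; endowment <- 10; multiplier <- 1.5

payoff <- function(mine, others) {
  pot <- mine + sum(others)
  (endowment - mine) + multiplier * pot / n
}

payoff(0,  c(0, 0, 0))      # everyone selfish: 10 each
payoff(10, c(10, 10, 10))   # everyone contributes: 15 each
payoff(10, c(0, 0, 0))      # contribute alone: 3.75 - each £1 you put in returns only £0.375 to you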
One of the surprising things about HHPW's design is this. They get their subjects to play dictator games (where one person chooses how much to give to another), games where they have to find another person in the slum, and experiments where they must work with other subjects to solve a puzzle. But they never actually implement a public goods game. Instead, they use a set of simple experiments, not in the context of public goods, to investigate specific psychological and social mechanisms that might lead to underprovision of public goods. For example, are people more altruistic to their coethnics? The Dictator Game will tell us. Or, do people find it easier to communicate with coethnics? Try the puzzle-solving activity.
I think this makes HHPW an example of good experimental research design. They have thought hard about the link between experiment and explanandum. Not that public goods games cannot be useful, but we should not just reach in our designs for things that “look like” or “represent” what we are investigating. Instead, the link must always be based in theory.
Saturday, 2 March 2013
Guaranteed entertainment
New paper from Christian Grose, Neil Malhotra and Robert van Houweling:
"It employed a within-subjects design in which the subjects of the experiment, U.S. senators, received one letter from a constituent taking a position in favor of immigration reform; and a second letter from a different constituent opposing immigration reform. By comparing how senators responded to these two letters we can identify the frequency with which they tailor their messages to constituents with differing views on this issue, as well as the form their targeted explanations take...."
"It employed a within-subjects design in which the subjects of the experiment, U.S. senators, received one letter from a constituent taking a position in favor of immigration reform; and a second letter from a different constituent opposing immigration reform. By comparing how senators responded to these two letters we can identify the frequency with which they tailor their messages to constituents with differing views on this issue, as well as the form their targeted explanations take...."
Sunday, 16 December 2012
Ethics committees
Every piece of university research involving humans has to go through an ethical approval process, typically handled by a committee. History shows plenty of examples of horrendous research on humans. So surely ethics committees must be good things? Mmm... I am not convinced.

Ethics committees check every piece of research for problems before it starts. This is not the only approach to prevention. Some transactions on your local high street may be deceptive, but trading standards bodies do not check every transaction before allowing it. Closer to home, there is a danger of fraudulent research, but we do not check every piece of research for fraudulence. Instead we deal with problems by after-the-event sanctions and trust them to have incentive effects.
Scholars studying political oversight of bureaucracies talk about “police patrols” versus “fire alarms”. Police patrols check things ex ante; fire alarms go off only when there is a problem – for example, when a constituent or lobbyist raises a complaint. Police patrols have their advantages, but they can be massively expensive and cumbersome. Ethics committees are police patrols.
As well as imposing transaction costs, there is a danger that ethics committees go beyond their remit and try to control what research gets done. Ought their terms of reference not stop this? Perhaps in theory. In practice, the ethical risks of a particular experiment must often be weighed against the benefit of the research. But this is a backdoor, which allows committees to consider what is good and bad research.

The (nice and helpful) people on my last panel assured me that many researchers found that discussions with them improved their research design. Undoubtedly that was true, but it is just the problem. If you are having that kind of discussion with a body which can allow or ban your research, then you have lost the ability to judge for yourself what research is worth doing. The benefits accruing to perhaps 99% of researchers will be outweighed by the loss from the 1% with an innovative idea that, like many such ideas, meets resistance from the status quo.
In Germany, there are essentially no ethical review requirements, at least for social science research. I strongly suspect that a comparison of German with UK or US research would find no statistically significant difference in the level of ethics violations. Ironically, I can find no evidence base for the positive effect of ethical review on the ethical quality of social science (though there seems to be plenty on its effects on speed of research etc.)

As I have brought up the German case, you may now wish to mention Nazi medical “research” and Josef Mengele, perhaps the wickedest pseudo-scientist in history. Be careful with that argument. Do you really think that the problem with Mengele was insufficient oversight by an ethics committee? The Nazis would have controlled the committee too, because they had taken over German universities. One reason they were able to do so, I suspect, was that German academia was highly centralized and authoritarian. So, if you want to protect academia against the effects of tyranny, make sure it is decentralized and free. Ethical review processes may risk doing the reverse.
(PS: the new Essex Social Science Experimental Laboratory will scrupulously follow the University’s ethical guidelines, and all research conducted in it will have passed ethical review, as well as the Lab’s own strict rules banning deception. These are just my personal opinions.)
Sunday, 25 November 2012
Voting
The blogosphere debates the rationality of voting. (As usual I am behind the curve here.) Andrew Gelman:
In swing states (or for close non-presidential elections), though, it’s a different story. Aaron, Nate, and I have estimated the probability of your vote being decisive in a swing state as being in the range 1 in a million to 1 in 10 million. Low, but not zero, and Aaron, Noah, and I argue that it can make sense to vote because of the social benefits that a voter might feel arise from his or her preferred candidate winning.

Phil Arena:
First, being pivotal to the outcome of your state is not the same as being pivotal to the outcome of a presidential election.

Kindred Winecoff:
Even still Arena is giving Gelman's argument more credit than it deserves. In fact, Gelman doesn't have an argument. He simply pretends as if there was a utility function out there such that it would make sense for people to vote at 1/10,000,000 odds (those are only the swing state voters, not the median or modal or otherwise typical voter). So far as I know no such utility function has ever been modeled or tested against peoples' actual subjective utilities, and Arena points out numerous analogous situations in which folks generally behave differently -- getting in a car crash, getting shot while on campus, etc. -- despite similar or better (worse?) odds.

Actually, David Myatt has a paper showing that, in a plausible model of voting, one's probability of pivotality is 1/N, where N is the number of voters, and that for some standard utility-based models of altruism, that should be enough to get you to vote (because you are providing a benefit to N people). Warning: the paper is not as easy to read as a blog post. As I understand it, David is not arguing that this kind of instrumental rationality does explain why people vote; he is arguing that it could explain it, and that therefore two critics of rational choice theory from the 1990s are mistaken.
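A back-of-the-envelope version of the argument, in my notation rather than Myatt's: with N voters your chance of being pivotal is about 1/N, but the per-person benefit b of your preferred outcome accrues to all N people, weighted by your altruism α. The expected altruistic gain from voting is therefore roughly

(1/N) × (α × b × N) = α × b,

which does not shrink as the electorate grows. So it can be rational to vote whenever α × b exceeds the cost of casting the ballot.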
(Image source: mociun.tumblr.com via jessica on Pinterest)
I remember the 90s!
Relatedly, at ESA Tucson I saw Ulrike Malmendier present a field experiment - not currently available online - on why people vote, arguing that it is related to (1) social pressure and (2) the cost of lying. This seems a more hopeful approach than constructing game-theoretic arguments alone - though, NB, the paper combined data with theory to estimate parameters of a model, rather than just directly estimating vote probabilities.