
Monday, 24 July 2017

New old paper just out


A very old paper of mine and Martin Leroch's has just been published in Homo Oeconomicus. (Ungated, older version.) The topic is reciprocity between groups. Here's a quote from the intro; I've bolded the key point:
At first sight it appears straightforward that people take revenge against entire groups, not only against direct individual perpetrators, even in routine social and economic life. For instance, consumers buy fewer products from countries which they see as politically antagonistic (Klein and Ettenson 1999, Leong et al. 2008). Further, on days after terrorist bombings in Israel, Jewish (Arab) judges become more likely to favor Jewish (Arab) plaintiffs in their decisions, and Israeli Arabs face higher prices for used cars (Shayo and Zussman 2011; Zussman 2012). On a political level, for instance, Keynes (1922) perceived the Treaty of Paris’ devastation of the German economy as an act of revenge, and quoted Thomas Hardy’s play The Dynasts: ‘‘Nought remains/But vindictiveness here amid the strong,/And there amid the weak an impotent rage.’’ In its most extreme case, revenge against groups may trigger violent intergroup conflict. After an argument between an Indian Dalit and an upper caste farmer, upper caste villagers attacked 80 Dalit families (Hoff et al. 2011). In Atlanta, 1906, after newspaper allegations of black attacks on white women, a group of white people rioted, killing 25 black men (Bauerlein 2001). In both cases, innocent people were made to suffer for the real or supposed crimes of others. Many field studies of intergroup violence report similar tit-for-tat processes, with harm to members of one group being avenged by attacks on previously uninvolved coethnics of the original attackers (Horowitz 1985, 2001; Chagnon 1988).

We started thinking about this back in 2009; I just looked up the email:
Reciprocity towards groups; that's a pretty important idea if it holds, right? (Think about wars, racial discrimination; patriotism...) I don't know if there's anything done in the area. But perhaps it's one for another experiment.
As well as seeming important, it turned out there was basically nothing out there in economics, and only a few papers in psychology.

We ran not one but several experiments, polishing the treatment and figuring out "what works". (There are issues of multiple testing here, but I'll ignore that.)

Our final experiment had some interesting results, and we sent it off to a top journal. It was rejected. Then we sent it off to another journal and... it was rejected. And another, and another.... I was annoyed by this because I felt that this was an important topic that nobody had written about! After all, Chen and Li (2009) had got into the AER by doing a basic group identity experiment, the same thing psychologists had done for decades, and adding incentives.

Yeah, I was naive! There are lots of reasons for the paper not doing well, some good, some bad:
  • The design was complex and hard to explain. We spent ages on multiple rewrites of our design section to make it clear what we had done.
  • In addition, the design and methodology weren't perfect - we were both quite inexperienced. There are things I'd do differently. Of course, reviewers picked these up.
  • Our topic fell between stools: it was an economic experiment on a fundamentally political topic. It is a sad reality that interdisciplinary work is not easy to publish.
  • Relatedly: referees and academics are conservative. It is easier to answer a question they already consider important, than to introduce a new question and persuade them it is important. That's probably reasonable. The dominant themes of any literature are dominant for a reason.
  • Chen and Li's AER paper did what I have since learned is important - it created a building block. It deserves its placement. I still think we were out there doing something quite new, but sometimes you have to lead the academic horse to water.
Anyway, for all that, I still think that intergroup dynamics are under-researched, given that they may be involved in the devastating phenomena we touch on in our intro. So, I'm glad it's finally out!

Here's a picture of the basic result, which I'm sure has been up on this blog before. The slope of the solid line shows subjects' "upstream reciprocity" towards a fellow group member of their most recent opponent in a public goods game. The dashed line is the control, showing reciprocity towards someone in a different group.





Monday, 20 April 2015

Behavioural should not be behaviourist


What differentiates behavioural economics from psychology? A common answer is "we are interested in behaviour".

For example, psychologists studying group identity might use a questionnaire measure, asking subjects "How proud are you to be in your group?" Economists, instead, would test whether they gave more money to their in-group. "Real choices have costs," goes the slogan.

Costly choices are a great experimental tool. But I had an epiphany recently, when a psychologist remarked that economists are using a theory psychologists gave up fifty years ago – behaviourism, the idea that you could ignore the mind and just study how stimuli affected behaviour.

Only studying behaviour makes sense as long as the mind has no "state". If stimulus X always causes behaviour Y, then we need only study Y. But of course the mind has state: for example, our choices depend on our moods.
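To put the same point in code: here is a minimal sketch, entirely my own illustration, of why stimulus-response data underdetermine a theory once the mind has hidden state.

```python
# A stateless ("behaviourist") account: the same stimulus always
# produces the same behaviour, so recording behaviour tells you all.
def stateless_response(stimulus):
    return "cooperate" if stimulus == "community game" else "defect"

# A stateful account: behaviour also depends on an unobserved mood,
# so identical stimuli can produce different behaviour.
class StatefulAgent:
    def __init__(self):
        self.mood = "neutral"   # hidden state the experimenter never sees

    def respond(self, stimulus):
        if self.mood == "irritable":
            return "defect"     # same stimulus, different behaviour
        return stateless_response(stimulus)

agent = StatefulAgent()
print(agent.respond("community game"))  # cooperate
agent.mood = "irritable"                # state changes between trials
print(agent.respond("community game"))  # defect
```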

We need to study these states directly, so as to improve our theories by clarifying the links in their causal chains. Why are subjects more selfish if the game they are playing is labelled the "Wall Street game" rather than the "community game"? Maybe the market-oriented phrase puts subjects into a materially self-interested mindset. OK, so when does that happen in the real world? For an answer, we will need to measure this mindset directly and learn how it is induced.

This is doable. Yes, people may lie, or misinterpret their own behaviour; but psychologists have been devising valid, reliable questionnaire measures for decades. (fMRI data seems more easily accepted by economists than questionnaires, perhaps because it seems more like "hard" science – or because it is impressively expensive?) We can use these measures, or create similar ones of our own.

Without good theory, experimental economics risks becoming a pile of unorganized, uninterpretable results. For good theory, we need to open and study the black box of the mind.


Thursday, 16 April 2015

External validity in experimental economics


Lab experimenters worry a lot about external validity. OK, we say, it happened in the lab, but would we see it in the real world? Answering this question, by connecting the lab and the field, is a good way to get published.

Most of these papers look at individual-level external validity. For example, were the same people who were trustworthy in a "trust game" lab experiment also more likely to (e.g.) repay microfinance loans? Were people who cheated in an "honesty game" (throwing a die and reporting the result, where some results paid more) also more likely to cheat by riding public transport without paying? *
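For concreteness, here is how cheating is usually inferred in such die tasks: no individual can be proven to have lied, but a group that reports the high-paying outcome too often must contain cheats. A minimal sketch with made-up numbers - not the design of the footnoted paper:

```python
from scipy.stats import binomtest

# Made-up reports from 20 subjects in a die task where reporting
# a "6" pays the most.
reports = [6, 6, 3, 6, 5, 6, 6, 1, 6, 6, 2, 6, 6, 4, 6, 6, 6, 2, 6, 6]
n_sixes = sum(r == 6 for r in reports)

# Honest dice produce sixes at rate 1/6, so an excess of sixes
# reveals cheating at the group level.
result = binomtest(n_sixes, n=len(reports), p=1/6, alternative="greater")
print(f"{n_sixes}/{len(reports)} sixes; p = {result.pvalue:.2g} if honest")
```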

Policy-makers, though, are probably more interested in the external validity of treatments. For example, in the lab, people may be more cooperative if others can observe their behaviour. Will the same thing happen if we (e.g.) give people badges when they make a charitable donation?

Individual-level and treatment-level validity need not be related. Perhaps the same people are generous in the lab and in real life; but a particular treatment only works in the special atmosphere of the laboratory. Conversely, even if lab behaviour does not reflect a stable underlying trait of individuals, a policy intervention may still affect it. Both questions are interesting but it is important to distinguish them. The real test of external validity is probably: does your policy intervention work?


* Marie Claire Villeval's paper, not online yet.

Update: Ro'i Zultan points me at Vernon Smith's 1981 paper:
What parallelism hypothesizes in micro-economy is that if institutions make a difference, it is because the rules make a difference, and if the rules make a difference, it is because incentives make a difference.

Friday, 10 April 2015

I hate your stupid paper: Al Roth


[This is a new occasional series in which I tell you that your paper is stupid, you are stupid, and I hate you and your stupid paper. My inaugural paper is by... Judd Kessler and Nobel Prize winner and father of experimental economics, Al Roth!!!! Warning: lengthy, for specialists, contains swearing and rhetorical exaggeration.]

Came across this gem while doing the prediction market experiment for replications - a cool idea, by the way.
Organ allocation policy and the decision to donate
Abstract

Organ donations from deceased donors provide the majority of transplanted organs in the United States, and one deceased donor can save numerous lives by providing multiple organs.... We study in the laboratory an experimental game modeled on the decision to register as an organ donor and investigate how changes in the management of organ waiting lists might impact donations. 
From the paper:

This paper investigates incentives to donate by means of an experimental game that models the decision to register as an organ donor. The main manipulation is the introduction of a priority rule, inspired by the Singapore and Israeli legislation, that assigns available organs first to those who had also registered to be organ donors. ...

Results from our laboratory study suggest that providing priority on waiting lists for registered donors has a significant positive impact on donation. ...
The instructions to subjects were stated in abstract terms, not in terms of organs. Subjects started each round with one “A unit” (which can be thought of as a brain) and two “B units” (representing kidneys). ...
Whenever a subject’s A unit failed, he lost $1 and the round ended for him (representing brain death)...
At this point, I wished fervently for my A unit to fail, representing brain death.

For any non-specialists out there who don't see the problem... fuck it: for the tiny proportion of non-specialists who aren't already laughing at us like baboons.

Organ donation is a complex and unique decision. It involves the choice to have part of your own body cut out, when you die, in the hope of saving someone else's life.

Now it is perfectly reasonable, though counter-intuitive, to model this as just another cost-benefit decision (perhaps including some "altruistic utility"). The sainted Gary Becker did this for crime and the family - both areas not previously thought of as amenable to cost-benefit analysis - and spawned two whole new fields.

And it is also perfectly reasonable to say "No! Organ donation is different. Cost-benefit analysis just won't apply. I don't trust this economic model."

Here's what is not reasonable: to distrust the economic model; and to try to learn what will really happen, by running a laboratory experiment ... which implements the economic model.

Analogy: suppose I have a simple billiard-ball theory of planetary motion. To predict how planets interact, I build a big billiards table with a lot of billiard balls on strings representing the sun, the earth, Mars and so on. I spin the balls, take measurements and write down my predictions. Now you decide my theory is all wrong. In fact, it doesn't even work for the billiard table! You whack the red ball round on its string: it ends up totally not where my theory predicts! Falsification! Karl Popper's ghost applauds.

"Yes," you tell me, "and now just measure the position of that red ball. I want to know where Mars will be next week."

You see the problem? My billiard-ball theory is wrong. But that theory gave the only reason to think that the billiard table could predict the planets. Without the theory, what are we left with? That's right, Perky: balls. A load of useless balls.

Now there are many lab experiments on decision-making that would be relevant to organ donation. We can test theoretical models of, say, altruism and upstream reciprocity. Then, if we reckoned that the theory had captured all the relevant aspects of behaviour, we could apply it to organ donation; make some predictions; maybe try out a policy experiment. The social science lab is useful for this, because you can get "altruism" and "reciprocity" into the lab in a meaningful way. But there is no meaningful way to get "organ donation" into the lab, short of a supply of Romanian orphans and a surprisingly relaxed ethics committee. Just having options with analogous payoffs does not cut it.

The authors of course know this. From the conclusion:
Care must always be taken in extrapolating experimental results to complex environments outside the lab, and caution is particularly called for when the lab setting abstracts away from important but intangible issues, as we do here.
And perhaps the paper's results can in fact tell us something deep about how institutions can tap upstream reciprocity - but that's not what they talk about. Nor do they deal with this head on. (For example, by adding: "It follows that this very interesting experiment tells us nothing about actual organ donation. We were kidding about the title!")  Instead, the introduction uses that weasel word, "suggest".

Roll up, folks, for the new experimental methodology! Finally, unbiased causal identification in the social sciences! Drumroll. Spotlight. "Results suggest..." Parturient montes, nascetur ridiculus mus.* If I want suggestiveness, I'll read ethnography.

Here is why this gets my goat. A graduate student once proposed an experiment on global warming. The next century would be a game with 100 rounds. In each round there was a small chance of a "climate catastrophe" if the players didn't implement "mitigation". Mitigation cost a few cents; climate catastrophe cost about twenty euros. From this experiment it was hoped to make behavioural predictions about, uuuuh, the future of the planet. Under different policy regimes.
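To spell out what was being proposed, here is a minimal sketch of the game. Every number is my guess at the design, as is the assumption that a catastrophe ends the game:

```python
import random

# Hypothetical parameters -- my reconstruction of the proposed design,
# not the student's actual numbers.
ROUNDS = 100                # one round per "year" of the next century
MITIGATION_COST = 0.05      # a few cents per round
CATASTROPHE_COST = 20.00    # roughly twenty euros
P_CATASTROPHE = 0.02        # small per-round chance without mitigation

def century(mitigate, rng):
    """Total cost of one 100-round game under a fixed strategy."""
    cost = 0.0
    for _ in range(ROUNDS):
        if mitigate:
            cost += MITIGATION_COST
        elif rng.random() < P_CATASTROPHE:
            return cost + CATASTROPHE_COST  # catastrophe ends the game
    return cost

rng = random.Random(0)
# Expected cost of always mitigating: 100 * 0.05 = 5 euros. Expected
# cost of never mitigating: about 0.87 * 20 = 17 euros, since the
# chance of surviving all 100 rounds is 0.98^100, roughly 0.13.
print(century(True, rng), century(False, rng))
```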

(And - quickly, in one breath - because it was in the lab, the policy regimes were randomly and exogenously assigned. Yeah, thank God there's no endogeneity! That was such a problem with STUDYING THE REAL WORLD.**)

So I stuck my hand up and said that this was nuts. But now, some other young researcher, planning such an absurdity, can say: "Well, Al Roth did it for brain transplants!"
[S]ubjects started each round with one “A unit” (which can be thought of as a brain) ...
 Seriously, how the fuck can people write this shit with a straight face?



* Translated from the Latin, this means "Fuck you and Google it yourself."

** As our authors put it:
The difficulty of performing comparable experiments or comparisons outside of the lab, however, makes it sensible to look to simple experiments to generate hypotheses about organ donation policies.

Friday, 6 March 2015

Things I learned at the Boulder workshop on statistical genetics

  • It is quite straightforward to "edit" a mouse's genome during the critical half hour when the (unborn) mouse is unicellular. Theoretically this would be possible for humans too.
From chatting to someone at lunch. Scary!
  • The cool guys at Genes for Good are collecting data from the general public for scientific use, in exchange for a free reading of your genome (typical cost at 23andme: $100). They'll tell you about your genetic ancestry, and give you your data. You will need a Facebook account and to live in the US. 
The US makes it hard to give people information about their genomes for some reason (regulation? liability law?) but you can probably get this info from somewhere outside the US.
  • GWAS (genome-wide association studies) have actually been very successful in discovering SNPs* associated with diseases and other traits.
There was a period when things didn't look good, and this was reported in the mainstream media. Basically, it turns out that many traits are caused by lots of genes with small effects, which means you need a very large sample size to detect them (see the back-of-envelope calculation after this list). Now people are working together to share data and create these large samples, and many SNPs have been detected. For instance (IIRC), about 30% of the variation in height and 10% of the variation in IQ are explained by specific, identified genetic variants.

* Single Nucleotide Polymorphisms, i.e. places where people's genomes differ by just one "letter".
  • Most behavioural traits are probably affected by many, many genetic variants acting together, each with a tiny effect on the trait.
Partly as a result:
  • SNP arrays (1 million independent variables per individual) are too easy. The cool new thing is sequencing data with billions of independent variables per individual.
The difference is that SNP arrays just store a few informative bits of someone's DNA - like an index to a book. Sequencing data gives you the whole book, and the cost of sequencing is coming down; right now it's about $2,000 per person. Yes, there are serious computational difficulties in doing statistics at this level. No, I won't be using sequencing data (not brave or smart enough).
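The "large sample size" point can be made precise with a back-of-envelope power calculation - my own sketch, nothing from the workshop. For a single-SNP regression test at the usual genome-wide significance threshold, the sample size you need scales inversely with the share of trait variance the SNP explains:

```python
from scipy.stats import norm

def required_n(r2, alpha=5e-8, power=0.80):
    """Approximate sample size needed to detect a SNP explaining a
    fraction r2 of trait variance, using the standard normal
    approximation for a single-SNP regression test."""
    z_alpha = norm.isf(alpha / 2)   # two-sided genome-wide threshold
    z_power = norm.isf(1 - power)   # quantile for the desired power
    ncp = (z_alpha + z_power) ** 2  # required non-centrality parameter
    return ncp * (1 - r2) / r2

for r2 in (0.01, 0.001, 0.0001):    # 1%, 0.1%, 0.01% of variance
    print(f"r^2 = {r2:.4f}: need roughly {required_n(r2):,.0f} people")
```

A SNP explaining 0.01% of the variance needs roughly 400,000 people - hence the data-sharing consortia.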

The workshop is here and if you want to learn statistical methods for genetics it is an absolutely awesome event.

Thursday, 29 January 2015

Grade inflation


The media has noticed the problem of grade inflation in UK degrees (BBC, Telegraph). There are now twice as many first class degrees being given out as there were ten years ago (for non-UK readers, a “first” is the best degree grade).

The economics of this seems clear. A university that gives better grades to its students benefits them in the job market, and also looks better in league tables that count the number of top grades students get. It also devalues that university’s degrees, but, since most UK employers cannot distinguish between universities except perhaps Oxbridge at the top, this devaluation is a “public bad” which is shared with all universities... and also with past and future students, neither of whom the short-term-focused administration cares about. Result: grade inflation.
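The incentive structure is the classic commons problem. In stylized numbers (all invented for illustration):

```python
# Stylized payoffs for one university deciding whether to inflate.
# All numbers are invented for illustration.
N = 100             # universities sharing the reputational "commons"
private_gain = 1.0  # job-market and league-table benefit of inflating
social_cost = 5.0   # total devaluation caused by one university
                    # inflating, spread across all N universities

# Each university bears only 1/N of the devaluation it causes, so
# inflating is individually rational:
print(private_gain - social_cost / N)   # 0.95 > 0: inflate

# But if all N universities inflate, each bears the full devaluation:
print(private_gain - social_cost)       # -4.0 < 0: everyone worse off
```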

This story is true as far as it goes, but it misses something important. Grades are given by the academic staff who do marking. None of us benefits directly from inflating our students’ grades. The benefit to the university is a public good for each individual academic: why should I care about my university’s score in the rankings?

The real cause of the change is not pure self-interest, but a combination of “bounded ethicality” and its exploitation. I feel loyal to my colleagues in my department, and to my university; whereas UK education as a whole is too abstract and remote to care about. And then, these feelings are played on by the administration. A memo comes round about “using the top end of the grading system more” so as “not to short-change our students”. Your colleagues knuckle under – after all, everyone else is doing it. If you rock the boat or grade too low, then you are told not to make trouble. In this way, a new norm is developed. We shift our ideas about what constitutes first class work, just a little at first....

This is typical. Selfishness is not just about the breakdown of norms; new norms are also created. As Thucydides put it:

Words had to change their ordinary meaning and to take that which was now given them. Reckless audacity came to be considered the courage of a loyal ally; prudent hesitation, specious cowardice; moderation was held to be a cloak for unmanliness; ability to see all sides of a question, inaptness to act on any.... (3.82)
The promising side of this is that, in economists’ terms, there are multiple equilibria. Individuals will always be tempted by selfishness; but an organization can only act selfishly if the individuals in the organization tolerate this. When the greater society has a strong claim on people’s affections, it is possible to resist organizational selfishness. Let’s hope that UK academics recognize this, and try harder to uphold our standards in future.

Wednesday, 21 January 2015

How Open Access works

1. Before open access

 

You write a paper. It is available for free from your website, SSRN, IDEAS etc. It can easily be found at Google Scholar and read for free.

The paper gets accepted at a journal.


The publishers make you take it down from your website. They publish it on the journal website. They charge large fees to universities for access. People outside universities cannot read it. (Luckily, early drafts are available at the university websites where you have presented the work.)

2. Open access arrives

 

Information wants to be free! Activists demand that journals provide articles on open access so that anyone can read them -- especially if the government has paid for the research.


The government agrees. From now on, government-funded research must be open access.

3. Now with open access


You write a paper. It is available for free from your website, et cetera. It gets accepted at a journal. The publishers offer you an option: publish it as normal (i.e. stop people reading it), or publish it open access so that everyone will be able to read it. The fee is a snip at £2000. Your research is government-funded, so you (your university) have to pay the fee. Or, if you want it to be included in the next Research Excellence Framework assessment, again you have to pay the open access fee.

Your research is available for free on the journal website. Anybody can read it. It is easy to find at Google Scholar.


 Everybody is happy!