
Tuesday, 26 May 2015

On excluding people from science


Following up his article on mathiness, Paul Romer argues in a blog post that:
  1. Science has its own set of norms.
  2. These norms are in danger from people with different norms, such as those of politics.
  3. We need to exclude people with the norms of politics from scientific debate.
Points 1 and 2 are right, although "norms of politics" is not a good phrase for argumentative bad faith: political debate at its best is much more enlightening than that, and 'anti-politics' populism ought not to be indulged. Still, science does depend on norms, which must always be defended – particularly when a lot of money is being thrown at the discipline, creating incentives for shoddy work.

From a psychologically informed perspective, point 3 seems very risky. Humans are prone to many self-serving biases, which lead us to discount evidence against our views. If we truly cannot assume that reasonable scientists may differ, then we are already in the bad equilibrium. Until we are sure of that, it is better to try hard to consider the other side's point of view, and, as Oliver Cromwell once begged of some argumentative theological scholars, "to think it possible you may be mistaken."

Professor Romer's targets in the mathiness article are people with whom he has substantive disagreements. (He fairly admits that this may be bias on his part.) His blog post also caricatures Milton Friedman's famous article on scientific methodology, which has certainly been criticized before, but which is still worth reading and thinking hard about. I am not persuaded these are good targets for exclusion and shunning.

Scientific norms, like any norms, do not usually fall apart because bad guys sneak in and undermine the honest people. The process is more insidious. We are all subject to the pressure to publish and the desire for fame and status; and most standards involve judgment and grey areas. (Example: if we run many regressions and report only some of them, we risk biasing our p-values towards significance. But this does not mean we should always report every single regression we ran on some data.) These grey areas create "moral wiggle room" for us to weaken our own standards. We should all take care to avoid that, but conversely, very few of us can afford to be smug about our standards, because the pressures we face are so universal.
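To put a rough number on that particular grey area, here is a minimal simulation sketch (my own illustration with arbitrary parameters, not anything from Romer or the work he discusses). If the outcome is pure noise and we run twenty candidate regressions but report only the most significant one, "significant" findings turn up far more often than the nominal five per cent of the time.

```python
# Illustrative sketch (assumed parameters): selective reporting of regressions
# inflates apparent significance even when there is nothing to find.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 2000        # simulated studies
n_regressions = 20      # candidate regressions per study
n_obs = 50              # observations per regression

false_positives = 0
for _ in range(n_studies):
    y = rng.normal(size=n_obs)              # outcome is pure noise
    p_values = []
    for _ in range(n_regressions):
        x = rng.normal(size=n_obs)          # regressor unrelated to y
        slope, intercept, r, p, se = stats.linregress(x, y)
        p_values.append(p)
    # "Report only the best regression": keep the smallest p-value.
    if min(p_values) < 0.05:
        false_positives += 1

print("Nominal false-positive rate: 0.05")
print(f"Rate when reporting the best of {n_regressions}: "
      f"{false_positives / n_studies:.2f}")  # roughly 1 - 0.95**20, about 0.64
```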

Frauds and cheats should certainly be excluded from science. But "mathiness" seems to be more of a grey area. All theoretical papers use simplification; all of them, I suspect, have some degree of imprecision in the map from real-world concepts to parts of the formal theory. (And imprecise theories are not always bad, but that is another story.) If we shun people on this basis, we risk preferentially shunning our intellectual opponents. This itself can turn science into a dialogue of the deaf, of communities who only cite each other's papers, not because they are "cults" but because it is easier for all of us to stay in our intellectual comfort zones and receive the praise of those with whom we agree (in my experience, every scientific community believes six impossible things before breakfast).

The best place to start is by ruthlessly excluding mathiness and other dubious practices from our own work. Computer programmers say: "Be conservative in what you do, be liberal in what you accept from others."


Sunday, 26 April 2015

An experiment on overconfidence


A nice experiment was presented at NIBS last week – Zahra Murad, "Confidence Snowballing in Tournaments". (No paper available yet.)

The talk started from a psychological idea: people are sometimes overconfident about their own performance, and get more overconfident after a few successes. This might explain, say, the overweening confidence of CEOs and "Masters of the Universe" bankers.

In the experiment, subjects competed, in pairs, on a task which could be either easy or difficult. Then winners were matched with other winners and played again – like a football tournament. (Losers were matched with other losers.) The winners of the second round played each other again, and so on.

Before each round, subjects had to bet on their own performance. From this, we can learn what chance they gave themselves of winning the round.

The beauty of this design is that having won a round tells you nothing about your chance of winning the next one, because you will be playing someone else who has also just won! So, a reasonable person would not get more confident after winning a round.*

In fact, on easy tasks, winners did get more confident. On difficult tasks, losers got less confident, also wrongly and for much the same reason.
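To see the rematching logic in miniature, here is a toy simulation (my own sketch with made-up numbers, not the experiment's actual task or code). Winning does raise your expected skill, but your next opponent is drawn from the same pool of winners, so a reasonable forecast of your chance of winning the next round stays at one half.

```python
# Toy model (assumed skill and luck distributions): winners rematched with
# winners should not become more confident.
import numpy as np

rng = np.random.default_rng(1)
n_players = 200_000
skill = rng.normal(size=n_players)          # underlying ability

def perf(s):
    """Observed performance: underlying skill plus independent luck."""
    return s + rng.normal(size=s.size)

# Round 1: random pairs; the better performer wins.
rng.shuffle(skill)
a, b = skill[0::2], skill[1::2]
winners = np.where(perf(a) > perf(b), a, b)

# Round 2: winners are paired with other winners.
rng.shuffle(winners)
a2, b2 = winners[0::2], winners[1::2]
a2_wins = perf(a2) > perf(b2)

print(f"Mean skill, all entrants:     {skill.mean():+.3f}")
print(f"Mean skill, round-1 winners:  {winners.mean():+.3f}")    # clearly higher
print(f"A round-1 winner's chance of winning round 2: {a2_wins.mean():.2f}")  # ~0.50
```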

What's good about this experiment?
  • It is real "behavioural economics"
It puts an insight from psychology, the hard-easy effect, into a social setting. So, it is not just psychology. But it is theory-driven: it is not trying to draw inferences about a real social situation directly from behaviour in a feeble laboratory pastiche of that setting.
  • It uses the lab to create an elegant simplification
In the real world, it's hard to judge how much somebody's confidence ought to increase after, say, making millions off a deal. In the experiment's stripped-down paradigm, there's a natural baseline: every time you win, you are rematched against other winners, so you should not get more confident.

In experiments, just as in formal models, "it's realistic" is a terrible reason to add a feature. These guys put in only what was needed.
  • It makes a nice parable...
Among the many things lab experiments can do – test theory, estimate psychological characteristics, explore institutions – is to serve as "parables" or "existence proofs".

Greek parables, like that of Icarus flying too close to the sun, have survived the centuries because they tell us about recurring patterns, helping us to recognize them in life and history. The Icarus myth's pattern is that hubris, insane arrogance, leads to nemesis, divine revenge. Hubris and nemesis are still all around us (hullo neo-cons! hullo Eurozone!). You would not expect them always to happen – imagine a foolish political scientist estimating the per cent prevalence of hubris in international relations – but it is useful to know that they can.

Experiments can be modern parables.  Zimbardo's prison guards and Milgram's torturers have entered the folklore. They don't always apply, but they are things that can happen. (Hence "existence proof": an experiment shows, irrespective of external validity, that something has happened at least once.)
  • ... and fleshes it out
Parables are fine and underrated, but social science must investigate phenomena, not just exhibit them. The experiment shows social interactions can cause overconfidence, and also explores when it happens (easy tasks) and when other things such as underconfidence can happen. This opens up some avenues for real world study. (If only easy tasks make people overconfident, then what explains the presumed overconfidence of very successful people whose work seems quite difficult, like investment bankers?)

I see a lot of experiments and think either "this was obvious", or "I don't know what behaviour here tells us about the real world". This work passes these hurdles.


* Nerd note: there could be some set of priors for which a Bayesian updater would get more confident. So, increasing confidence is not necessarily irrational in the technical sense, just "unreasonable" in a common-sense way.

Monday, 20 April 2015

Behavioural should not be behaviourist


What differentiates behavioural economics from psychology? A common answer is "we are interested in behaviour".

For example, psychologists studying group identity might use a questionnaire measure, asking subjects "How proud are you to be in your group?" Economists, instead, would test whether subjects gave more money to their in-group. "Real choices have costs," goes the slogan.

Costly choices are a great experimental tool. But I had an epiphany recently, when a psychologist remarked that economists are using a theory psychologists gave up fifty years ago – behaviourism, the idea that you could ignore the mind and just study how stimuli affected behaviour.

Only studying behaviour makes sense as long as the mind has no "state". If stimulus X always causes behaviour Y, then we need only study the link from X to Y. But of course the mind has state: for example, our choices depend on our moods.

We need to study these states directly, so as to improve our theories by clarifying the links in their causal chains. Why are subjects more selfish if the game they are playing is labelled the "Wall Street game" rather than the "community game"? Maybe the market-oriented phrase puts subjects into a materially self-interested mindset. OK, so when does that happen in the real world? For an answer, we will need to measure this mindset directly and learn how it is induced.
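As a programming analogy (mine, not the post's; the labels and rules below are invented for illustration), the contrast is between a stateless stimulus-response mapping and an agent with hidden internal state. Once the mind carries state, the same stimulus no longer fixes the behaviour, so stimulus-behaviour data alone under-determine the model.

```python
# Minimal sketch of the analogy (hypothetical labels and rules): the same final
# stimulus produces different behaviour depending on an unobserved "mindset"
# that earlier stimuli have shifted.
from dataclasses import dataclass

@dataclass
class StatefulAgent:
    mindset: float = 0.0                    # hidden state, never observed directly

    def respond(self, stimulus: str) -> str:
        # Some stimuli shift the hidden state...
        if stimulus == "Wall Street game":
            self.mindset -= 1.0
        elif stimulus == "community game":
            self.mindset += 1.0
        # ...so the same later stimulus maps to different behaviour.
        if stimulus == "share?":
            return "share" if self.mindset > 0 else "keep"
        return "(no choice)"

for history in (["community game", "share?"], ["Wall Street game", "share?"]):
    agent = StatefulAgent()
    print(history, "->", [agent.respond(s) for s in history])
# The final stimulus ("share?") is identical in both runs, yet the choice
# differs, because of the state induced by the earlier label.
```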

This is doable. Yes, people may lie, or misinterpret their own behaviour; but psychologists have been devising valid, reliable questionnaire measures for decades. (fMRI data seems more easily accepted by economists than questionnaires, perhaps because it seems more like "hard" science – or because it is impressively expensive?) We can use these measures, or create similar ones of our own.

Without good theory, experimental economics risks becoming a pile of unorganized, uninterpretable results. For good theory, we need to open and study the black box of the mind.


Wednesday, 25 March 2015

Mark Duggan: "the best lack all conviction"


Here is the Independent Police Complaints Commission report on the death of Mark Duggan, which sparked the 2011 riots in the UK.

I have just listened to his brother being interviewed on the Today Programme, attacking the report and the IPCC.

My first feeling was sympathy for a grieving brother. My next was: "Mark Duggan was a gangster - perhaps his family are no better...." Do I know Mark Duggan was a gangster? Several newspapers have said so. And then there is this photograph:


This photo is devastating - the kind of picture a defence lawyer would hate. The staring eyes, the ring and chunky bracelet, and of course the gun gesture, are all worth a thousand words. Plus, the fact that the subject is black. (Did that affect you? No? You sure?)

All of this activates a whole bunch of stereotypes in my brain. Those stereotypes then get transferred by association to Mr Duggan's family.

None of this seems like an ideal way to make a difficult judgment.

But what alternative do I have? After all, my thoughts on the other side are equally stereotypical. "You can't trust the investigation - the Metropolitan police are notoriously corrupt." How does that idea in my head link to the true level of corruption in the Met, which is - almost by definition - more than nothing but less than total?

Knowing a little of the psychology of belief makes me think that Yeats quote, "the best lack all conviction," is best read as a definition. The surer you are of yourself, the likelier you are to be wrong.

(I haven't read the report either. It's 500 pages long!)