Tuesday 24 May 2005

How social science is different from natural science

Some people believe that social science is fundamentally a different kind of enterprise from natural science: in fact, they may even avoid the science word and talk about “social theory” or similar. Others see no difference at all. There is a conventional story which the former group tell to justify their position, and it goes like this. Studying humans is fundamentally different from studying natural objects because human actions have meaning. Therefore, the methods of natural science are inappropriate.

As it stands, this clearly needs a lot of filling out. In itself it has no more prima facie validity than the following claim: studying fish is fundamentally different from studying other natural objects because they live in water. Therefore, the methods, etc. What's so special about meaning? Different kinds of answers are given.

The first answer would be: “to understand human actions, we need to understand their meaning. Understanding is different from explanation and is the appropriate goal for the human sciences: we need to see the internal logic of people's actions. Natural science can't help us here.” This argument is weak. Suppose that, by an application of the laws of supply and demand, plus a certain amount of practical measurement, I can predict what people will pay for corn this season. This does not seem to involve understanding, but it is still very useful knowledge. So, I deny the “appropriate goal” claim. There are lots of cases where we could sensibly wish to know what humans will do, whether or not we understand the why of it.
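The corn example can be made concrete. Here is a toy sketch (my own illustration, with made-up linear curves and invented numbers) of the kind of prediction that involves no "understanding" at all: estimate supply and demand curves, then solve for the market-clearing price.

```python
# Toy illustration (not from the text above): predicting a market price from
# estimated linear supply and demand curves, with no claim to grasping the
# internal logic of any buyer or seller.

def equilibrium_price(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Solve demand = supply for the price p.

    Demand:  q = demand_intercept - demand_slope * p
    Supply:  q = supply_intercept + supply_slope * p
    """
    return (demand_intercept - supply_intercept) / (demand_slope + supply_slope)

# Hypothetical measured parameters for a corn market.
price = equilibrium_price(100.0, 2.0, 10.0, 1.0)
print(price)  # 30.0
```

The calculation yields a testable prediction of what people will pay, whether or not we understand the why of it.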

A better answer: “predicting what humans will do is a matter of finding out what they think they ought to do. But the best way to do that is to work out what they really ought to do. This is not an empirical matter.” There are different ways of parsing “ought” here: in terms of simple goal-oriented reason, or in more complex moral (but perhaps culturally specific) terms. The first approach – if you want X, Y is the best way to get it – is taken by economic theory. Economists of the Austrian school, like Hayek and von Mises, thought that this made economics (the fundamental social science, they thought) into something completely different from natural science. Their thought on this matter is far from pellucid, so this may be a misinterpretation. The second approach involves working out how people think about a subject in a more nuanced and sensitive way, perhaps believing that the structure of ideas which motivate action is different in different cultures, and attempting to trace this structure. This is the approach taken by the notorious cultural studies. There are as many theorists of cultural studies as there are practitioners, but the practical approach can be summed up like this: to understand a society, apply the methods you learnt in secondary school for understanding a novel. Figure out people's ideas – their logic, perhaps the flaws or concealments in that logic, their rhetoric and linguistic tricks.

Both these approaches – yes, cultural studies too – should be taken seriously. But there are better descriptions of what practitioners in the field are doing. For economics (and rational choice theory generally), Daniel Dennett has put it very convincingly as follows. We could predict what humans will do using the methods of natural science, but it would require a very detailed knowledge of the physical state of our test subjects, and might be completely wrong given only small errors in that knowledge. Working out subjects' goals, and predicting their actions as optimal given those goals, is an immensely powerful shortcut. Note the implication. If rational choice theory is a shortcut, then it may in certain fields be wrong. In fact it would be astounding if people always and everywhere behaved with complete optimality. If so, then we need to test our theories, and that means we are well within the fold of normal experimental science. And in fact, most economists try to back up their theories with empirical evidence, and have devoted great effort to working out how to do this – hence the importance of econometrics.

With regard to the second school of thought, which prefers sensitive contextual analysis to mathematical simplification, we can tell the same story and make the same demand. If you think that understanding is a shortcut to prediction, that's fine, but in any given situation you will need to prove it by doing empirical tests. This need is even stronger than for economists. Most economics is derivable from a hypothesis that humans behave optimally (a less contentious description than "rationally"), plus some descriptions of human goals. And a lot of economics assumes the same basic set of goals for humans: maximize your wealth, or your well-being as a risk-averse function of wealth. The hypothesis plus the basic goals have been tested in a wide variety of situations and have performed very well – pick up any journal of empirical economics for examples. So in many areas we could reasonably have a presumption in their favour. The cultural studies approach, by its nature, is less general and more culture- and context-specific. Any specific claims about how humans will act in a situation are going to come with less prior support, because they are less closely related to other previous work. So it is even more important to test them.

One more explanation of how social science is different is worth mentioning. It was put forward by Habermas. We could do social science like natural science. But science is about learning to control and manipulate objects. Learning to control and manipulate humans is ethically dubious. The social science that we ought to do will have “an interest in liberation”. It will therefore not simply aim at explanation and prediction of human behaviour: instead it will help humans understand their situation better. This argument is ethical rather than epistemic. We could aim at explanation, but we shouldn't.

Habermas is right to be worried about the dangerous potential of science, but he is drawing the border in the wrong place. There is physical science, like research into chemical weapons, which is dangerous and ought not to be done (although there may be tragic dilemmas where it is better for “us” to do this science rather than wait for “them” to do it). This science may in predictable ways make human lives worse. Specifically there is physical science that may help the cause of human oppression – the development, say, of torture techniques or truth drugs. On the other hand, explanatory social science, despite making human behaviour more predictable, can aid the cause of human liberation – normally because it may help us to exert collective control over individual actions which harm human freedom. If we can predict military coups, perhaps we can stop them; if we can explain crime, perhaps we can prevent it.

In other words, there is not much in Habermas' argument beyond the old idea that science has huge power for good or ill. I wish scientists were more aware of that and more thoughtful about the implications of their research, but it is physical scientists who are most in need of this awareness. Perhaps childlike curiosity is just the way good science has to be done, and we will always need other people to play the role of worried parent. (We certainly have no shortage of candidates for this role, most of whom, it must be said, are visibly not up to the job.) In any case, there is no reason to divide social science from physical science on the basis of this argument.

So far I have dismissed various types of claim that social science is fundamentally different from natural science. In fact, I don't believe that they are fundamentally different. If you still do, I would invite you to consider carefully the following areas of scientific endeavour, and decide on which side of the dividing line to put them, or come up with some explanation of how they can involve work on both sides of this supposedly impassable chasm:

  • epidemiology

  • the study of flows of motor traffic

  • neuropsychology

Having said all that, I do think that there is something rather different about social science – not a dividing wall, but some unusual characteristics. My idea does better at explaining what is unique and different about studying human beings, while paying respect to the underlying unity of science. It also seems to fit better with the kind of worries people have about social science.

The Bayesian image of knowledge is this: a subject has a prior belief expressed as a probability of statement X being true. He or she then receives some new information – perhaps as a result of his or her own actions, such as performing an experiment or analysing some statistics – and updates probabilities accordingly. The Popperian image of science is based on the idea that scientific laws are general statements, that is, claims about an infinite number of scenarios. We cannot therefore prove them: we can only disprove them, and science is just those disprovable statements that have not yet been disproved. A scientist must actively try to disprove his or her own theories by means of experiment. How the Bayesian and Popperian ideas fit together is not completely clear to me, but the important point is the image they have of the subject. This is a unitary image. News from the outside world is instantly integrated into a subject's belief system as priors are updated or laws rejected.
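The Bayesian image can be stated very compactly in code. Here is a minimal sketch (my own illustration, with invented numbers): a prior probability for statement X, one piece of evidence, and an updated posterior by Bayes' rule.

```python
# Minimal sketch of the Bayesian image of knowledge: a single belief,
# expressed as a probability, updated on one piece of new information.

def bayes_update(prior, p_evidence_given_x, p_evidence_given_not_x):
    """Return P(X | evidence) by Bayes' rule."""
    numerator = p_evidence_given_x * prior
    denominator = numerator + p_evidence_given_not_x * (1 - prior)
    return numerator / denominator

# Start unsure whether X is true, then observe evidence that is four
# times likelier if X holds than if it does not.
posterior = bayes_update(prior=0.5, p_evidence_given_x=0.8, p_evidence_given_not_x=0.2)
print(posterior)  # 0.8
```

The unitary image is visible in the code: news from the outside world arrives as one number and is instantly integrated into the subject's belief.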

As an explanation of how we ought to find things out, I have no problem with either of these. As an explanation of how people do in fact come by what they call “knowledge”, the Bayesian idea is rather like the idea of rational choice. It is an immensely powerful and useful shorthand for a much more complicated biological process, the mysterious process whereby the human brain acquires beliefs as a result of things happening in the external world and then inside the brain itself. This shorthand is essential. But it has a drawback. It gives a very misleading impression of how human brains really handle knowledge, and this has consequences for how we can realistically expect to do social science.

I invite you to evaluate for yourselves the following claim: you know a great deal about the world. In particular, you know far more about the social world than could possibly be deduced from any theory you explicitly, consciously hold. Here are some examples of what you know, if you are a reasonably well-informed person:

  • Britons are great animal lovers

  • The mobile phone has made social interaction much more fluid

  • When taking a late night minicab, it's wise to agree a price beforehand

  • Nixon was a crook

Et cetera. We know much more than is in our minds at any one time. Human beings are not like CPUs receiving inputs of facts and updating a central belief system: they are more like sponges, soaking up memories, impressions and information, stored away willy-nilly, sometimes easily accessible, sometimes nearly forgotten.

So what? Well, for physical science, so not very much. Physical science more or less starts where our common knowledge of how physical objects work leaves off: Galileo pointed out in his dialogues that, counter-intuitively, objects dropped from a ship's mast in motion did not fall backwards relative to the ship. In other words, very little of our existing lay knowledge is much use when we are doing physical science. The important knowledge is in the existing corpus of physical science, which has already been carefully digested and squared with scientific theories. (Although even here, it is often true that a new theory appeals because it provides an elegant explanation for an already-known fact.)

The situation is very different for much social science. Our explicit knowledge of physical objects is rather limited; our implicit knowledge, that displayed by a footballer or a hunter, is perhaps much greater, but it cannot be put into statements and made available to test our theories. But humans are experts on society. We know how people work in general, and what people have done in particular. And we are able to put that knowledge into words, in order to pass it on to others. We are shrewd observers, and fabulous gossips. (How could we not be, when evolution demanded it?) So, when somebody puts forward a theory, or when we put one forward ourselves, we already have a huge bank of existing knowledge to draw upon in order to test it. What is more, in my experience as a social scientist, the most illuminating theories have been those which sum up my existing knowledge in a new and elegant pattern. That is how I felt when I discovered the idea of cognitive cascades, for example, or Olson's logic of collective action.

This does not make the fundamental epistemic structure of social science any different from that of natural science. It is still a matter of testing our beliefs against the facts. But it does mean that many of the facts are there for us already. We do not in the first instance need to go out into the world and do experiments because we can falsify many theories off the top of our heads.

A straightforward consequence of this is that much conventional social science does indeed have the fault that lay observers often ascribe to it: trying to prove the bleeding obvious. Examples are too common to need multiplication: the research that confirms that men do less washing up than women; that American community leaders are better educated than average. These are things we already knew. An equally straightforward consequence is that a lot of the most interesting social science does not busy itself too much with evidence-gathering, but just lays its claims out and leaves them to be backed up by what economists call casual empiricism.

At this point objectors will be queuing up. Let me dispose of one objection and accommodate some others.

A strong objection is as follows. “You say that we have this vast bank of existing knowledge. But unless it has been through the scientific process, it isn't knowledge at all, but just belief. For example, you claim that Nixon was a crook/that Britons are great animal lovers. But this is a highly contentious claim. I, a social/political scientist, have rigorously evaluated the evidence and discovered that Britons love animals no more than other nations/Nixon was the paragon of high-minded patriotism.” As an aside, it must be said that social scientists like nothing better than to claim to possess knowledge unavailable to the well-informed lay observer; this esoteric knowledge, all too often, turns out to be a contentious or overblown reinterpretation of the existing public facts, but it serves its purpose of making us feel important and knowledgeable. Never mind. Isn't the objection valid? How can we really be sure that our beliefs about the world aren't fundamentally mistaken, and isn't that the point of doing social science?

My answer comes originally, I believe, from Donald Davidson (though I may well be bowdlerizing him terribly). It is perfectly true that some of our beliefs can be mistaken, and we would be arrogant to expect anything else. The mistake comes in generalizing from that and thinking that all, or even the majority, of our beliefs can be mistaken. The great majority of our beliefs must be true: otherwise, they could not be identified as beliefs at all. Somebody who is systematically mistaken about everything would be unintelligible to us. We only come to understand other people's beliefs on the basis of some shared truths which we can assume they are referring to. This is true both for the anthropologist struggling to understand a foreign language, and for the ordinary conversation in which a shared grammar and vocabulary are not enough to guarantee correct communication of meaning. (Suppose grandma starts to talk about how Churchill fought the strikers, and won the war in the Falklands. At first you mildly remonstrate that it was just a battle in the Falklands. Then you realize she's confusing Churchill with Thatcher.)

Of course this doesn't mean that humans cannot be systematically wrong about quite large areas of belief. Science in general would be pointless in that case. For example, people routinely forget about the knock-on effects of taxes: most people instinctively assume that an increase in the top rate of tax will bring in more money, but it isn't necessarily so. And in general, much of economics is built upon working through the collective consequences of individual behaviour more rigorously than we are used to doing. Similarly, there are quite well-known and predictable theoretical mistakes and practical misjudgments that people make – they take account of sunk costs, or of the framing of a particular problem. (Kahneman won his Nobel prize for discovering some of these, with Tversky.) I am merely saying that in many areas of social science, humans (and particularly the well-educated social scientist) have the facts already.
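The tax example can be made concrete with a toy model (my own made-up numbers, purely illustrative): if the taxable base shrinks as the rate rises, a higher top rate need not bring in more revenue.

```python
# Toy illustration of the knock-on effect of taxes: revenue is rate times
# base, but the base itself falls as the rate rises (people earn, declare,
# or stay less), so raising the rate can lower the take.

def revenue(rate, base_at_zero=100.0, shrinkage=100.0):
    """Revenue = rate * base, where the base falls linearly with the rate."""
    base = max(base_at_zero - shrinkage * rate, 0.0)
    return rate * base

print(revenue(0.50))  # 25.0
print(revenue(0.75))  # 18.75 -- a higher rate, but less revenue
```

The instinctive calculation holds the base fixed; working through the collective consequences of individual behaviour means letting it respond.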

This first objection, then, is overblown. But some others would be quite valid. So, I am not suggesting that we should never explicitly test our theories. In fact, everything should be tested – I am just suggesting that we have an existing mechanism already in place for doing it. But even so, there are plenty of claims which do need explicit testing, where our intuitions are not enough or do not exist at all. And no claim can possibly suffer from being tested. My argument is modest. We are working with an image of science that does not quite fit what we actually, inevitably do. Social scientists don't think up theories out of the blue, by random inspiration, then test them. Nine out of ten random thoughts can be rejected at once, and the remaining good thought often makes such intuitive sense – fits so well with our existing, highly accurate and nuanced knowledge about the world – that the testing is a formality. I wish that social scientists would give more priority to the generation of what is insightful, and less to the rigorous testing of what is obvious or nearly obvious.


There is a further point which comes from considering what social science is for. Rather often, we are not in fact making claims about an infinite universe. Instead, we are examining how a particular process actually turned out at a particular time in history. For example, we can make scientific claims about “elections” as a general phenomenon, claims that need to be validated every time people vote on a course of action. But just as usefully, we can try to understand British elections or American elections – particular finite sets of events under specific historical conditions. When we do the latter, we are not talking for the benefit of eternity, but trying to help people understand what is going on right now. Here again, rigorous testing may be less important than finding the non-obvious. Intelligent public opinion thinks that Iraq damaged Blair and Labour in the last UK general election. Public opinion might be completely mistaken. But, if you are looking to expand the public mind, there may be better avenues to explore than testing this hypothesis to destruction. I hope I am not slaughtering anybody's baby out there.

The mainstream of social science has rejected the arguments of those who want to cut off the study of society from the methodology and ideals of science as a whole. I rejoice at that. It's a wonderful feeling to be part of such a grand collective endeavour – much more fun than being a big fish in a small pond. And the empirical sciences of man have many interesting results, and are getting more interesting by the day with our deeper understanding of human nature. But this fundamental unity shouldn't force us to think of what we do in ways borrowed from particle physics. Like all the sciences we have our own particular ways of working, and our own place in the wider universe of useful speech acts. That's really worth celebrating.
