Some time ago I started seeing a psychotherapist, a Jungian whom a friend had recommended. My excellent research assistant, a psychology PhD, was surprised and scornful: “You realise that’s not real scientific psychology?”
[Image: Jung with pipe]
She was right, of course. Jung is taken no more seriously than Freud by modern psychologists. There’s no evidence that Jungian psychology is practically effective either. Until the rise of cognitive behavioural therapy, no school of therapy did better than any other in scientific trials, or even better than just talking to a friend. With apologies to lay people, we can write this down in an equation:
ATE_Jung = E[x | J = 1] – E[x | J = 0] = 0    (1)

where x is mental health, E[x | J = 1] is the expected level of a person's mental health given a spell of Jungian therapy, and E[x | J = 0] is the expected level of their mental health after no treatment (or, say, after some more reasonable control, like talking to a friend). ATE is the Average Treatment Effect: the average effect on someone of having a Jungian therapist, or equivalently, the difference between their expected mental health after Jungian therapy and after the alternative.
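To make this concrete, here is a toy simulation in Python of what estimating equation (1) amounts to: randomise people into Jungian therapy or a control, then compare mean outcomes. Every number is invented for illustration; it is a sketch of the idea, not real therapy data.

```python
# Toy illustration of equation (1): in a randomised trial, the ATE estimate
# is just the difference in mean outcomes between treated and control groups.
# All numbers are invented for illustration.
import random

random.seed(1)

TRUE_ATE = 0.0  # the premise of equation (1): no average benefit from Jungian therapy

def simulate_patient(jungian: bool) -> float:
    """Mental-health score after a spell of treatment, on an invented scale."""
    baseline = random.gauss(50, 10)                  # underlying mental health
    return baseline + (TRUE_ATE if jungian else 0.0)

treated = [simulate_patient(True) for _ in range(10_000)]
control = [simulate_patient(False) for _ in range(10_000)]

ate_hat = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated ATE of Jungian therapy: {ate_hat:.2f}")  # close to zero
```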
But I stayed with my therapist all the same. My RA was right
to be shocked at such an unscientific attitude, no?
Some things about my guy seemed to differentiate him from
the average therapist, Jungian or not. He was extremely intelligent, thoughtful
and calm, and I’d developed a warm relationship with him. I felt that I’d
learned some things about myself and perhaps this was helping with my problems.
Here’s an equation for what matters to me:
TE_i = (x_i | J = 1) – (x_i | J = 0)    (2)

where TE_i is the treatment effect on the psychological health of individual i: the difference between their psychological health after seeing my therapist and after the alternative; and i
represents myself. Equation (1) is just the average of equation (2), taken over
some appropriate population of patients and therapists. And if it is estimated right,
then your best guess of (2) for a random individual and a random therapist is
(1); in other words, by scientific standards Jungian therapy is useless.
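One way to picture the tension, purely as a sketch and not anything my therapist or I actually computed, is to treat the scientific result in (1) as a prior on my personal effect in (2) and my own impressions as noisy evidence, then combine the two with a precision-weighted average. Every number below is invented.

```python
# A hedged sketch of combining (1) and (2): standard normal-normal updating,
# with the trial average as the prior and my own impressions as noisy data.
# All variances and observations are invented for illustration.

prior_mean, prior_var = 0.0, 4.0    # equation (1), held with some uncertainty
my_observations = [3.0, 2.0, 4.0]   # how much better I felt after sessions
obs_var = 9.0                       # my impressions are noisy

n = len(my_observations)
sample_mean = sum(my_observations) / n

# Posterior precision is the sum of precisions; the posterior mean is a
# precision-weighted average of the prior and the sample mean.
post_precision = 1 / prior_var + n / obs_var
post_mean = (prior_mean / prior_var + n * sample_mean / obs_var) / post_precision

print(f"Best guess at my personal effect (2): {post_mean:.2f}")
```

The answer lands somewhere between the trial average and my private impression; how far it moves depends on how much weight each source deserves, which is exactly what is in dispute.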
But of course, I am not a random individual to myself, and
my therapist is also not randomly chosen. I know or believe many things
about me and him, which may lead me to a different estimate of (2). Some of
these will be the data of my own experience, others will be intuitions, or
perhaps what I’ve heard from my friend. It’s not obvious how I
should deal with the scientific information embodied in equation (1). It is not something I should just ignore, and it certainly comes out of a more
careful and objective process than my own scraps of intuition and gossip. But
that does not mean those scraps are worthless. Very little of the knowledge we live by day-to-day is scientific,
but we get by well enough.
These ideas are relevant to the debate on expertise. Here's Simon Wren-Lewis:
In reality ignoring expertise means dismissing evidence, ignoring history and experience, and eventually denying straightforward facts.
With respect, this is one-sided, and even
arrogant and dangerous [1]. For instance, a person who worries that their job may be taken by a
migrant is not proved wrong by even theoretically perfect research showing that
immigration on average does not
reduce native employment [2]. Yes,
people can be misled by xenophobia or biased newspaper reporting. They may also
know specific things about their town, or their company, that researchers do
not. Those pieces of knowledge will not have been reached by careful scientific
experimentation. But decentralized, embodied information about particular conditions is, among other things, what makes free markets work [3]. If all knowledge were expert knowledge, socialism would have outrun
capitalism.
Another point is specific to social science [4]. Humans live in
history, which is a river that you cannot step in twice: conditions are always
changing. What we are really interested in is the effect of certain policies in
future. But the only data we have is from the present and the past. Statisticians
understand the risk of extrapolating from the data – assuming that something’s
behaviour will remain predictable in conditions beyond the boundaries of what
one has so far observed [5]. Well, if time is a relevant variable, all social science is extrapolation from the past to the future, and sometimes it fails. Relationships
that once held cease to do so, perhaps suddenly. To understand such a world,
the observer often has to make a choice: gather a respectably-sized sample,
perhaps reaching back far into the past [6]; or look
at what’s happening now and make a risky but relevant guess. Past averages; or
straws in the wind?
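Here is a toy illustration of that choice, with a simulated structural break standing in for a change in historical conditions: a relationship fitted on the past looks fine in-sample and fails once conditions shift. Everything below is made up for illustration.

```python
# Toy illustration of the extrapolation risk in note [5]: a relationship
# estimated on past data breaks down when the underlying conditions change.
import random

random.seed(0)

def outcome(x: float, t: int) -> float:
    slope = 2.0 if t < 80 else -1.0   # the true relationship changes at t = 80
    return slope * x + random.gauss(0, 1)

data = [(t, t / 10.0, outcome(t / 10.0, t)) for t in range(100)]
past = [(x, y) for t, x, y in data if t < 80]
future = [(x, y) for t, x, y in data if t >= 80]

# Least squares through the origin, fitted on the past only.
slope_hat = sum(x * y for x, y in past) / sum(x * x for x, y in past)

past_err = sum(abs(y - slope_hat * x) for x, y in past) / len(past)
future_err = sum(abs(y - slope_hat * x) for x, y in future) / len(future)
print(f"Fitted slope on the past: {slope_hat:.2f}")
print(f"Mean absolute error, past: {past_err:.2f}; future: {future_err:.2f}")
# The fit looks fine in-sample and fails badly once the relationship shifts.
```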
This often divides scientists from journalists. Social
scientists want to make well-founded generalizations and are trained to pay
little regard to journalists’ anecdotes. Journalists can legitimately retort that
they have a better instinct for what matters today. Neither side is always
right. I haven’t mentioned yet how little we truly know, perhaps
how little there is to know, about many vital matters of macro social science.
Put it this way: until they are a little better at predicting financial crises,
or the short-run effects of Brexit, economists will fulminate in vain against journalists who don't take their other predictions seriously.
The idea that everything to be known must be known by scientific methods has a name: scientism.
But scientism is not scientific.
Notes and references
[1] Incidentally, Professor Wren-Lewis gave the choice of Corbyn as Labour
leader as an example of ordinary people (Labour members) ignoring
expertise. I also used to think that was a bad idea for Labour. Neither of us looks very expert now, do we?
[2] There’s a debate between George Borjas and others [1, 2]
on migration, which hinges, among other things, on how much to "borrow
strength" between different social groups, so as to predict one group's outcome from another's.
[3] Here is Hayek's classic argument about markets, "The Use of Knowledge in Society". It's short and easy to read.
[4] This is why Professor Wren-Lewis is wrong to argue that ignoring experts on Brexit is "exactly equivalent to giving considerable publicity to a report from some climate change denial outfit". The equivalence is a bit looser than that.
[5] Gary King has a good explanation of the statistical danger of extrapolation.
[6] A good example is the very interesting dataset of financial crises collected by Reinhart and Rogoff for their book This Time Is Different. As their subtitle boasts, it reaches back through Eight Centuries of Financial Folly. It was certainly wrong to think the noughties' boom economy was different from any previous period, but it might reasonably be different from the conditions of the fourteenth century.
The “river you cannot step in twice” line comes from the Ancient Greek philosopher Heraclitus.