Statistics, social science, and the media

Everybody likes to hear a confident opinion. Take Truman’s famous wish for a one-armed economist, one who would be unable to tell him ‘On the other hand…’. Headlines are written with that same confidence: qualifications and conditions just aren’t as convincing. The problem is that it can be difficult to distil even a single scientific study into a headline without distorting the complexity of the work.

Unfortunately for scientists (whether social or natural), while people prefer statements delivered with certainty, we are in the business of uncertainty. Our methods are based on scepticism, our conclusions full of caveats. Unlike Truman’s fantasy economist, we do have another hand and are often obsessed with what might be on it.

This is why if someone conducts a study showing one kind of relationship, others will be eager to see if they can find the opposite or no relationship. But this healthy scepticism is sometimes transmitted to the broader public in unhealthy ways.

Wait, what do you mean by ‘healthy’ or ‘unhealthy’?

When someone conducts a study showing one kind of relationship, it’s healthy that scepticism drives scientists to try to find the opposite or no relationship. But the way in which scientific research is reported – from the selection of stories to the headlines used – can create an unhealthy scepticism towards science in general.

For example, when the media focus on an isolated study demonstrating some extreme and unusual finding, which is shortly thereafter contradicted by another, the credibility of science suffers (Ben Goldacre discusses this topic from the two-minute mark here, and Dan and Dan demonstrate part of his argument nicely in The Daily Mail Song). This can also skew perceptions of the balance of evidence: there may be many unreported or even unpublished supporting studies for one side of the debate, and few or none for the other.

Consider, for example, all that “lead causes crime” business earlier this year. In case you missed it, in January Mother Jones featured Kevin Drum’s article titled ‘America’s real criminal element: lead’. He argued that lead exposure, particularly due to leaded petrol, was responsible for a 20th-century American crime wave. A few days later, Discover ran Scott Firestone’s piece ‘Does lead exposure cause violent crime? The science is still out’.

You could be forgiven for having shrugged your shoulders and walked away. You may just have had time to glance at these headlines, and they make it clear that the pieces were opposed, right? Well, actually, there’s a lot here that both pieces agree on, and this issue shows that even when science is debated in public as it should be, it’s easy for people to walk away with the wrong impression about what’s going on.

Drum’s and Firestone’s pieces, with their contrasting headlines, both have figures and cite formal academic research, so doesn’t this just go to show that science and statistics can be twisted to say anything you like? I mean sure, Drum probably found a correlation between lead and crime, but everyone knows that correlation doesn’t imply causation, right?

So did Drum forget that correlation is not causation?

This commonly known cautionary statement can easily become a refrain for the off-hand rejection of research, even when it points to correlations that are of sufficient interest or importance to warrant future research (as XKCD puts it, “Correlation doesn’t imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing ‘look over there’”). The underlying scepticism is nevertheless a good thing, as it places the burden of demonstrating causation where it belongs – on those reporting the results.

[Comic: ‘Understanding correlation’, XKCD 552 – http://xkcd.com/552/]
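To make the comic’s point concrete, here is a minimal sketch – purely illustrative, with made-up numbers that come from neither article – of why a trend-driven correlation alone proves nothing. Two series that merely share a downward trend (a stand-in for lead exposure and a stand-in for crime, each generated independently here) will correlate strongly, and that correlation vanishes once the shared trend is stripped out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, made-up series: both trend downward over the same
# period for unrelated reasons (think: a pollutant phased out by
# regulation, and a crime rate falling for independent reasons).
years = np.arange(1970, 2010)
lead_proxy = 100 - 1.5 * (years - 1970) + rng.normal(0, 5, years.size)
crime_proxy = 80 - 1.2 * (years - 1970) + rng.normal(0, 5, years.size)

# The shared trend alone yields a strong correlation...
print(np.corrcoef(lead_proxy, crime_proxy)[0, 1])   # close to +1

# ...which disappears once the trend is removed (first differences),
# because the year-to-year wiggles were generated independently.
print(np.corrcoef(np.diff(lead_proxy), np.diff(crime_proxy))[0, 1])  # near 0
```

This is exactly why it falls to those reporting a correlation to rule out explanations like a shared trend before claiming causation – which, as we’ll see, is a burden Drum took seriously.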

Drum shouldered that burden well. He explicitly addressed the issue, explaining in detail some of the different studies conducted that found different results. He argued that when a range of diverse studies of reasonable quality, some using different measures, all find evidence of the same relationship, then it’s unlikely that it’s just correlation – something more is likely to be at work. And Firestone did not directly criticise Drum for conflating correlation and causation.

Oh. OK. But Firestone must be proving Drum wrong, then, right?

Nope. Firestone didn’t claim to find evidence refuting Drum’s conclusions. Yes, Firestone noted one aspect of one study that showed no evidence to support Drum’s hypothesis. But when others read this as a flawed argument from the absence of evidence, Firestone clarified the intention: to emphasise that Drum’s conclusion was premature without more longitudinal studies (those tracking individuals over time, as opposed to cross-sectional studies comparing different groups at the same point in time), not to claim that the absence of evidence proved Drum wrong.

So I guess Drum thinks it’s a proven fact and no more research is required?

Wrong again. In a response, Drum agreed that more research is warranted, though he pointed to the wide range of different studies showing similar results and concluded that the varied research into this topic provides the best explanation yet for the crime wave in question.

Wait – what? Do they disagree or not?

Yes, they do. Basically it comes down to how confident Drum can afford to be on the basis of the available evidence. Drum reviewed evidence from a range of studies, made the case that programs to remove lead from the environment should be undertaken immediately, and portrayed that recommendation as a win-win and a no-brainer.

Firestone felt that Drum’s language (referring to the lead-crime link as ‘blindingly obvious’, for example) was overconfident, that it risked oversimplifying the complexity of the link between lead and crime, and that there was no reason for ‘calling the violent crime link closed with such strong language’. Drum acknowledged that his piece had ‘involved a fair bit of handwaving’, with the intention of provoking further, more detailed research and analysis.

So you’re saying they were just splitting hairs?

Not at all. The differences between their positions aren’t trivial. But despite them, the two writers occupy nearby points on the same spectrum of scientific reasoning, rather than polar opposites. These were disagreements at the margin about how best to strengthen an already compelling case. When challenged, each clarified or moderated his position.

Ultimately, this exchange has been civil, constructive, and a reasonably good example of how the internet can provide a better medium for this sort of back-and-forth discussion than print media. Why? Because it’s easier to go deeper than the headlines, to check exactly what was said, where, and by whom, and to find links to the original research (another point emphasised by Goldacre).

So does this mean we’re entering a golden age of scientific journalism?

Alas, no. No matter how hard people work to communicate science clearly, there will always be the pull of a confident, attention-grabbing headline.

More accurate titles in this case would have been ‘We now have good reason to prioritise and conduct further research into the substantial evidence that lead is America’s real criminal element’, and ‘While there is compelling evidence in support of the lead-causes-crime hypothesis, until we obtain more conclusive evidence of the relationship Drum describes in longitudinal studies, the science is technically still out’. But these are nowhere near as catchy, are they?