Science from both sides – the INTERVAL study

As a regular blood donor, I was intrigued when I was invited to take part in a study on the effects of blood donation frequency. Apparently there is not much solid data on what intervals between donations are safe for the donor, and the recommended guidelines differ significantly around the world.

The INTERVAL trial assessed the effects of different blood donation intervals. Over 45,000 participants were randomised to 8-, 10-, or 12-week intervals for men, and 12-, 14-, or 16-week intervals for women, for two years (I was an “8-weeker”). The results have now been published in The Lancet, and make for interesting reading.

The first finding (and I have to say I didn’t realise this was even one of the study aims) was that increasing the donation frequency significantly increased the total amount of blood collected. Adherence to the assigned intervals was good, and participants donated much more than they had in the previous two years.

The impact on the health of the participants was what interested me most though. There wasn’t any change in self-reported general wellbeing measures. But “more frequent donation resulted in more donation-related symptoms (eg, tiredness, breathlessness, feeling faint, dizziness, and restless legs, especially among men…)”. And additional donations also led to “lower mean haemoglobin and ferritin concentrations, and more deferrals for low haemoglobin”. So donating very frequently isn’t exactly good for you, which does make sense.

So I was happy to take part, and pleased to read the results. From a quick read of the Lancet article, this seems like a well-designed and well-analysed study, and importantly one large enough to provide robust results on an important topic. If only more of medical science were like this (or indeed any science about humans…).

Irony is still alive

It shouldn’t come as a surprise that psychological studies on “priming” may have overstated their effects. It sounds plausible, for example, that thinking about words associated with old age might make someone walk more slowly afterwards, but, as has been shown for many effects like this, they are nearly impossible to replicate.

Now Ulrich Schimmack, Moritz Heene, and Kamini Kesavan have dug a bit deeper into this, in a post at Replicability-Index titled “Reconstruction of a Train Wreck: How Priming Research Went off the Rails”. They analysed all studies cited in Chapter 4 of Daniel Kahneman’s book “Thinking Fast and Slow”. I’m also a big fan of the book, so this was interesting to read.

I’d recommend everyone with even a passing interest in these things to go and read the whole fascinating post. I’ll just note the authors’ conclusion: “…priming research is a train wreck and readers […] should not consider the presented studies as scientific evidence that subtle cues in their environment can have strong effects on their behavior outside their awareness.”

The irony is pointed out by Kahneman himself in his response: “there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples.”
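The “law of small numbers” problem can be made concrete with a small simulation (a sketch in Python; the effect size, sample size, and study count are illustrative numbers of my own, not from any of the papers discussed). When studies are underpowered and only the “significant” ones get reported, the effects that survive the filter are systematically inflated:

```python
import math
import random

random.seed(1)
TRUE_EFFECT = 0.2    # a genuinely small standardised effect
N = 20               # per-study sample size: badly underpowered
STUDIES = 5000       # number of simulated studies

def two_sided_p(mean, n):
    """p-value of a one-sample z-test, assuming known sd = 1."""
    z = abs(mean) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Each study estimates the effect from N noisy observations
estimates = []
for _ in range(STUDIES):
    xs = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    estimates.append(sum(xs) / N)

# Keep only the studies that cleared the p < 0.05 bar
significant = [m for m in estimates if two_sided_p(m, N) < 0.05]
power = len(significant) / STUDIES
avg_sig = sum(significant) / len(significant)

print(f"share of studies reaching p < 0.05 (power): {power:.2f}")
print(f"mean estimated effect in 'significant' studies: {avg_sig:.2f}")
print(f"true effect: {TRUE_EFFECT}")
```

With these numbers, power comes out around 15%, and the average effect among the “significant” studies is roughly two and a half times the true one. Trusting a handful of small significant studies means trusting exactly these inflated survivors.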

So nobody, absolutely nobody, can avoid biases in their thinking.

Progress with p values – perhaps

The American Statistical Association (ASA) has published its “statement” about p-values. I have long held fairly strong views about p-values, also known as “science’s dirtiest secret”, so this is exciting stuff for me. The process of drafting the ASA statement involved 20 experts, “many months” of emails, one two-day meeting, three months of draft statements, and was “lengthier and more controversial than anticipated”. The outcome is now out, in The American Statistician, with no fewer than 21 discussion notes to accompany it (mostly from people involved from the start, as far as I can gather).

The statement is made up of six principles, which are:

  1. P-values can indicate how incompatible the data are with a specified statistical model.
  2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
  3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
  4. Proper inference requires full reporting and transparency.
  5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
  6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
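Principle 5 in particular is easy to demonstrate with a toy simulation (a Python sketch; the effect sizes and sample sizes are made up for illustration). A negligible effect yields a vanishingly small p-value if the sample is large enough, while a substantial effect in a small sample may not:

```python
import math
import random

def z_test_p(sample):
    """Two-sided one-sample z-test p-value, assuming known sd = 1."""
    n = len(sample)
    mean = sum(sample) / n
    z = mean * math.sqrt(n)
    # Double the upper-tail probability of |z| under the standard normal
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
# Negligible effect (0.02 sd) measured with an enormous sample...
tiny_effect_huge_n = [random.gauss(0.02, 1) for _ in range(1_000_000)]
# ...versus a substantial effect (0.5 sd) measured with a small sample
big_effect_small_n = [random.gauss(0.5, 1) for _ in range(20)]

p_tiny = z_test_p(tiny_effect_huge_n)
p_big = z_test_p(big_effect_small_n)
print(f"negligible effect, n = 1,000,000: p = {p_tiny:.3g}")
print(f"large effect,      n = 20:        p = {p_big:.3g}")
```

The first p-value is essentially zero despite the effect being practically meaningless, which is exactly why a p-value alone says nothing about whether a result matters.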

I don’t think many people would disagree with much of this. I was expecting something a bit more radical – the principles seem fairly self-evident to me, and don’t really address the bigger issue of what to do about statistical practice. That question is addressed in the 21 comments though.

It probably says something about the topic that it needs 21 comments. And that’s also where the disagreements come in. Some note that the principles are unlikely to change anything. Some point out that the problem isn’t with p-values themselves, but the fact that they are misunderstood and abused. The Bayesians, predictably, advocate Bayes. About half say updating the teaching of statistics is the most urgent task now.

So a decent statement as far as it goes, in acknowledging the problems. But not much in the way of constructive ideas on where to go from here. Some journals have banned p-values altogether, which sounds like a knee-jerk reaction in the other extreme direction. I’d just like to see poor old p’s downgraded to one of the many statistical measures to consider when analysing data. Never the main one, and definitely not the deciding factor on whether something is important or not. I may have to wait a bit longer for that day.

Thoughts on finally reading Bad Pharma

I finally got round to reading Ben Goldacre’s book Bad Pharma. I’m not going to go into too much detail about the book, but if you have even a passing interest in medicine, public health and the costs of providing it, or your own health, I would highly recommend that you read it too. These are some of the random thoughts it raised in me.

First, you only had to know medical students at university, and see what freebies they got from drug companies, to know that something was up. And I’ve long been interested in publication bias. So I was already at least aware of most of the issues, but the book is still quite a catalogue of all kinds of rogue behaviours by many actors. Pharma companies misbehave of course, but so do drugs regulators around the world (I was probably most surprised by just how useless – and even worse – they are), professional bodies, journals and their editors, patient groups, doctors and academics. There is a lot of money going around, and therefore corruption both big and small, explicit and implicit. Nobody comes out too well in this story.

Second, since my background is in banking, I couldn’t help making comparisons. Bankers misbehave too, no doubt about it. Both industries are heavily regulated, but the regulators have in both instances been fairly comprehensively captured by industry interests. In bankers’ defence, when they fiddle LIBOR rates, some other financial company may lose a few million dollars, but when drug developers intentionally hide adverse data about their products, thousands of people will die. So why is there so much less outcry about pharma? It is probably harder to understand publication bias than lying about benchmark rates. And the deaths are isolated and hidden from view, whereas the financial crisis was very much visible on every high street.

And finally: in passing, Goldacre says something along the lines of “just because there are issues with medicine, it doesn’t mean that alternative medicine works”. Sure. But the converse holds as well: just because homeopathy doesn’t work, it doesn’t mean that (“traditional”, or whatever you want to call it) medicine necessarily works any better. To be clear, I don’t think there’s any physically plausible way that homeopathy could work. But you can also be prescribed medicines by your GP that are not much better than placebo, if at all. So when a lazy skeptic rants about homeopathy and “the scientific method”, they should always be reminded that there is science and then there’s cargo cult science. Medicine is beginning to look more and more like a cargo cult than the real thing – there are journals, trials, complex statistics etc., but if it’s all based on smoke and mirrors, then what do you really have that you can rely on? My attitude is to pay more attention to studies of how science is actually done, its history and sociology, and not just what scientists say in after-dinner speeches. The reality is always much messier than “hypothesis, test, replication”.