A few bad apples or rotten to the core?

An article in Nature on the ethics of bankers has been widely reported. The researchers asked bankers and non-bankers to toss a coin in private, and then report the numbers of heads and tails – the more tosses of the winning side they reported, the bigger the payout they received. This is a commonly used setting for studying honesty. Even though the researchers will never know how honest each participant was individually, the overall split of heads and tails should be 50/50 if everyone reports their results honestly, so any major deviation from the expected figures indicates some foul play. And on the whole, people tend to be remarkably honest in studies like this; definitely more honest (and therefore poorer as a result) than standard economic theory would predict.
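To make the logic concrete, here is a minimal sketch (in Python) of the kind of group-level check described above. All the numbers – tosses per person, number of participants, the reported total – are my own illustrative assumptions, not figures from the study.

```python
# Minimal sketch of the group-level honesty check described above.
# The counts below are made up for illustration; the study's own data
# and the exact number of tosses per person are not given in this post.
from scipy.stats import binomtest

tosses_per_person = 10      # assumed, not stated above
participants = 100          # assumed
reported_heads = 560        # hypothetical total of "winning" tosses reported

total_tosses = tosses_per_person * participants

# If everyone reports honestly, the reported total ~ Binomial(n=total_tosses, p=0.5).
# A significant excess of winning tosses at the group level suggests cheating,
# even though no individual can be identified.
result = binomtest(reported_heads, n=total_tosses, p=0.5, alternative='greater')
print(f"Reported share of winning tosses: {reported_heads / total_tosses:.2%}")
print(f"One-sided p-value under honest reporting: {result.pvalue:.4f}")
```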

An interesting twist in this particular study, however, was that half of the bankers were primed with questions about their work before the coin tossing; in effect, they were reminded of what they did and where they worked. And that changed the results significantly. Without the priming, the bankers were as honest as everyone else. But with the nudge about their livelihood, they became much less honest. The effect was clearly significant (in all senses of the word) in the overall average cheating figures.

But one thing that hasn’t been noted in the newspaper articles I’ve seen is the change in the distribution of the results. Did all bankers become a bit more dishonest, or did some become a lot more crooked? Well, the original article shows the full distributions for the control and treatment groups, and it seems to me that the answer is: a bit of both. The whole distribution of reported results shifts slightly, resulting in maybe 10% more in rewards than the participants should have received. But there is also a massive increase in the number of people who claim to have got nothing but heads (or tails, whichever gave them the reward).
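A quick back-of-the-envelope calculation shows why that spike at the maximum is so striking. Assuming ten tosses per person (my assumption; the number isn’t stated above) and a fair coin, an honest participant reports nothing but winning tosses only about once in a thousand times:

```python
# Back-of-the-envelope check on the "nothing but heads" reports.
# Assumes ten tosses per person (not stated in the post) and a fair coin.
from scipy.stats import binom

n_tosses = 10
p_all_wins = binom.pmf(n_tosses, n_tosses, 0.5)   # P(10 winning tosses out of 10)
print(f"P(an honest participant reports all wins) = {p_all_wins:.4%}")  # ~0.098%

# Expected number of all-wins reports among, say, 100 honest participants:
participants = 100
print(f"Expected all-wins reports in {participants} honest people: "
      f"{participants * p_all_wins:.2f}")          # ~0.1, i.e. essentially none
```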

In conclusion then, based on this study, most bankers in their natural habitat are probably a little bit naughty, but some of them are really quite seriously bad.


Eating behaviour change

With my Club Soda hat on, I attended some panel discussions at a Food Matters Live event last week. My first session was titled “Behaviour change: societal or personal accountability?”. There was a lot of talk about nudges, especially from a government policy point of view (possibly due to the backgrounds of the panel members). An interesting nugget mentioned by Michael Hallsworth from the Behavioural Insights Team was that bad examples spread better in human networks than good examples, since they give people a kind of social permission to misbehave if their friends and neighbours do so as well. At Club Soda we are not quite that pessimistic, believing rather that a social element is very important in changing human behaviours for the better.

The second session, “Health and wellbeing: the trillion dollar marketplace”, had some useful things to say about marketing, and behaviour change too. Adam Ismail from GOED said you should have just one big idea in your marketing, as including too much detail won’t be taken in or trusted. Someone added that you should not give too much detail to consumers, but rather just appeal to authority (“this product has been evaluated by doctors”). And Adam noted that every seafood initiative around the world has led to people eating less fish, which chimes with what we know about alcohol awareness: people in pubs with posters about the dangers of alcohol actually drink more, not less! In a similar way, rewards (e.g. insurance discounts) seem to work better than penalties (such as taxes).

On my way out, I chatted with the people at the Vegan Society stall. Their mission, or at least a part of it, is of course a type of behaviour change intervention as well: how do you get people to use fewer animal products? Not an easy one either.

As an aside, there were hardly any empty seats for the first session, whereas in the second one the venue was only about one third full. Is a bit of behavioural nudging really that much more exciting than a trillion dollars?

A start-up day job

I’m currently working more or less full-time for an exciting start-up company called Club Soda. The business aims to help people to change their drinking (of alcohol), whether they want to cut down, stop for a bit, or quit. Our v1 website went live just a couple of weeks ago, so it is all still very early stages, but there are some interesting behaviour change tools and techniques already on the site, with more to come over time.

My job title of “Behavioural economist” is a bit misleading in some ways, as I do a lot more than just behavioural stuff at the moment. But we have exciting plans to get further into developing and evaluating various methods of helping people change their drinking behaviours. I will keep sharing more of this here as it happens.

Or maybe most people just don’t understand statistics (p<0.01?)

Statistical hypothesis testing has always been close to my heart. I’ve long been critical of the use of p values, especially as most people seem to misunderstand their interpretation. I may even have failed a job interview once due to my stance on this. I suspect my interviewers didn’t believe that I knew what I was talking about when the subject came up.

This week, I read another two papers on this theme, one in finance and one in psychology. The first, Evaluating Trading Strategies by Campbell Harvey and Yan Liu, looks at the empirical evaluation of stock trading strategies. It is a nice illustration of the usual pitfalls of data mining – the bad kind, where you run so many tests that some are bound to be statistically significant by chance alone – and it has a useful discussion of ways of correcting for such multiple testing.
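Here is a toy simulation of that pitfall, entirely my own and not taken from the paper: evaluate a couple of hundred “strategies” that are pure noise, and see how many of them look statistically significant at the usual 5% level, before and after a simple Bonferroni correction.

```python
# Toy illustration (not from the paper) of the multiple-testing pitfall:
# evaluate many trading "strategies" that are pure noise and count how
# many look statistically significant by chance alone.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(42)
n_strategies = 200
n_days = 252                      # one year of daily returns

# Each strategy's daily returns are just zero-mean noise: no real edge.
returns = rng.normal(loc=0.0, scale=0.01, size=(n_strategies, n_days))

# Test each strategy's mean return against zero.
pvalues = np.array([ttest_1samp(r, 0.0).pvalue for r in returns])

naive_hits = np.sum(pvalues < 0.05)
bonferroni_hits = np.sum(pvalues < 0.05 / n_strategies)

print(f"'Significant' at 5% without correction: {naive_hits} of {n_strategies}")
print(f"Significant after Bonferroni correction: {bonferroni_hits}")
```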

The second, Kuhberger, Fritz & Scherndl’s Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size, samples a thousand published psychology articles and finds a negative correlation between sample size and effect size, as well as the usual clustering of results just within the statistically significant level. In other words, it suggests massive publication bias problems.
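That mechanism is easy to reproduce. The sketch below is my own toy version, not the paper’s method: the true effect is held constant, only “significant” studies get published, and a negative correlation between sample size and published effect size appears all by itself.

```python
# Toy simulation of the publication-bias mechanism: if only significant
# results get published, small studies need large observed effects to clear
# the bar, so published effect size ends up negatively correlated with
# sample size even when the true effect is fixed.
import numpy as np
from scipy.stats import ttest_ind, pearsonr

rng = np.random.default_rng(0)
true_effect = 0.2                          # constant true effect (Cohen's d)
published_n, published_d = [], []

for _ in range(2000):
    n = rng.integers(10, 200)              # per-group sample size
    treat = rng.normal(true_effect, 1, n)
    control = rng.normal(0, 1, n)
    if ttest_ind(treat, control).pvalue < 0.05:   # only "significant" studies get published
        pooled_sd = np.sqrt((treat.var(ddof=1) + control.var(ddof=1)) / 2)
        published_n.append(n)
        published_d.append((treat.mean() - control.mean()) / pooled_sd)

r, _ = pearsonr(published_n, published_d)
print(f"Published studies: {len(published_d)}")
print(f"Correlation between sample size and effect size: {r:.2f}")  # comes out negative
```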

These two papers aren’t anything revolutionarily new, as this stuff has been known and talked about for ages. And yet, lots of people still fall into these traps, especially when it comes to publishing their work and evaluating previous findings (for example when doing a meta-analysis). My first thought was that it must be due to institutional factors: academics must publish, so they will fish and mine and p-hack until they find something statistically significant to publish, and journals need to fill their pages somehow, so they stick to the old rules, even though everyone knows that it’s all a bit of a sham and not to be trusted.

But then I came across Hoekstra, Morey, Rouder & Wagenmakers’s Robust misinterpretation of confidence intervals. This one interviewed 120 psychology researchers and 442 students about their understanding of confidence intervals for a simple hypothesis test. Both groups were equally misinformed about the interpretation of the test results, such as confidence intervals and p values. And what’s more (quoting from the abstract): “Self-declared experience with statistics was not related to researchers’ performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever.”
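For anyone wondering what a 95% confidence interval does guarantee, here is a small simulation (my own sketch, not from the paper): over many repeated samples, roughly 95% of the computed intervals cover the true mean. That is all – it is not a 95% probability statement about any single interval you happen to have calculated.

```python
# What a 95% confidence interval actually guarantees: over many repeated
# samples, about 95% of the computed intervals cover the true mean.
# It does NOT mean a given interval contains the mean with 95% probability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_mean, sigma, n, n_experiments = 10.0, 2.0, 30, 10_000
covered = 0

for _ in range(n_experiments):
    sample = rng.normal(true_mean, sigma, n)
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = stats.t.interval(0.95, n - 1, loc=sample.mean(), scale=se)
    covered += (lo <= true_mean <= hi)

print(f"Coverage over {n_experiments} repeats: {covered / n_experiments:.1%}")  # about 95%
```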

So maybe there’s more to this than just publish or perish?

Big health data

Last week I attended the Operational Research Society’s Data Science: The Final Frontier – Health Analytics event (hashtag: #bighealth) at Westminster Uni. Two of the six presentations were worth noting.

Cono Ariti from The Nuffield Trust spoke about predictive risk modelling in health care. He mentioned the “Kaiser pyramid”, which is essentially the old 20/80 rule, slightly expanded: 3% of patients account for 45% of health care costs, and the next 13% are responsible for another 33%. Added together, that is 16% of patients accounting for 78% of costs – approximately 20/80!

And he made two important points to keep in mind with health analytics. First, just building a model is useless without corresponding interventions in place: if you identify patient segments, say, you also need suitable treatments available for them. And secondly, regression to the mean is a major issue in this area: many people get better by themselves, without any treatment at all. This complicates comparisons between treatment and no treatment, since a large number of patients in every group, treatment or control, may improve significantly, leaving any differences between the groups small and difficult to detect.
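Regression to the mean is easy to demonstrate with a toy simulation (my own sketch, with made-up numbers): select patients because their measured score is extreme at baseline, re-measure them later with no treatment whatsoever, and the group average still “improves”.

```python
# Toy illustration of regression to the mean: select patients for their
# high measured score at baseline, re-measure later with NO treatment,
# and the group average still "improves" purely through measurement noise.
import numpy as np

rng = np.random.default_rng(7)
n_patients = 100_000
true_severity = rng.normal(50, 10, n_patients)              # stable underlying state

baseline = true_severity + rng.normal(0, 10, n_patients)    # noisy measurement
followup = true_severity + rng.normal(0, 10, n_patients)    # second noisy measurement

# Select the "sickest" 5% by their baseline measurement, as a risk model might.
selected = baseline >= np.quantile(baseline, 0.95)

print(f"Selected patients, baseline mean:  {baseline[selected].mean():.1f}")
print(f"Selected patients, follow-up mean: {followup[selected].mean():.1f}")
# The follow-up mean is noticeably lower despite no intervention at all.
```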

The second interesting talk was more of a blue-sky horizon scan, from Rob Smith at IBM. He talked about the future of health analytics, noting the differences between people of different ages when it comes to tech, gadgets and privacy, and the health behaviours that follow. He also talked a bit about the data issues around genomics, and more about what IBM is doing with Watson. For example, it gets fed as much medical literature as possible, so that it can not only propose treatments to match symptoms, but also suggest new research avenues. Very impressive stuff, and potentially useful in things like cancer treatment, which is getting very complex. So much so, in fact, that my conclusion was to ask: is artificial intelligence now the only thing clever enough to handle modern medicine?