Last week I attended the Operational Research Society’s Data Science: The Final Frontier – Health Analytics event (hashtag: #bighealth) at Westminster Uni. Two of the six presentations were worth noting.
Cono Ariti from The Nuffield Trust spoke about predictive risk modelling in health care. He mentioned the “Kaiser pyramid”, which is the old 20/80 rule, slightly expanded: 3% of patients account for 45% of health care costs, and the next 13% for another 33%. Added up, that is 16% of patients driving 78% of costs, so approximately 20/80!
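The arithmetic behind the pyramid is simple enough to sketch. This isn't from the talk, just a quick check of the quoted tiers, each written as a (patient share, cost share) pair:

```python
# Kaiser pyramid tiers as quoted: (share of patients, share of costs).
tiers = [(0.03, 0.45), (0.13, 0.33)]

patients = sum(p for p, _ in tiers)  # cumulative share of patients
costs = sum(c for _, c in tiers)     # cumulative share of costs

print(f"Top {patients:.0%} of patients account for {costs:.0%} of costs")
# → Top 16% of patients account for 78% of costs
```

Close enough to 20/80 for a rule of thumb.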
He also made two important points to keep in mind with health analytics. First, building a model is useless without corresponding interventions in place. In other words, if you identify patient segments, say, you also need suitable treatments available for them. And second, regression to the mean is a major issue in this area: many people get better by themselves, without any treatment at all. This complicates comparisons between treatment and control groups, since a large number of patients in both may improve significantly, leaving any difference between the groups small and difficult to detect.
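The regression-to-the-mean effect is easy to demonstrate with a toy simulation (my own illustration, not from the talk): give each patient a stable "true" severity plus measurement noise, flag the apparently sickest 10% at baseline, then simply remeasure them with no treatment at all. Their average score falls anyway, because noise helped put many of them in the flagged group in the first place.

```python
import random

random.seed(1)

# Hypothetical population: true severity ~ N(50, 10), plus N(0, 10)
# measurement noise on every observation.
true_severity = [random.gauss(50, 10) for _ in range(10_000)]

def measure(true_vals):
    return [t + random.gauss(0, 10) for t in true_vals]

baseline = measure(true_severity)

# Flag the apparently sickest 10% at baseline -- the kind of high-risk
# segment a predictive model might select for intervention.
cutoff = sorted(baseline)[-1000]
selected = [i for i, b in enumerate(baseline) if b >= cutoff]

# Remeasure the flagged group, with NO treatment given.
followup = measure([true_severity[i] for i in selected])

mean_baseline = sum(baseline[i] for i in selected) / len(selected)
mean_followup = sum(followup) / len(followup)
print(f"flagged group at baseline:  {mean_baseline:.1f}")
print(f"same group at follow-up:    {mean_followup:.1f}")
```

The untreated group "improves" by several points purely through regression to the mean, which is exactly why a treated group's improvement can't be read at face value without a control.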
The second interesting talk was more of a blue-sky horizon scan, from Rob Smith at IBM. He talked about the future of health analytics, noting how people of different ages differ when it comes to tech, gadgets, and privacy, and the health behaviours that follow from that. He also talked a bit about the data issues around genomics, and more about what IBM is doing with Watson. For example, it gets fed as much medical literature as possible, so that it can not only propose treatments to match symptoms, but also suggest new research avenues. Very impressive stuff, and potentially useful in things like cancer treatment, which is getting very complex. So much so, in fact, that I was left wondering: is artificial intelligence now the only thing clever enough to handle modern medicine?