What Place Does Machine Learning Have in Mortality Modelling?
Human mortality is not a deterministic process (unless you happen to be living in a sci-fi dystopia where remaining lifetime is used as a currency). As such, future mortality rates are among the central assumptions in much of the advice that actuaries produce for their clients. An enormous amount of effort goes into incorporating the latest data into models of human life expectancy and predicting how it will evolve over time.
Although we do have a general understanding of the factors that cause people to live longer or shorter lives, there always appears to be a random element. With this random element comes longevity risk, something of particular concern for organisations whose cash flows depend on how long people live. For example, a pension scheme needs to have an idea of how long its members are going to live so that it can manage its liabilities effectively and make the right investment decisions. Similarly, a life insurer needs to predict policyholder mortality with reasonable accuracy so that it can set an appropriate level of premiums: high enough to make some profit, but still low enough to provide value to customers and remain competitive in the life assurance marketplace.
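To make that dependence concrete, here is a minimal Python sketch of how the expected present value of a simple annuity of 1 per year shifts when the assumed mortality rates change. Every number below (the mortality rates and the discount rate) is invented purely for illustration.

```python
import numpy as np

# Hypothetical, illustrative one-year death probabilities q_x for ages 65-69
# under a "lighter" and a "heavier" mortality assumption (made-up numbers).
q_light = np.array([0.010, 0.011, 0.012, 0.014, 0.016])
q_heavy = np.array([0.014, 0.016, 0.018, 0.021, 0.024])
discount_rate = 0.03  # assumed flat discount rate, for illustration only

def annuity_epv(q, rate):
    """Expected present value of 1 paid at the end of each year survived."""
    survival = np.cumprod(1 - q)                 # probability of being alive at each payment date
    v = (1 + rate) ** -np.arange(1, len(q) + 1)  # discount factor for each payment date
    return float(np.sum(survival * v))

print(annuity_epv(q_light, discount_rate))  # higher value: members expected to survive longer
print(annuity_epv(q_heavy, discount_rate))  # lower value: fewer payments expected on average
```

Even this toy calculation shows how directly the mortality assumption feeds through into the size of a pension scheme’s liabilities or the premiums an insurer needs to charge.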
Currently, the process for setting a mortality assumption may look something like this:
1. select a base table — an admittedly rather depressing list of the probabilities of an individual dying over each year of age;
2. choose a mortality projection — which will determine how the mortality rates at each age will evolve over time; and finally
3. perform additional analysis to tailor the chosen table and projection to match the particular demographic profile of the population of interest (a simplified sketch of these steps in code is given below).
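The sketch below is a highly simplified Python illustration of those three steps. The base table, the flat improvement rate and the scheme-specific adjustment factor are all invented for the example; a real exercise would use a published table and a proper projection model such as the CMI’s, rather than a single flat percentage improvement.

```python
import numpy as np

ages = np.arange(60, 66)

# Step 1: a hypothetical base table -- one-year death probabilities q_x at each age
# (illustrative numbers only, not taken from any published table).
base_qx = np.array([0.0060, 0.0067, 0.0075, 0.0084, 0.0094, 0.0105])

# Step 2: a crude projection -- assume mortality improves by 1.5% per year at every age.
# Real projection models allow improvements to vary by age, period and cohort.
improvement_rate = 0.015
years_projected = 10
projected_qx = base_qx * (1 - improvement_rate) ** years_projected

# Step 3: a scheme-specific adjustment -- e.g. assume this population experiences
# 95% of the projected table's mortality, reflecting its demographic profile.
adjustment = 0.95
scheme_qx = projected_qx * adjustment

print(dict(zip(ages.tolist(), np.round(scheme_qx, 5))))
```

In practice, the second step is where most of the modelling effort sits, and it is also where the assumptions are revisited most often.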
These base tables and projections are updated frequently (for example, the Continuous Mortality Investigation updates its projections annually). Further, new parameters are being added to the projection model to make it more flexible for assumption setters (for example, check out this briefing note on CMI_2018, published in March 2019, which sets out the introduction of a new “initial addition to mortality improvements” parameter). The fact that we are constantly revising our mortality models and estimates should be enough to demonstrate that forecasting how long people are going to live from historical data is genuinely hard. There is a seemingly uncountable number of variables related to lifestyle, socio-economic situation, and even genetics. What should we be doing to best capture the most significant factors that drive an individual’s life expectancy — and its evolution over time — in a way that allows us to generalise to larger populations?
Where Machine Learning Comes In
A data-driven machine learning approach would seem like a good fit for this kind of problem. Life expectancy is a well-studied phenomenon, and a great deal of data has been collected to facilitate our understanding of the “rules” of human mortality. Machine learning has given rise to a number of unsupervised algorithms specifically designed to do things like identify similar data points (clustering) and find patterns and correlations in data (which could be said to fall into the domain of association rule learning). One would be forgiven for thinking that demographers would be all over these emerging tools, using them in all kinds of creative ways to further our understanding of the complex relationships that govern life expectancy.
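As a flavour of what that unsupervised toolkit looks like in practice, here is a toy clustering sketch (assuming scikit-learn is available). The data are entirely synthetic: 100 hypothetical populations, each with 20 years of annual mortality improvement rates, drawn from two underlying patterns that the algorithm is asked to rediscover.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic improvement histories for 100 hypothetical populations:
# 50 "fast improvers" and 50 "slow improvers", 20 years each (made-up data).
fast = rng.normal(loc=0.025, scale=0.005, size=(50, 20))
slow = rng.normal(loc=0.010, scale=0.005, size=(50, 20))
improvements = np.vstack([fast, slow])

# Group the populations by the level and shape of their improvement histories.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(improvements)
print(labels)  # populations with similar improvement experience share a label
```

Grouping populations, scheme members or policyholders by similar mortality experience is exactly the kind of pattern-finding task these algorithms were built for.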
In reality, though, machine learning applications within the study of the changing structure of populations have been somewhat limited so far. In their 2019 paper, Levantesi and Pizzorusso propose that this lack of popularity is due to the fact that machine learning models are often seen as “black boxes” whose results are difficult to explain and interpret. Admittedly, this is a valid concern, and there is plenty of ongoing work and discussion within the machine learning community concerning the importance of explaining how and why an AI algorithm arrived at the conclusion it settled on, as well as research into practical methods for demonstrating to stakeholders that a model is making sensible and reasonable decisions.
Nonetheless, the researchers go on to summarise the contributions that machine learning approaches have made to mortality modelling thus far:
- Assessing and improving the goodness-of-fit of the estimates produced by standard stochastic mortality models (Deprez et al., 2017); and
- applying neural networks to identify significant factors in forecasting mortality and to extend standard mortality models (Hainaut, 2018; Richman and Wüthrich, 2018).
In the same paper, Levantesi and Pizzorusso demonstrate that they were able to capture patterns not identifiable within standard mortality models by introducing an “ML estimator”, a parameter identified by their algorithm as having predictive power. In doing so, they achieved improved forecasting quality when the outputs of their machine learning model were used to support standard mortality models.
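To give a feel for the general idea only (the data below are synthetic, and this is not the paper’s actual estimator, model or dataset), here is a toy Python sketch in which a tree-based model learns systematic patterns in how a made-up “standard model” misses the observed rates, and that learned factor is then applied as a multiplicative adjustment.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
ages = np.arange(50, 91)
years = np.arange(1990, 2016)
grid = np.array([(a, t) for a in ages for t in years], dtype=float)  # (age, year) pairs

# Pretend "standard model" rates and "observed" rates (synthetic numbers for the sketch);
# the observed rates contain a systematic, age-driven pattern the standard model misses.
model_qx = 0.002 * np.exp(0.1 * (grid[:, 0] - 50)) * np.exp(-0.01 * (grid[:, 1] - 1990))
observed_qx = model_qx * (1 + 0.1 * np.sin(grid[:, 0] / 10)) * rng.lognormal(0.0, 0.02, len(grid))

# Fit the ratio of observed to fitted rates as a function of age and calendar year.
ratio = observed_qx / model_qx
ml_adjustment = RandomForestRegressor(n_estimators=200, random_state=0).fit(grid, ratio)

# Adjusted rates: the standard model's output scaled by the learned adjustment factor.
adjusted_qx = model_qx * ml_adjustment.predict(grid)
```

The point is that the machine learning model never replaces the underlying mortality model; it only refines the fit where the standard structure systematically misses.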
You will notice that a key theme in the above papers is the use of machine learning to support, and not necessarily replace, traditional mortality modelling approaches. This strikes me as a sensible direction of travel. Domain experts from both machine learning and demography need to be able to communicate and work together if we want to deepen our understanding of the drivers of life expectancy.
Traditional and machine learning approaches have been famously at odds in the field of natural language processing (NLP), in which theory-driven and rules-based approaches were initially all but thrown out in favour of data-driven ones — so much so that it led to the famous quote from IBM researcher Frederick Jelinek: “Every time I fire a linguist, the performance of the speech recognizer goes up.” But even within the NLP community, there are signs that the pendulum could be swinging back the other way and there are questions being asked about whether we would be making more progress if more linguists were involved in NLP research (relevant TWiML talk).
It doesn’t take much imagination to see the parallels between NLP research and demography/mortality modelling: each field studies a system for which theory- and rules-based approaches yield (apparently) reasonable and useful results, while at the same time offering a great deal of data from which we can build perfectly adequate models without needing such a deep level of domain expertise.
I would propose that it is best to take a balanced view — there’s no use getting overly sentimental about hand-designed rules-based approaches if they keep getting outperformed by alternative methods (such as those provided by machine learning), but we should remain open-minded to the fact that we will always need domain experts to inform the ways in which we design and operate our models.
Conclusion
Machine learning and artificial intelligence have their place in identifying and forecasting mortality trends. It’s not up for dispute that these techniques are powerful tools for detecting patterns and associations in data, and that there is value in incorporating this kind of analysis into the existing, more classical approaches to mortality modelling. The best outcomes will arise only when a balance is found between the more classical, rules-based approaches and their more contemporary, data-driven machine learning counterparts.
Mortality modelling will always be relevant — unless a) you happen to have discovered the fountain of youth, b) you’ve figured out how to upload your consciousness into the cloud for all eternity, or c) you live in the aforementioned sci-fi dystopia (although this is potentially a fair trade if it also means you get to be Justin Timberlake). Our world is not a dystopia, but we have been faced with emerging trends in life expectancy that even the experts didn’t see coming, and we are also facing a host of new challenges as we come to terms with the consequences of our ageing populations.
We may well need advancements in mortality modelling to successfully manage the problems we will certainly face as a result of our changing demographic structures — and we’re only going to be able to do it if our experts are able to put aside their academic loyalties and work together for the good of everyone. So yes, we must be bold — we must adapt and explore new technologies — but we will quickly find ourselves lost if we don’t remember where we came from.
More info and credits
Andrew Hetherington is an actuary and data enthusiast working in London, UK. All views are Andrew’s own and not those of his employer. Connect with him on LinkedIn.
Paper discussed: Levantesi, S.; Pizzorusso, V. Application of Machine Learning to Mortality Modeling and Forecasting. Risks 2019, 7, 26.
Hourglass photo by Aron Visuals. Alarm clock photo by Icons8 Team. Both on Unsplash.