Alex McNeil is the Maxwell Professor of Mathematics at Heriot-Watt University, Edinburgh. He was formerly assistant professor at ETH Zurich. Professor McNeil has a BSc from Imperial College, London, and a PhD in mathematical statistics from Cambridge University. He has published papers in leading statistics, econometrics, finance, and insurance mathematics journals and is a regular speaker at international risk management conferences. He is joint author of the book Quantitative Risk Management: Concepts, Techniques and Tools (2005). Professor McNeil is the director of the Scottish Financial Risk Academy, which aims, in partnership with academia, to provide financial services companies with an efficient, streamlined framework for exploring complex issues of risk and related topics.
In 2010 you were instrumental in setting up the Scottish Financial Risk Academy, a forum in which academics with expertise in risk management work with the finance sector to enhance the understanding of various forms of financial risk. How has that progressed?
In September 2011 the Scottish Financial Risk Academy held its third twice-yearly colloquium, with speakers from academia, industry, and regulatory bodies. The focus of the colloquium was Solvency II, the new regulatory standard for the insurance industry. The implementation date for Solvency II is 2013, and institutions across the sector are currently trying both to understand how Solvency II will affect them specifically and to address a number of issues with Solvency II that have yet to be resolved. What came out of the colloquium was a clear sense that there are still one or two ongoing challenges with Solvency II. We looked at the outstanding technical issues and how they might best be addressed.
In particular there are a lot of questions about MCV, or market-consistent valuation. The regulator has well-known concerns about the procyclical effects of marking to current value, whereby you mark up the value of your liabilities when interest rates are low and mark down the value of your assets in falling markets. The net effect of this in a situation such as we saw in October 2011, when falling markets were accompanied by a continuing ultra-low interest rate regime, is of course to make the solvency position of any institution look considerably worse than it might turn out to be. The risk is that the view provided by marking to market may prompt dramatic interventions on issues that may well not require action. One result could be that a regulator ends up calling for capital ratios to be enhanced at exactly the time when this would hurt firms the most.
Does the fact that the life sector, for example, writes extremely long-term contracts pose particular challenges?
One of the characteristics of the insurance industry is that organizations have long-term liabilities stretching into the future, and how they value these liabilities can have a really material impact on views of their solvency. No one is arguing that the industry should move away from MCV, but it is recognized that certain kinds of liability are very hard to replicate or hedge, particularly when the maturity date for the liability could be 30 to 40 years in the future. It is clear that mathematical techniques have a very important contribution to make here.
Another debate in the industry is how to account for liquidity in the valuation of liabilities. There is still some debate as to whether liabilities can be valued in a way that reflects the fact that insurance companies will back their liabilities with the cheapest assets possible. They can pursue buy-and-hold strategies with less liquid assets, such as off-the-run securities rather than on-the-run, and use the asset cash flows to cover liability outgoings. Because these assets are cheaper for them to obtain, they want this taken into account in the valuation of long-dated liabilities. So the question arises as to what value to place on liquidity as an end in itself. There is a tremendous amount of literature on the theme of liquidity premiums.
In essence, the liquidity premium is the difference between the prices of a liquid and an illiquid asset, assuming the two to be otherwise equivalent. The more liquid asset is easier to deal in and has correspondingly lower dealing costs. It follows that liquid assets trade at a premium over more illiquid assets, and that premium, arguably, needs to be taken into account when matching assets against future liabilities. This is just one complexity among many in computing the solvency position of an insurance company.
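To give a rough sense of why this matters for long-dated liabilities, here is a minimal sketch in Python with purely illustrative numbers (a hypothetical flat curve and a hypothetical 50 basis point premium), showing how including a liquidity premium in the discount rate changes the present value of a payment due in 30 years.

    # Illustrative only: effect of a liquidity premium on a long-dated liability value.
    # Numbers are hypothetical; a real valuation would use a full yield curve.
    liability_cash_flow = 100.0   # payment due in 30 years
    maturity_years = 30
    risk_free_rate = 0.04         # yield on a liquid (e.g. on-the-run) asset
    liquidity_premium = 0.005     # extra 50 bps earned on an illiquid, buy-and-hold asset

    pv_without_lp = liability_cash_flow / (1 + risk_free_rate) ** maturity_years
    pv_with_lp = liability_cash_flow / (1 + risk_free_rate + liquidity_premium) ** maturity_years

    print(f"PV at the liquid rate:               {pv_without_lp:.2f}")
    print(f"PV with the liquidity premium added: {pv_with_lp:.2f}")
    # Over 30 years even 50 bps lowers the liability value by roughly 13%.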
Are you satisfied with the approach taken by Solvency II?
Solvency II is all about computing the organization’s capital and showing how the company’s balance sheet can change over a one-year horizon. To do this, you project the economy forward one year, and within your projections of the economy a whole distribution of scenarios arises for how fundamental risk factors are likely to impact the organization’s solvency. These risk factors include the yield curve for government bonds, the way credit spreads might develop through the year, and anything else that can have a material bearing on assets and liabilities.
Projecting these outcomes one year forward in time allows you to revalue the assets and liabilities that make up your balance sheet. Because there is no simple valuation formula for many balance sheet positions, you have to mark to model, based on your model assumptions, in a risk-neutral way. The most popular method today is to use Monte Carlo valuation techniques, which involve averaging payoffs and cash flows over many risk-neutral scenarios. The projection of values then becomes a problem in nested simulations, with each of the outer simulation points representing a scenario for the way in which the world might develop over one year and each of the inner simulations relating to a risk-neutral projection into the more distant future. Moreover, each of these scenarios is inherently multivariate: at the same time as you project forward a possible scenario for the different values that might comprise the yield curve one year on, you will also be looking at dependencies and relationships with stock market valuations, credit risk projections, and so on. This is a very complex, iterative mathematical calculation with many moving parts.
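A minimal sketch of this nested structure, in Python, might look like the following. Everything here is hypothetical and heavily simplified: a single equity risk factor following geometric Brownian motion, a toy guarantee-style liability paid at year ten, and made-up parameters, where a real internal model would involve many dependent risk factors and far richer dynamics.

    import numpy as np

    # Toy nested-simulation sketch: one equity risk factor, illustrative parameters.
    rng = np.random.default_rng(0)
    n_outer, n_inner = 1000, 2000        # real-world scenarios x risk-neutral paths
    s0, mu, r, sigma = 100.0, 0.06, 0.02, 0.20
    horizon, maturity = 1.0, 10.0        # one-year solvency horizon, liability due at year 10
    guarantee = 100.0                    # toy liability pays max(guarantee - index, 0) at maturity

    # Outer step: project the risk factor one year forward under a real-world measure.
    z_outer = rng.standard_normal(n_outer)
    s1 = s0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z_outer)

    # Inner step: for each one-year scenario, revalue the liability by risk-neutral simulation.
    tau = maturity - horizon
    liability_values = np.empty(n_outer)
    for i, s in enumerate(s1):
        z_inner = rng.standard_normal(n_inner)
        s_T = s * np.exp((r - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * z_inner)
        liability_values[i] = np.exp(-r * tau) * np.maximum(guarantee - s_T, 0.0).mean()

    # liability_values is now a distribution of possible year-one liability values.
    print(liability_values.mean(), np.percentile(liability_values, 99.5))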
The ideas that lie behind the famous Black–Scholes formula for valuing derivatives underpin much of the way the industry approaches this problem. However, with complex insurance products—such as unit-linked products where payouts are linked to stock market returns—there is no simple formula that you just plug the numbers into, in the way that options traders, for example, use Black–Scholes to generate values for call and put options. We are dealing with products like variable annuities, where you have to compute the discounted payouts by simulation. There is no way of avoiding the fact that in projecting values one has to grapple with the problem of nested simulations.
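For comparison, the plug-in formula in question can be stated in a few lines; the sketch below is the standard Black–Scholes price of a European call and put (no dividends, constant volatility). The point of the remark above is precisely that nothing this compact exists for typical variable-annuity guarantees.

    from math import exp, log, sqrt
    from scipy.stats import norm

    def black_scholes(s, k, r, sigma, t):
        """Standard Black-Scholes prices of a European call and put."""
        d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
        d2 = d1 - sigma * sqrt(t)
        call = s * norm.cdf(d1) - k * exp(-r * t) * norm.cdf(d2)
        put = k * exp(-r * t) * norm.cdf(-d2) - s * norm.cdf(-d1)
        return call, put

    # Example: at-the-money option, 20% volatility, 2% rate, five-year term.
    print(black_scholes(100, 100, 0.02, 0.20, 5.0))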
Nested simulations can become very complex very quickly. How tractable is this for modeling the risks facing the insurance sector?
This is a problem that is particularly pertinent to the insurance sector but is not unique to it. It also occurs in banking, where you have to project the value of derivative contracts including, for example, structured credit instruments such as collateralized debt obligations (CDOs). Some people argue that it is a problem that will simply go away as computers get more powerful. The argument is that all that is required is sufficient computing power to run, say, 10,000 outer scenarios with 10,000 inner simulations each, which gives you 100 million paths.
However, at present, finding workable mathematical alternatives to brute-force calculation is both attractive and sensible. The brute-force approach means tying up computer resources for long runs, whereas using, say, the least-squares Monte Carlo approach is a neat alternative that delivers acceptable approximations to what you would get with brute force.
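A rough sketch of the least-squares Monte Carlo idea, using the same toy single-risk-factor setup as above (all parameters illustrative): rather than running a large inner simulation for every outer scenario, you run only a couple of inner paths per scenario and then regress the resulting noisy value estimates on the scenario risk factor, using the fitted regression as a smoothed proxy for the liability value.

    import numpy as np

    # Least-squares Monte Carlo sketch (illustrative; same toy guarantee as before).
    rng = np.random.default_rng(1)
    n_outer, n_inner = 20000, 2          # many scenarios, very few inner paths each
    s0, mu, r, sigma = 100.0, 0.06, 0.02, 0.20
    horizon, tau, guarantee = 1.0, 9.0, 100.0

    # Outer real-world scenarios for the risk factor at the one-year horizon.
    s1 = s0 * np.exp((mu - 0.5 * sigma**2) * horizon
                     + sigma * np.sqrt(horizon) * rng.standard_normal(n_outer))

    # Cheap, noisy inner estimates of the liability value in each scenario.
    z = rng.standard_normal((n_outer, n_inner))
    s_T = s1[:, None] * np.exp((r - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * z)
    noisy_value = np.exp(-r * tau) * np.maximum(guarantee - s_T, 0.0).mean(axis=1)

    # Regress the noisy values on a polynomial basis in the scenario risk factor
    # and use the fitted curve in place of full inner simulations.
    coeffs = np.polyfit(s1, noisy_value, deg=3)
    smoothed_value = np.polyval(coeffs, s1)
    print(np.percentile(smoothed_value, 99.5))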
Remember that the primary goal of Solvency II is to demonstrate that 99.5% of the time the organization is going to remain solvent over a one-year time frame, given a wide range of potential impacts and scenarios. Solvency II takes as its base criterion the 200-year event—in other words the organization should be able to survive the kind of adverse alignment of negative forces that is likely to occur only once every 200 years, a 0.5% annual probability corresponding to a return period of 1/0.005 = 200 years. This is comparable to the capital standards used in banking and is the standard of prudence that European regulators feel is right for the insurance sector. Taking this as their guideline, the requirement is then to compute the capital buffer that will enable the organization’s balance sheet to withstand this once-in-200-years event over the course of a one-year period. The probability of the organization becoming insolvent, in other words, should be less than 0.5%.
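In terms of a simulated distribution of one-year outcomes, this criterion amounts to a 99.5% value-at-risk calculation. A minimal sketch, assuming you already have simulated one-year losses in own funds (assets minus liabilities), for instance from a nested or least-squares Monte Carlo run like those sketched above:

    import numpy as np

    # Hypothetical simulated one-year losses in own funds; in practice these would
    # come from the internal model rather than a normal distribution.
    losses = np.random.default_rng(2).normal(loc=0.0, scale=50.0, size=100_000)

    # Capital requirement as the 99.5% quantile of the loss distribution: enough
    # capital to absorb all but the worst 0.5% of one-year outcomes.
    scr = np.percentile(losses, 99.5)
    print(f"Capital requirement (99.5% one-year value-at-risk): {scr:.1f}")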
This is the famous Pillar One of Solvency II’s three-pillar framework. Does Pillar One, the internal model, come down entirely to mathematics? And if it does, what is tractable and what is not?
Recreating complex reality exactly is not tractable, not with our current state of technology and knowledge. So then the question is, which approaches to modeling reality, as far as insurance companies are concerned, are both tractable and practical, and at the same time sound enough for the industry to found a regulatory framework on their use?
Incidentally, the second pillar of Solvency II concerns the risk culture of the firm, that is, whether it has a good risk governance structure, and the third pillar is disclosure and transparency. You should be able to look up an organization’s risk numbers and see how much risk it bears and how it measures that risk.
Solvency II gives insurance companies two options. They can use the standard formula approach to calculate their capital requirements. If they do this, they don’t have to do too much modeling and can just apply the formula mandated by their regulator. This formula looks at 200-year stress events for various factors, from equity market events to credit risk events, natural catastrophes, and life underwriting events—where an event would be, for example, a sudden dramatic lengthening of life expectancies, which would create much larger liabilities.
Under the standard formula approach you work out the impact of all of these and then aggregate the results using a formula. However, many companies feel that this approach overstates their capital requirements quite drastically. They prefer the second approach, which permits companies to develop an internal model. A few countries seem very keen on mandating the standard formula. The United Kingdom is content to allow companies to develop internal models, which are then approved by the regulator.
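The aggregation step just mentioned works, in outline, by combining the stand-alone capital charges for the individual risk modules through a correlation matrix. The sketch below uses the square-root aggregation formula on which the standard approach is built, but with made-up module charges and an illustrative correlation matrix rather than the values prescribed in the regulation.

    import numpy as np

    # Standard-formula-style aggregation sketch. Module charges and correlations
    # are illustrative only; the actual values are prescribed by the regulation.
    scr_modules = np.array([120.0, 80.0, 60.0, 40.0])   # e.g. market, life, credit, catastrophe

    corr = np.array([
        [1.00, 0.25, 0.25, 0.25],
        [0.25, 1.00, 0.25, 0.25],
        [0.25, 0.25, 1.00, 0.25],
        [0.25, 0.25, 0.25, 1.00],
    ])

    # Aggregate charge sqrt(s^T C s), which is below the simple sum of the modules
    # because the correlations are less than one (a diversification benefit).
    scr_total = np.sqrt(scr_modules @ corr @ scr_modules)
    print(f"Sum of stand-alone charges: {scr_modules.sum():.1f}")
    print(f"Aggregated capital charge:  {scr_total:.1f}")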
How does Solvency II relate to past efforts to regulate the sector?
Basel I, the initial regulatory capital standard for the banking sector, was very much based on a standard formula. Basel II allowed some internal modeling, but was quite limited in its approach. It allowed banks to have an internal value-at-risk model that analyzed the organization’s market risks and an internal model for the losses that might occur given defaults in its lending portfolios. What you could not have was a model that showed how these various risks might interact with each other.
Solvency II takes a different approach. It lays down principles, but it does not mandate the models to be used. This freedom to explore interrelationships between important variables, or between variables that may prove to be important, is crucial. The global financial crisis of 2008 taught us that even quite liquid markets can become illiquid and lose their depth under certain conditions. So Solvency II allows room for interpretation. At the Scottish Financial Risk Academy’s latest colloquium we had some 100 representatives from some of the largest companies in the insurance industry, and the general impression of Solvency II so far was quite favorable.
However, as I said, there are still challenges to be resolved, not least of which is defining a reliable price for assets and liabilities. Undoubtedly the best approach to valuing liabilities is to look at what you would have to pay a third party to come in and take over those liabilities. Valuation according to transfer value is therefore one of the ideas in Solvency II, but that still leaves open the question of how best to compute this value. If I offered to take over your liabilities for $200 billion, you would need some way of knowing whether this represents a fair price or an unreasonably exaggerated one. The basic principle is that the transfer value would comprise the price of the assets required to replicate the liability cash flows plus a risk margin required by the party taking on the liabilities. But of course perfect replication of liabilities is not possible, and the risk margin is also a difficult quantity to determine.
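One common way to make the transfer-value principle concrete is a cost-of-capital style of risk margin, in which the margin is the discounted cost of holding the capital needed to support the liabilities as they run off. The sketch below is purely illustrative: the cash flows, projected capital requirements, discount rate, and cost-of-capital rate are all made-up inputs, not figures taken from the regulation.

    import numpy as np

    # Illustrative transfer value: best-estimate liability plus a cost-of-capital
    # risk margin. All numbers below are hypothetical.
    discount_rate = 0.03
    cost_of_capital_rate = 0.06

    liability_cash_flows = np.array([50.0, 50.0, 50.0, 50.0, 50.0])  # run-off payments
    projected_capital = np.array([40.0, 32.0, 24.0, 16.0, 8.0])      # capital held each year

    years = np.arange(1, len(liability_cash_flows) + 1)
    best_estimate = np.sum(liability_cash_flows / (1 + discount_rate) ** years)

    # Risk margin: discounted cost of holding the projected capital over the run-off.
    risk_margin = np.sum(cost_of_capital_rate * projected_capital / (1 + discount_rate) ** years)

    transfer_value = best_estimate + risk_margin
    print(f"Best estimate: {best_estimate:.1f}, risk margin: {risk_margin:.1f}, "
          f"transfer value: {transfer_value:.1f}")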
A lot of people are working hard to understand the technical, mathematical details that underlie the models that Solvency II requires. It is a given that there is quite a technical core to Solvency II, irrespective of whether you go down the standard formula approach or the internal model approach.
Do you feel that the regulators proposing Solvency II have a good grip on the challenges involved?
What is clear so far is that the regulators behind Solvency II are very knowledgeable. They have the support of a number of actuaries, and my impression as a mathematician is that they are well up to speed with the technical issues.
What we are seeking to do at the colloquia held by the Risk Academy is to fill a need for technical presentations that address all the components of solvency capital modeling, and these presentations are of benefit to both regulators and the industry. Of course, there is an unavoidable gap between the technical core of the internal models and the second pillar, which goes to the way in which the board of the organization implements the risk governance regime in the organization. One of the things that went so disastrously wrong in the financial crash, as we can now see clearly, is that the boards of major banks did not have a grasp of how badly their organizations were mispricing and undervaluing the risks they were assuming.
How the information derived in the internal models in Solvency II is passed to the board, and how this information then informs the risk governance profile of the organization, is an extremely interesting, but as yet untested, part of the regulatory mechanism. I have some confidence that major insurance companies are already having the right kinds of conversations about this issue. The role of the chief risk officer will be pretty crucial here, and we had an excellent presentation at the colloquium from Tom Wilson, the chief risk officer at Allianz, and interesting insights during the panel discussion from Raj Singh, the founder of the Chief Risk Officer forum.
There is no doubt that as we progress with the implementation of Solvency II across the industry, our understanding of risk and valuation will be considerably enhanced. Whether this will enable the sector to navigate successfully a real 200-year event when it happens remains to be seen. That will be the real test of the effectiveness of Solvency II and all the good work that has been put into it.