Friday, 17 May 2013

Internal Model Validation - the "desire for certainty"

Some useful snippets in the links here for anyone in the model validation space, whether it be the practical application of Monte-Carlo simulation outside of the insurance industry, or actuarial perspectives on model risks themselves, such as parameter uncertainty and goodness-of-fit testing, where "the problem is more often too many candidate distributions" rather than a restricted choice.
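
To make the "too many candidate distributions" point concrete, here is a minimal sketch (my own, not taken from the linked articles) using Python's scipy.stats: several plausible severity distributions are fitted to the same simulated loss data, and more than one of them will typically look acceptable on a Kolmogorov-Smirnov test, leaving the selection to judgement rather than to the data. The loss data and candidate set below are purely hypothetical.

```python
# Illustrative sketch only: fit several candidate severity distributions to
# the same (simulated) loss data and compare goodness of fit. The point is
# that more than one candidate will often "pass", so distribution choice
# becomes a judgement call rather than something the data settles for you.
import numpy as np
from scipy import stats

rng = np.random.default_rng(17)
losses = rng.lognormal(mean=10.0, sigma=1.2, size=250)  # hypothetical claims data

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
    "pareto": stats.pareto,
}

for name, dist in candidates.items():
    params = dist.fit(losses)                        # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(losses, dist.name, args=params)
    log_lik = np.sum(dist.logpdf(losses, *params))
    aic = 2 * len(params) - 2 * log_lik              # crude AIC comparison
    print(f"{name:10s}  KS p-value: {p_value:.3f}   AIC: {aic:,.0f}")
```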

This fantastic blog post from one of Willis's finest is about as blunt a critique of actuarial modelling activity and its potential for subsequent misuse as I have read, and I would strongly recommend it to non-expert risk practitioners who may one day find themselves in the model validation/use test firing line. A few of the pearls of wisdom offered (focused on the reinsurance industry, but relevant to all) include:
  • How the human "want to believe" and "desire for certainty" can lead to models making rather than guiding decisions
  • Reliance of models on "large numbers of heroic assumptions"
  • "Data is always limited and flawed"
  • That "models take combinations of assumptions and torture them to come to conclusions"
  • The revisiting of assumptions only when the answers don't fit expectations ("euphemistically called 'calibration'", hilarious!)
  • That using models for setting regulatory capital, rather than just informing decision making, has led to "extremely onerous" IMAP activity, i.e. the limitations noted above are so well established that regulators cannot ignore them at a granular level.

Then there is this piece from Deloitte US on model validation, or more specifically on research into the quality of existing actuarial modelling controls, which is an eye-opener for anyone working in the validation space. With RMORSA and the associated capital modelling firmly on the agenda Stateside, it is interesting to watch how aggressively they approach validation, bearing in mind this work was commissioned by the Society of Actuaries, whose members may ultimately be charged with applying some of these recommendations!

This research in particular assesses current-state versus best-practice controls over the assumptions, inputs and outputs of actuarial models, and though the sample of survey respondents is relatively small (representing "30 unique companies"), the absence of suitable supporting documentation around model governance, so evident in the UK's IMAP process, appears to be a depressingly constant theme. The report at least includes recommendations as to how the US actuarial profession might bridge some of the gaps Deloitte identify.

The NAIC's ORSA Manual asks that the "ORSA Summary Report should provide a general description of the insurer’s process for model validation, including factors considered and model calibration" (p7), which I guess is what one would expect to see in the EU (i.e. validation being a sub-process of the ORSA, which can be summarised in the ORSA reports). That said, the breadth of validation work performed over there will surely be driven by S&P expectations communicated in ERM Level III reviews, rather than by profession-sponsored consultancy recommendations!


Finally (and slightly off track), an odd piece from Towers Watson on validating ORSAs, pitched to a room full of Internal Auditors. It would be unfair to say there aren't some salient points throughout, but given that there is "no clear requirement" to validate ORSAs (there was something on the matter in the original CEIOPS ORSA pre-consultation, but it was dropped in the public consultation and the final advice), you would think it could be covered in fewer than 30+ slides!

As it happens, the TW slide pack for internal model validation appears to have been raided and had the acronym 'ORSA' jemmied into the text for much of the second half of it.
