Thursday, 28 May 2015

IRM on Internal Model Validation - Red Card or Green Card?

Cyclic Validation - Quelle horreur...
Just back from Paris, where I spent a weekend queueing behind selfie-taking tourists before taking out a second mortgage to buy bottled water. A beautiful place, though I found Depardieu was much quieter off-screen...

Onto the topic in hand, the PRA were pretty vicious back in the day on Validation efforts in their infancy, with Julian Adams lambasting both progress ("significantly behind") and validation scope ("narrow"). Given that the Solvency II sabbatical which bridged half of 2012 and all of 2013 gave firms time to catch up and widen, you might think that those with internal model ambitions would be pretty tidy by now. The PRA have even told firms what they believe "good" model application paperwork should look like, carving out for themselves and the Validators of the world an easy-to-read "model reviewer" level of detail (p1).

In those salad days, Internal Model Validation felt to me like it would be the chernozem of the nascent Risk Management profession in insurers; a skill set that a quant or a non-quant could acquire, apply, and ultimately ease through the promotional path within insurance entities, given the depth and breadth of technical and strategic information the process challenges...

...but the moves never came. Despite the actuarial world itself happily disassembling the complexities of quantitative modelling into easy-to-digest IM Validation themes, the non-quant world has waited patiently to see if anything of substance would emerge from one of its representative bodies.

And this week it arrived! The Institute of Risk Management has delivered, as part of its Internal Model Industry Forum (IMIF), a white paper on the validation cycle.

The IRM have been active in this area prior to the formation of the IMIF. I covered an ERM in Insurance event from the start of 2014 here, while this more voluminous slide pack featuring a number of the Billy and Betty Big Biscuits of the field emerged in the summer of last year, when the IMIF seemed to come to fruition. This white paper itself appears to move along the concepts and ideas inside an IRM slide deck from last Christmas.

Given that the IRM is not-for-profit, there is always a likelihood that sponsors will unduly influence the products (indeed the IRM Chair notes in this that they rely on "enlightened industry support" to knock these documents out).

Sadly in this case, the sponsors include three of the "Big 4" (with the fourth on the IMIF steering committee), leaving the document dripping with consultancy hallmarks rather than pragmatic solutions to execute the tasks in hand.

That view is reinforced somewhat by this follow-on presentation to the IMIF from last week by this white paper's workstream lead and supporting consultant - one selected industry comment on slide 8 (presumably from a chocolate bar shortly before it ate itself) reads, "validators should really be experienced modellers"!

A few general points jump out of the white paper:
  • That a firm's IM is "...at the heart of risk and capital evaluation" - I thought it was supposed to "inform" this evaluation, not dominate it (slide 3 here, as well as Julian Adams's speech from a couple of years ago [p4]).
  • Is the insurance industry "...increasingly reliant on sophisticated models"? Maybe in terms of AUM/Market Cap, but given the UK IMAP queue is down to approximately 40 firms out of over 400 (p4), and that number has steadily reduced over the last 3 years, this feels a touch disingenuous. I've no doubt the firms represented on the Steering Group are "...increasingly reliant" though.
  • The document claims to set out "best practice principles" - not sure if "practice" and "principle" share the same bed, but that aside, would anyone find it remotely acceptable to have the consultancy world fund a document which details "best practice" on IM Validation?

And a few stand out elements from the proposed Validation Cycle, which is heavily influenced by EIOPA's guidelines:
  • "Best practice now requires firms to demonstrate, with evidence, that the cycle...[is] being actively and effectively carried out" - how can best practice "require" anything from anyone?
  • "...resulting best practice that is emerging" (p4) - how is any practice considered "best" at this stage of proceedings, when we are literally practising! Against what criteria?
  • References to "model risk impact assessment" and the "model risk assessment process" (p5) seem to come from nowhere. They allude to something formal, but it is not very clear what.
  • A lot of coverage of "triggers" of IM Validation, which feels like a fishing expedition for the paper sponsors rather than a direct address of L2 Art 241 - the number of areas of "change" to consider as IM Validation triggers covers pretty much any change, anywhere, both inside and outside of an insurer (p8)! Most would also be ad-hoc ORSA triggers in my experience, so this potentially sets up insurers for a bucketload of work every time they hear a pin drop.
  • Formulaic and periodic IM Validation a "needless cost"? Surely periodic validation, no matter how badly executed, is compulsory (L1 Art 125)?
  • The Trigger Impact Assessment stage (p10) is barely intelligible - "The trigger impact assessment against model risk appetite stage" - and terminologically it is all well above legislative requirements.
  • "Unexpected triggers" (p12) get a mention. Again, not making sense to me - you either know your triggers or you don't.
  • "Model validation is complex" and "less than black and white" (p16) - it certainly is if you try and follow this process! A focus on plain questions and less quant can only help the model's non-expert users (slide 7).
  • If the validation cycle, processes and execution are "continuously evolving" (p18), are they reliable? It feels difficult to meet L2 Art 241.3, at least from a planning and execution perspective, if the process is constantly being tinkered with.
  • "Developing a communications strategy" (p20) as part of the validation scoping and planning stage feels terribly over-elaborate.
  • "Robust planning" expected to be common (p22), which doesn't necessarily marry up with the expectation of dynamic rather than cyclic validation in future (p10)
I think it is right to take the hump to a certain extent here. The PRA have been cunningly silent on capital add-ons to date, but given the implication that they will not be applied and renewed ICG-style (slide 13), there are likely to be many more less-monied Partial IM applicants to follow over the next couple of years. Having the most influential consultancy firms decide on what is "best" in the validation world (and for it to have this many bells, whistles and legislative off-roads) feels like setting those firms up for either a fall, or another bill.

The PRA actually delivered something with much less padding to the IRM back at the end of 2013, so I'm struggling to see how this turbo-charged version is justified. Given the PRA have three of their finest involved with the IMIF, but are continuing to be directly vocal on this topic (as recently as March 2015), it sends a worrying message to the capital add-on brigade that the IMAP early birds will be setting disproportionately high bars for 2017 and beyond when they deliver their PIMs.

Ultimately, I was disappointed by the publication, which reads more like a flannel manual, and is certainly not the kind of Risk Profession contribution that the topic so badly needs if the PRA's dreams of Boards "directing" and "owning" the IM validation process (slide 9) are ever going to come true. The 200 page novella world of Validation Reporting feels closer than ever...
