Models - seeking validation
The Institute of Risk Management's rebadged Solvency II Special Interest Group held a get-together in December on Internal Model Validation, and a couple of interesting documents have emerged.
One, from a PRA Validation guru, is particularly useful for anyone in the IMAP space with any uncertainty about the PRA's approach to assessing the quality of one's validation processes and reports, covering:
- The purpose of validation
- The PRA's approach - Life and GI-specific SME groups
- An overview of IMAP findings - 40+ reviews conducted to date, along with what your evidence "should" demonstrate
- Their observations - noting that Validation Reports are generally deficient
- A schematic view of what they consider the Validation process to be
- A bullet point list of how validation effectiveness can be demonstrated
The other useful benchmarking item from this event is the survey on participants' experiences to date in the validation field. While the sample is small in absolute terms at 18 respondents, and a touch heavy on the GI side (over half of respondents), this is as good a benchmarking aid as you will see for a while, so it is worth noting the following:
- Just over half have transitioned Validation into BAU
- Only 3 respondents had a Validation-specific steering committee to help govern the process, with others choosing to use existing committees or the CRO/Risk function
- Over 80% use the SCR contribution of each risk driver to determine the depth of validation activity. Other determinants include regulator feedback, risk registers and previous validation reports
- Over 40% say that their independent validation work is identifying flaws in their "dependent" validation work, while over a quarter say that independent validation has been scaled back due to the quality of "dependent" work!
- Fewer than a quarter say that validation of external models (ESGs etc.) has been effective
- Around two-thirds use peer review and sensitivity analysis to validate expert judgements. Horrifically, two respondents said they haven't been able to validate expert judgement at all!
- Over half are still using external contractors/consultants for independent validation
- Page 12 covers the popularity of certain sections of a Validation Report. Fewer than half include a section on benchmarking
- Most are keeping the Validation Report to under 100 pages, with management feedback being the main catalyst for changing the length.
The findings appear to be positive on the whole, with most firms saying they are at least halfway towards their "ideal process". Unsurprisingly, dependency modelling and validating expert judgement make the list of "key challenges" remaining.