
Wednesday, 26 March 2014

Solvency II Delegated Acts available online (kind of...), plus EIOPA's plans for 2014/15

So let's start with something a bit unexpected - DRAFT DELEGATED ACTS! ONLINE!

I'm not sure who the leaky uploader is (appears to be a Spanish consultancy firm), but the document is very much online. Sadly, it is only the January 2014 version which, as you will see in the rest of the post, has just been superseded, but it definitely pairs up with the version currently doing the rounds, I promise!

I have managed to get a sneak preview of the latest version of this document (dated 14th March), which has seemingly burst the banks of the tightly-knit circle of advisors and is now no doubt winging its way to a Solvency II Programme Director near you! There are "tracked changes" on the March document now circulating, which only appear to cover changes since the emergence of the January document hyperlinked above.

Lord help anyone who wants to trace it back to the more familiar 2011 (unpublished) draft; you might as well draw a load of foxheads on sticks...

Insurance Europe were obviously part of the privileged few for the March revisions, hence they fired out this missive last week regarding all of the Pillar 1 technical areas which they feel (on behalf of the industry) remain deficient. There are no real surprises in their list - it is the same topics which have been on the whinge-list since EIOPA's LTGA last year, and indeed earlier in the case of the Currency Risk approach and Own Funds classification.

Following on from the draft Delegated Acts being made more widely available, there has been a reasonable amount of noise in the paid-for press (here, here and here for subscribers), as well as Insurance Europe's top man having a lobbying call published in the FT (here).

Being more of a Pillar 2 man myself, I thought I would check to see what, if anything, had been tweaked in my areas of interest. The impression given earlier this year was that little had changed outside of the Long-Term Guarantee elements, and that was certainly true if you compared the November 2011 and January 2014 documents.

However, having examined the amendments in the March 2014 version, I have found that a few areas of governance (both SOG and Internal Model governance) which were previously untouched have actually received a fair bit of treatment, for example;
  • Changes to the requirements for internal audit function holders not to cover multiple control functions (this constraint has been removed). This is presumably to pacify the smaller firms across Europe who have a Risk/Compliance/Internal Audit multi-tasker, so textbook "three lines of defence" have taken a bath in the interests of proportionality.
  • The devil remains in the detail though, as the amended text allows someone to "carry out" more than the IA function, but seems to stop shy of them "taking responsibility" for other functions. Not sure how that will work in practice.
  • Changes in the IM Validation space, in particular the removal of the requirement for a "Validation Policy". Fair to say most firms in IMAP would have produced one of these at least a year ago now (plenty of industry references here, here, here (p8) and here for example!), and while still a document of merit, does a "validation policy" now constitute gold-plating?
  • Changes in the required Internal Model Documentation, targeting a much slimmer set of compulsory documents. This includes replacing a number of "policies" with "descriptions of...", which will no doubt be well received by those supervisors with multiple internal models to assess over the next 18 months!
  • The tiered timescales for submitting QRTs, SFCRs and RSRs have now moved into the Directive, via Omnibus II text (as opposed to having been deleted, which is what it looks like at first glance!)
  • A few of the other TSIM articles (Tests and Standards for Internal Model Approval) have been enhanced. "At least quarterly..." assessment of the IM's coverage of material risks is now specified, for example. Quite how the hard-coding of that regularity cramps your actuaries' style is another thing!
I strongly suggest you all get back to work and check for yourselves!

Tuesday, 1 October 2013

Pre-application for Internal Models - EIOPA's FINAL preparatory guidance for national supervisors

So the March consultation document for internal model pre-application brought a few eye-openers for those countries partaking in a less onerous application process than that favoured by the UK, with the detail in it suggesting that the UK very much had the whip hand in its drafting.

On the basis that there were still areas which even the most hardened IMAP-veteran may have winced at, it was interesting to see if anything got dropped in the lobbying stampede. On that basis, the final guidance for internal model pre-application covers the following in the preamble;

  • 3.10 - That it is not in the NCAs' gift to conduct pre-application preparation along the lines of provisional approval, or to provide "roadmaps" to compliance (which may explain the PRA's caginess with the industry). It is purely about a firm's preparedness and suitability to submit an application
  • 3.33 - On request, EIOPA have introduced a compulsion for NCAs to provide "regular feedback" to firms
  • 3.35 - Confirms that not all model changes need to be reported to NCAs during pre-application, just those considered "relevant" by firms themselves
It is fair to say that the lobbying in this space has been noticeably more successful than for ORSA or System of Governance, no doubt due to the smaller sub-set of affected stakeholders having a more concentrated relevance. That said, there were still a number of rebuffs from EIOPA, particularly where the lobbying looked more like whinging about paperwork volumes! Highlights below;

Model Change Policy
3.38 - No danger of EIOPA supporting the recommendation to "fast track" model change approvals if a "major change" is required at short notice. They instead recommend "proactivity" with NCAs. Not sure what this does for the world of opportunistic acquisitions though
3.40 - Fudged the question as to whether parameter changes are considered "major", offering an answer of 'it depends' which, for me at least, leans more towards 'yes they are'.

Use Test
3.42 - Confirms that evidencing "use" is not compelling use of model outputs over and above other techniques

Assumptions and Expert Judgement
3.46 - Documentation and validation of assumption setting and expert judgements considered "crucial" in order for undertakings to counter the lack of data and subjectivity in those processes
3.47 - A guideline has also been amended to confirm that the materiality principle applies for this topic
3.48 - Only the most material assumptions will need AMSB sign-off

Methodological Consistency/PDF/Calibration
A number of changes made to clarify guidance in these areas

Profit and Loss Attribution
3.64 - No escaping the requirements to produce P&L attribution granularity at Legal Entity level, as well as by risk driver

Validation
3.68 - EIOPA do not accept that those who build models may also validate them

Documentation
3.70 - That the guidance around the documentation of the internal model should provide "...[protection] from key-person risk", which I have never seen offered as justification from the supervisory end before

Ultimately, there have been no huge concessions from the position in March, which one would think will cause a number of the more liberal EU regulators to give serious consideration to "explaining" rather than "complying" - that said, with this having been written in Union Jack ink, my British cousins should simply get their transition planning updated accordingly.

Friday, 17 May 2013

Internal Model Validation - the "desire for certainty"

Some useful snippets on the links here for anyone in the model validation space, whether it be the practical applications of Monte Carlo simulation outside of the insurance industry, or actuarial perspectives on model risks themselves, such as parameter uncertainty and goodness-of-fit testing, where "the problem is more often too many candidate distributions" as opposed to restrictions in choice.
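
By way of illustration (my own toy example, not taken from the linked material), the sketch below fits three plausible candidate distributions to the same made-up claims data; all of them can look respectable on a standard goodness-of-fit test while implying quite different 1-in-200 outcomes, which is exactly the "too many candidates" headache. The data, candidate set and percentile are purely illustrative.

```python
# Toy sketch of the "too many candidate distributions" problem (illustrative only).
# Several heavy-tailed candidates are fitted to the same small claims sample and
# compared on a Kolmogorov-Smirnov test and on the tail percentile they imply.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
claims = stats.lognorm.rvs(s=0.9, scale=50_000, size=250, random_state=rng)  # made-up claims data

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(claims, floc=0)            # maximum-likelihood fit, location pinned at zero
    _, p_value = stats.kstest(claims, dist.cdf, args=params)
    tail = dist.ppf(0.995, *params)              # the 1-in-200 claim size this candidate implies
    print(f"{name:10s}  KS p-value = {p_value:.2f}   99.5th percentile = {tail:,.0f}")

# Note: the KS p-values here are flattering because the parameters were fitted to the
# same data, which only reinforces the point that formal tests rarely settle the choice.
```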

This fantastic blog post from one of Willis's finest is about as blunt a critique of actuarial modelling activity and its potential for subsequent misuse as I have read, and I would strongly recommend it on to non-expert risk practitioners who may one day find themselves in the model validation/use test firing line. A few of the pearls of wisdom offered (focused on reinsurance industry, but relevant to all) include;
  • How the human "want to believe" and "desire for certainty" can lead to models making rather than guiding decisions
  • Reliance of models on "large numbers of heroic assumptions" (a toy sketch of how far a single such assumption can move the answer follows after this list)
  • "Data is always limited and flawed"
  • That "models take combinations of assumptions and torture them to come to conclusions"
  • The revisiting of assumptions only when the answers don't fit expectations ("euphemistically called 'calibration'", hilarious!)
  • That using models for setting regulatory capital, rather than just informing decision making, has led to "extremely onerous" IMAP activity i.e. the limitations noted above are so well established that the regulators cannot ignore them at a granular level.
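
To put a rough number on the assumption-sensitivity point (my own toy figures, nothing to do with the Willis post), the sketch below recomputes a crude 1-in-200 capital figure from a lognormal aggregate-loss model after nudging the volatility assumption, and again after allowing for the estimation error you would expect if that assumption had been set from a short data history.

```python
# Toy sketch (illustrative only) of how one "heroic" assumption drives the answer:
# the 1-in-200 capital figure from a lognormal aggregate-loss model is recomputed
# under a modest change to the volatility assumption and under bootstrap parameter
# uncertainty from a short (30-point) data history.
import numpy as np

rng = np.random.default_rng(7)
n_sims = 100_000

def capital(mu: float, sigma: float) -> float:
    """99.5th percentile of simulated aggregate losses, less the mean (a crude SCR)."""
    losses = rng.lognormal(mean=mu, sigma=sigma, size=n_sims)
    return np.percentile(losses, 99.5) - losses.mean()

base_mu, base_sigma = 10.0, 0.6                      # the "best estimate" assumptions
print(f"Base capital:           {capital(base_mu, base_sigma):,.0f}")
print(f"Sigma assumption +10%:  {capital(base_mu, base_sigma * 1.1):,.0f}")

# Bootstrap the volatility assumption as if it had been estimated from 30 observations
sample = rng.lognormal(base_mu, base_sigma, size=30)
boot_sigmas = [np.log(rng.choice(sample, size=30, replace=True)).std(ddof=1) for _ in range(200)]
boot_capital = [capital(base_mu, s) for s in boot_sigmas]
print(f"Capital range from parameter uncertainty alone: "
      f"{min(boot_capital):,.0f} to {max(boot_capital):,.0f}")
```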

Then there is this piece from Deloitte US on model validation (more specifically, research into the quality of existing actuarial modelling controls), which is an eye-opener for anyone working in the validation space. With RMORSA and associated capital modelling firmly on the agenda Stateside, it is interesting to watch how aggressively they approach validation, bearing in mind this work was commissioned by the Society of Actuaries, whose members may ultimately be charged with applying some of these recommendations!

This research in particular assesses current state versus best practice controls over the assumptions, inputs and outputs of actuarial models, and though the sample of respondents to the survey is relatively small (representing "30 unique companies"), the absence of suitable supporting documentation around model governance so evident in the UK's IMAP process appears to be a depressingly constant theme. This report at least includes recommendations as to how the US actuarial profession may bridge some of the gaps Deloitte identify.

In the NAIC's ORSA Manual, they ask that "ORSA Summary Report should provide a general description of the insurer’s process for model validation, including factors considered and model calibration" (p7), which I guess is what one expects to see in the EU (i.e. validation being a sub-process of the ORSA, which can be summarised in the ORSA reports). That said, the breadth of validation work performed over there will surely be driven by S&P expectations communicated in ERM Level III reviews, rather than profession-sponsored consultancy recommendations!


Finally (and slightly off track), an odd piece from Towers Watson on validating ORSAs, pitched to a room full of Internal Auditors. Would be unfair to say there aren't some salient points throughout, but given that there is "no clear requirement" to validate ORSAs (there was something on the matter in the original CEIOPS ORSA pre-consultation, but it was dropped in the public consultation and the final advice), then you would think it could be covered in less than 30+ slides!

As it happens, the TW slide pack for internal model validation appears to have been raided and had the acronym 'ORSA' jemmied into the text for much of the second half of it.