On first read, this feels heavily influenced by the UK's activities to date, and anyone working in that space will recognise FSA pawprints all over the granular details contained within. This is fair, I suppose - the InsuranceERM models map has the UK down for around a third of models currently in 'pre-application' across the continent.
Bearing in mind that the UK approach is already pretty well established, I have highlighted below the areas which either diverge from what is currently being exercised on the ground, or which clarify (at least for me!) areas previously ripe for controversy or disproportionate/inconsistent application. Where a guideline is common sense, or simply 'carry straight on', I have ignored it.
Most importantly, I am reading this as a fait accompli - given the short window between the end of the consultation and the start of the pre-application period, EIOPA's recent record on consultation responses (i.e. 'thanks but no thanks') and the UK's evident hand in bulking out these guidelines, I don't see much room for lobbying swathes of this away, nor for the PRA to "explain" rather than "comply"!
GENERAL GUIDELINES
Guideline 3 - As well as nature, scale and complexity, the "design, scope and qualitative aspects" of the IM should be considered when allowing for proportionality
Guideline 4
- Any model changes pre-application look like they will be pored over by NCAs, including the associated change approval process
MODEL CHANGES
Guideline 5 - Model Change Policy - as well as SCR-related changes, the policy should cover changes to: the system of governance (around model change); compliance with Use Test requirements; the appropriateness of technical specifications; and changes in Risk Profile
Guideline 6
- Approach to classifying "major" changes is expected to be objective
- Must also take into account specificities of the company (so benchmarking percentage changes against your neighbours may not be that useful)
Guideline 7
- Aggregated change triggers must be considered, not just isolated changes
- Offsetting positives and negatives won't be acceptable to avoid a "major change" trigger (a rough sketch of the arithmetic is below)!
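To make that second point concrete, here is a minimal Python sketch of the sort of arithmetic implied - the 5% threshold, the SCR-percentage basis and the function itself are my own assumptions for illustration, not anything the guideline prescribes.

```python
# Hypothetical illustration only: the threshold and the percentage-of-SCR basis
# are assumptions, not anything prescribed by the guidelines or by an NCA.

def classify_change(individual_impacts_pct, major_threshold_pct=5.0):
    """Classify accumulated minor changes since the last approved model version.

    individual_impacts_pct: signed SCR impacts (%) of each minor change
    (positive = SCR increase, negative = decrease).
    """
    # Netting the signed impacts could mask two large offsetting changes...
    netted = sum(individual_impacts_pct)
    # ...so also aggregate on an absolute basis, where positives and negatives cannot offset.
    aggregated = sum(abs(x) for x in individual_impacts_pct)
    is_major = aggregated >= major_threshold_pct
    return {"netted_pct": netted, "aggregated_pct": aggregated, "major": is_major}


# Two changes of +4% and -4% of SCR net to zero, but aggregate to 8% and would
# therefore trip a 5% 'major change' trigger under this reading of Guideline 7.
print(classify_change([4.0, -4.0]))
```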
Guideline 8
- Major/minor changes must be determined at Group and entity level
USE TEST
Guideline 9 - "No complete and detailed list of specific [model] uses" will be supplied by NCAs.
Guideline 11
- Granularity of the Risk Management System will need to match the IM in terms of categorisation
- The "structure of decision making fora" will be assessed in ensuring the IM fits to the business - extraordinary!
- Records expected to be available to show how IM outputs are designed
Guideline 12
- Assessment of training, seminars, workshops, meetings and direct interviews "should be considered" in pre-application
Guideline 13
- Will need to "ensure [IM] will be used" during pre-application, as opposed to "use it"
- Expectation that, if other tools are used in decision making, the IM is improved once inconsistencies against those tools have been assessed
Guideline 14
- Evidence of prospective support and retrospective verification of decision making would be advisable for candidates
Guideline 15
- Must document where model is not aligned to the decision ultimately made
ASSUMPTION SETTING/EXPERT JUDGEMENT
Guideline 19 - "Materiality" in the context of assumptions will need to be both qualitatively and quantitatively assessed - should generate some healthy Risk/Actuarial function debate!
Guideline 20
- A validated and documented process for assumption setting and expert judgement will be required
- Sign-off on assumptions will need "sufficient seniority", up to and including AMSB
Guideline 21
- A formal and documented feedback loop should be maintained between assumption setters and users
Guideline 22
- On the transparency of assumption setting, point 1.64 here effectively asks for an Assumptions Register, as well as dictating what it expects to see in it.
Guideline 23
- Process mapping of some kind expected for the validation of assumption setting
- Independent assumption review is also expected - doesn't dictate whether this should be internal/external, but it will keep someone in clover no doubt.
METHODOLOGICAL CONSISTENCY
Guideline 26
- Methodological consistency to be validated
P&L ATTRIBUTION
- Point 1.105 seems to confirm P&L attribution by risk driver is required
Guideline 39
- P&L attribution must be used at least annually in the decision making process
Guideline 40
- P&L attribution to be used in the validation process (specifically, old ones to be compared against experience
VALIDATION
Guideline 41 - Validation Policy to contain at least
- Process, methods and tools, and their purposes
- Frequency of validation for each part of the IM, and triggers for ad-hoc validation
- Persons (not roles) responsible for each task
- Procedure to be followed where reliability of IM is questioned, and ensuing decision making process
Guideline 42
- Shies away from touching Internal Model scope when talking of validation scope - great move!
Guideline 43
- Evidence of sensitivity testing expected when determining materiality
Guideline 44
- Must document known limitations of validation process, as well as circumstances where the process falls over
- May even be asked to quantify the degree of uncertainty!
Guideline 45
- A documented escalation path would be advised
Guideline 46
- Risk Management function will be pressured, as the function with overall responsibility, to ensure all tasks are completed (if not directly performing them) - new skill set?
Guideline 47
- Evidence of how the RM function ensures that the validation process remains independent of IM design and ops should be collected/enhanced
Guideline 49
- A process will be expected to ensure the choice of validation tools used considers: complexity, nature, independence and knowledge of participants - I feel this could be tricky for the smaller IMAP guys without leading to additional spend on consultants
Guideline 50
- Must be able to document the appropriateness of the validation tools used, accounting for: materiality of the IM part, granularity of the data being tested, purpose of the task and the expected outcome
DOCUMENTATION
Guideline 53
- Expectation of a "...clear referencing system [for IM documentation] which should be used in a documentation inventory"
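As a purely hypothetical illustration of what such an inventory might look like (the reference scheme and fields below are my own invention, not anything the guideline specifies):

```python
# Hypothetical sketch of a documentation inventory built around a clear
# referencing system; the reference format and fields are assumptions.

inventory = [
    {"ref": "IM-DOC-001", "title": "Internal Model Scope",     "owner": "Risk",      "version": "2.1"},
    {"ref": "IM-DOC-014", "title": "Lapse Assumption Setting", "owner": "Actuarial", "version": "1.3"},
    {"ref": "IM-DOC-027", "title": "Validation Policy",        "owner": "Risk",      "version": "3.0"},
]

# Other IM documents can then cite each other by 'ref' rather than filename,
# giving the NCA a single index from which every document can be located.
by_ref = {doc["ref"]: doc for doc in inventory}
print(by_ref["IM-DOC-014"]["title"])
```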
Guideline 55
- An overall summary of IM shortcomings, "consolidated into a single document" will be expected
- This needs to cover at least: risks not modelled, limitations in modelling, sources of uncertainty in results, data deficiencies, external models/data, IT limitations and governance limitations.
Guideline 56
- Potential suggestion that there should be more than one level of IM documentation to suit other audiences/uses - IM for Dummies anyone?
Guideline 57
- End-to-end User Manual expected, detailed enough for an Independent Knowledgeable Third Party to operate the model
Guideline 58
- Stress that a single document containing all model outputs (as Use Test evidence) is not required - separate documents are acceptable
EXTERNAL MODELS AND DATA
Guideline 60
- Expectation that external data sets will be sense-checked against "other relevant sources"
Guideline 61
- Understanding of external models must extend to technical and operational aspects, as well as assumptions
Guideline 64
- "Material" assumptions of external models must be validated
- In point 1.163, any potential for cherry-picking features/options of external models is constrained