Showing posts with label Scenario Analysis. Show all posts

Monday, 29 September 2014

CRO Forum's Principles on Operational Risk Measurement - "Quant touch this"...

Hammer Time?
Current efforts in Op Risk quantification
Despite practitioners' efforts over the last few years, Operational Risk continues to live on starvation rations when it comes to considered quantification. Never an alpha-topic for executives inside insurance institutions, it has been treated with similar indifference by legislators, culminating in the "totally inadequate" take-a-percentage methodology for calculating Operational Risk capital in the Standard Formula.
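For the uninitiated, the "take-a-percentage" methodology really is a few lines of arithmetic. A minimal Python sketch, assuming the factors from the draft Level 2 text, ignoring the growth adjustments and unit-linked splits; every figure in the example call is invented:

```python
# Simplified sketch of the Standard Formula Op Risk charge: percentages
# of earned premiums and technical provisions, capped against the Basic
# SCR.  Growth adjustments and unit-linked splits are deliberately
# omitted; factors are those of the draft specifications.
def scr_operational(earn_life, earn_nonlife, tp_life, tp_nonlife,
                    bscr, exp_ul=0.0):
    op_premiums = 0.04 * earn_life + 0.03 * earn_nonlife
    op_provisions = 0.0045 * max(0.0, tp_life) + 0.03 * max(0.0, tp_nonlife)
    op = max(op_premiums, op_provisions)
    # Charge capped at 30% of the Basic SCR, plus 25% of annual
    # unit-linked expenses
    return min(0.30 * bscr, op) + 0.25 * exp_ul

# Illustrative figures only
print(scr_operational(earn_life=1_000, earn_nonlife=500,
                      tp_life=10_000, tp_nonlife=2_000, bscr=800))
```

Note what the percentages never touch: the firm's actual loss experience, controls or scenario work, which is rather the point of the criticism.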

Internal Modellers on the whole are not likely to be shaming that technique with their efforts either (basic summary of their problems here, while InsuranceERM cover struggles as a whole with a roundtable here). A paucity of operational risk event (and near miss) data within firms may be good news for ORIC as a vendor, but from a parameter and data uncertainty perspective, it leaves internal model operators and validators in an invidious position, particularly due to the quantum of insurers' capital likely to be involved (10%, give or take?).
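On that parameter and data uncertainty point: a quick bootstrap of a lognormal severity fit shows just how wide the tail estimate gets on a couple of dozen loss events. A sketch only, with invented losses and a deliberately crude moment fit:

```python
# Illustrative only: with ~20 loss events, the fitted severity tail is
# dominated by parameter uncertainty.  A non-parametric bootstrap of a
# lognormal fit makes the point.  All loss figures are invented.
import math
import random
import statistics

random.seed(42)

losses = [12_000, 3_500, 80_000, 5_200, 9_900, 250_000, 1_800, 44_000,
          7_600, 15_500, 2_900, 31_000, 6_400, 120_000, 4_100, 18_700,
          8_800, 2_200, 55_000, 13_400]   # 20 events: sparse!

Z995 = 2.5758  # standard normal 99.5th percentile

def q995_lognormal(sample):
    """Fit a lognormal by moments of the log-losses; return its 99.5% quantile."""
    logs = [math.log(x) for x in sample]
    mu, sigma = statistics.mean(logs), statistics.stdev(logs)
    return math.exp(mu + Z995 * sigma)

point_estimate = q995_lognormal(losses)

# Bootstrap: refit on resamples to see how unstable the tail estimate is
boot = sorted(q995_lognormal(random.choices(losses, k=len(losses)))
              for _ in range(2000))
lo, hi = boot[50], boot[1949]   # rough 95% bootstrap interval

print(f"99.5% severity: {point_estimate:,.0f} "
      f"(bootstrap 95% interval {lo:,.0f} - {hi:,.0f})")
```

The interval typically spans a multiple of the point estimate, which is precisely the invidious position validators find themselves asked to bless.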

It's not that the actuarial world hasn't taken a stab at it before (here), isn't fully aware of the data holes (here), or hasn't used the word "Bayesian" in a sentence (here). However, an activity which was "in its infancy" in the UK as far back as 2005 is surely now old enough to be working in the mines...

I was therefore happy to see the unprolific-yet-important CRO Forum bring a white paper to the table, Principles of Operational Risk Management and Measurement. It is an update to a 2009 version which takes into account Solvency II demands, as well as developing practice within insurers over the period, the suggestion being that 2009's efforts were a little too Banking Industry-influenced.

While this document might feel at outset like an idiot's guide to "quanting" operational risk (and bearing in mind the number of prospective standard formula applicants - 9 out of 10 in UK - one may be needed soon!), the document touches on a number of noteworthy technical matters, in particular;
  • The Definition section doesn't read well, but they have attempted to include outcomes other than monetary loss into the Op Risk definition, which from experience will improve discourse within firms. Are they attempting to squeeze strategic and reputational risks into this box though?
  • Nice coverage of Boundary Events, and encouraging firms to consider them in their management of Op Risk.
  • Very specific treatment of Risk Tolerance throughout, using it in preference to Risk Appetite. This is because it cannot be avoided, and so tolerance levels should be used to trigger "RAG"-type reporting up the chain. Nice work, and well justified, but I have certainly seen the expression "Zero Appetite" used for Op Risk, so no doubt this is not an industry standard perspective yet! (p5-6)
  • No problems with their coverage of tried and tested techniques - "Top Down", RCSA's & Loss Event analysis (p9-10)
  • Nice turn of phrase regarding emerging risks on p9 - "...assess the proximity of new risks to the organisation". It may need to include an attempt to quantify to be fully useful for ORSA purposes.
  • Concept of residual risk arrives quite late in the day, but isn't omitted. Important, given how much qualitative, or spuriously quantitative, material is being promoted as aiding this measurement work (p10)
  • Seem to accept at the bottom of p10 that Internal Modellers must do more than curve fit on internal Op Risk Event data - good news I guess.
  • Internal Model validation pressures on current Op Risk quantification practices flagged directly (p16 in particular)
  • Guidelines on embedding Op Risk monitoring processes highlight just how much work some practitioners are managing to cover (p11). Quite disheartening for those with smaller budgets.
There are a few points to make on section B around quantification:
  • Pretty scathing on Standard Formula relevance. (p14)
  • Scenario Analysis sold as something of a panacea to cure the ills of incomplete Op Risk Event data sets, but no mention of the biases which seem to permeate the creation of the scenarios, which is sadly a hostage to the invitee list. (p14)
  • Expand more on scenario analysis, bringing the "severe but plausible" terminology to the table (p15)
As well as the following generic comments;
  • Is risk measurement - "a tool for embedding risk culture in the organisation"? I would say so, particularly in the Op Risk arena, where decision makers will need to be involved at scenario-compilation time.
  • That said, they then go on to reference "senior management sign-off" of scenario work, which is somewhat contradictory!
  • Overweight in references to "culture" and "tone at the top", like most white papers these days (see the FRC's efforts from the other week). Playing with fire as a profession by shoehorning references to "culture" into everything.
  • A couple of horror-show schematics used on pages 7 and 8 - the Forum must know how much time risk professionals lose walking non-experts through things like this. They serve no purpose, and detract from surrounding text.
  • Attempt on p9 to solicit business for ORIC?
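On the "severe but plausible" terminology flagged above: one hedged way of making a scenario workshop earn its keep quantitatively is to treat its output as an extra quantile and solve a severity curve through it. A sketch with invented numbers, assuming a lognormal severity:

```python
# Hedged sketch: combine the median loss from (thin) internal event
# data with a workshop's "severe but plausible" 1-in-20-year estimate,
# and solve a lognormal through those two quantiles to imply a
# 1-in-200 loss.  All figures are invented for illustration.
import math

Z95, Z995 = 1.6448536, 2.5758293   # standard normal quantiles

median_loss = 100_000       # from internal loss event data (assumed)
scenario_95 = 1_000_000     # workshop estimate of a 1-in-20-year loss

mu = math.log(median_loss)                  # lognormal: q50 = exp(mu)
sigma = (math.log(scenario_95) - mu) / Z95  # q95 = exp(mu + Z95*sigma)

q995 = math.exp(mu + Z995 * sigma)          # implied 1-in-200 loss
print(f"Implied 99.5th percentile loss: {q995:,.0f}")
```

Of course, the whole curve now hangs off one workshop number, which is exactly why the invitee-list bias above matters so much.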
It was Professor Jagger who accurately prophesied "You can't always get what you Quant" - I'd say the Risk profession concurs, based on these very welcome principles.

Tuesday, 21 May 2013

Lloyds of London on Validation - testing and reporting enhancement ideas

Lloyds of London delivered a presentation around model validation last week to its 80-odd syndicates which anyone in the world of IMAP would benefit from picking through the bones of, bearing in mind the unique position of the Lloyds application (i.e. in the door of the PRA, and seemingly well received!).

They noted in the presentation linked above that they had found weaknesses in five areas in particular; this presentation is a deep dive into the strengths and weaknesses of validation, and one might expect the other four 'weak' areas flagged to receive similar treatment in the coming weeks.

While their 2013 programme aims only to close the gap between full compliance with Solvency II tests and standards and today's position, it's worth flagging some fundamentals;

  • Only half of syndicates were felt to meet tests and standards in full - a third are 'pending' positive assessment, and the rest have not passed.
  • 'Fails' seem to be centred around following up on test failures and documenting findings in the summary report, rather than anything broader.
Around the production of Validation Reports, they noted negative findings around;
  • Uncertainty about how to progress when something 'unacceptable' is found during validation testing
  • Content of validation reports being statistic-heavy (i.e. indigestible to any non-quants who need to make decisions off the back of the findings)
  • A lack of sophistication in the testing of material risks in some instances.
The last one is particularly interesting, as the central team at Lloyds has devised a schematic (slide 11) to show the kind of testing they expect to see on the more material risks (RST, P&L attribution) versus less material (going as far as qualitative tests).

It is also worth highlighting for any benchmarkers out there that Lloyds appear to advocate around 5 pages of Validation Report per risk factor, leaving their overall expectation of reports to be 30-40 pages, with 5-10 pages of appendices (p16). Bearing in mind these reports will I suspect be some of the first to go through the PRA's hands, the frame of reference may help encourage you to bulk up or slim down your own versions!


A large amount of this presentation (from p19 onwards) is devoted to fairly granular examples of how a validation test may be 'failed', and what action would be performed in order to gain a 'pass', so for those in the test design/conduct game, you may find something to support your approaches in that detail, regardless of the risks shown in the example (premium and reserve).

Friday, 17 May 2013

Munich Re on Strategic Risk - ORSA food for thought

A thought-provoker from Munich Re for anyone in the business of Emerging Risk/'Top Down' risk review activity: this release, called "Strategic Risk to Risk Strategy", leans on the findings of the World Economic Forum's 2012 Risk Report to observe how the risks highlighted may compromise existing insurer strategies, as well as materialise in the thoughts of underwriters once sufficient data exists.

With coverage of "strategic risk" featuring in everyone's ORSA thoughts, as well as obligations around scenario analysis and reverse stress testing, there are some benchmarks in here that are worthy of consideration for practitioners. In particular;

Strategic Risk definition
Risk of making wrong business decisions, implementing decisions poorly, or being unable to adapt to changes in the operating environment


Subcategories of Strategic Risk
  • Ineffective M&A
  • Incorrect interpretation of external activity in a given market
  • Decision making based on poor pricing/profitability assumptions
  • Legal misinterpretation
Specific points
  • In their opinion an insurer's risk strategy "goes beyond covering the risk capital requirement for a portfolio for the forthcoming financial year on the basis of valid models" to actually questioning and enhancing a company's business. I think most practitioners would agree that any documented risk strategy would look further forward than one year!
  • That shortcomings around the evaluation of Strategic Risk "...are not so much of a question of the [informational] resources available", which are plentiful, nor are they especially time-pressured.
  • They also include a definition of Reverse Stress Testing as scenarios which "endanger a company's business model as a whole" - interesting purely in the absence of an EU-driven definition to date, as it tallies with UK equivalent definitions.
An edited list of Strategic Risk scenarios is included (p3) which you may want to line up against your own activity in this field, and follow on by exploring which of these are within and outside of an insurer's sphere of influence. You might also benefit from the diagram on p6 on the main stakeholders in an insurance company which may influence your selection of strategic risk scenarios depending on your structure and business model. 


Friday, 22 February 2013

Adams speech to the Economist Insurance Summit - lessons from financial crisis

Some particularly useful context setting from Julian Adams last week for anyone in the Internal Model game, with this speech to the Economist Insurance Summit around what lessons could be learned by insurance supervisors from the financial crisis.

While he amusingly interchanges between "financial crisis" and "banking crisis" to emphasise that it wasn't our fault, and drops in the now obligatory reference to the importance of insurers as long-term investors, echoing the Commission's pleas from late last year, the majority of the speech focuses on why models go wrong (not the name of a ropey catwalk reality tv show...)

Insight on where the FSA thought firms were going awry in the Solvency II modelling preparations was delivered to the industry in the middle of last year, but I found this speech helpful in the context of proportionality i.e. what elements of economic capital modelling are worth spending extra time on theorising, documenting, debating and minuting for IMAP candidates. I saw the following comments as highlights;

Reasons for internal models in the banking industry being exposed;

  • "...rested on assumptions which turned out not to hold when bad times came"
  • "...review period" selected when parameterising
  • "...insufficient rigour and independence from the front end of the business" when parameterising
  • "...management attention too often focused on those parameters considered too conservative at the expense of those that were insufficiently prudent"
  • "...destabilising feedback loops" where underestimation of risk (due to data selection) plus use of the model leads to a vicious cycle of unacknowledged over-accumulation of risk
  • "...flawed technical assumptions" in tail-end probability estimation where data is drawn from "normal" times
Lessons for Solvency II
  • "Data [should be] sufficiently robust"
  • Assumptions should be "appropriately conservative"
  • "[Supervisors] can be helped...by the much greater use of imaginative tests of resilience to deeply stressed scenarios"
  • "...paucity of relevant historical data for the calibration of tail dependencies between risks"
  • That "...the limitations [of capturing tail dependencies] are recognised, and conservatism built in to the calibrations"
  • That "...correlations in the tail are likely to be asymmetric in nature" for insurers
  • That "...the adoption of quantitative techniques...will not change the nature of the risk itself"
  • That supervisors "...must not blindly accept the outputs of these models"
While some of this is hardly news, any increased documentation and rigour in the areas highlighted will no doubt be well received down at the Wharf.

Thursday, 22 November 2012

Accenture study on Risk Analytics - lessons for Insurance Industry

Decent piece of benchmarking from Accenture on the current usage of risk analytics as well as drivers for the future. 450 mostly C-suite level respondents from across industries (40% insurers), but the findings are targeted specifically at the Insurance industry. Interestingly, respondents leaned towards incomplete data sets, rather than a lack of data or technological capability, as the main constraint.

While it touches on a number of areas of interest for Use Test specialists, it also covers Stress and Scenario Testing (13% of Life and 21% of P&C insurers reporting that they "rarely" or "never" use stress testing in decision making), and Reporting (which suggests the main driver for reporting improvements is regulatory rather than voluntary, due to regulators "...[increasing] their focus on the quality and frequency of reporting"). Internal Modelling also gets a mention, with almost 80% of respondents saying they already use, or are planning to use, an internal model for capital adequacy requirements.

Data Governance of course gets decent treatment here. Only 69% of insurers polled currently have a Data Policy, but 41% have a data quality department, which feels alarmingly high, particularly when drawn against Accenture's comment that "...many firms have insufficient rigor [around] who owns data, who sets it up and who manages it". The FSA concurred with that in their preliminary Data Quality Review findings back in September, and I'm not at all convinced that a DQ department will help in this respect (i.e. BAU absolve themselves of responsibility for their data sets!)

The Accenture crew use the "Leaders" and "Laggards" analogy throughout, so benchmark away and find out which one you are!

Thursday, 18 October 2012

Society of Actuaries in Ireland on ORSA - a rock in a sea of turmoil

In these days of certainty around the Solvency II implementation timetable (i.e. certainly not 2014!), it’s nice to cling on to the consultant's comfort blanket of ORSA which, thanks to the IAIS and NAIC, isn’t disappearing in a hurry for global insurers. This item, flagged in the SAI's newsletter last month but dating back to April, would benefit anyone working in the ORSA space, being a "practical considerations" guide which is very accessible for non-actuaries, particularly for assessing or challenging options for the required ORSA processes.
 
The document takes care to reference the at-the-time EIOPA guidelines in each chapter (which would have been superseded by June's release of course, but didn't change seismically), and does an excellent job of focusing on required processes as opposed to ORSA Reporting, which these types of papers often fixate on. Worth reading in full, but note the following;
Overview
·     Proportionality - remember justification of approach is as important as executing the selected approach itself.
·     Documentation - "...Traditionally, this is not an area of strength for actuaries" - I'll drink to that!
ORSA Contributors
·     "It is likely" that Risk will co-ordinate the process. This logic follows on from the IRM’s survey findings (p2) which saw Risk as predominantly leading early process development, and there is nothing to suggest the other candidate functions are likely to be sufficiently staffed in the BAU world to both actively participate and co-ordinate.
·     Board as "owners” of the ORSA – this is an important point which, for practitioners, is awkwardly implied by the Directive text, rather than spelled out. Indeed this document later goes on to say the Risk function "will likely be the owner of the overall ORSA process".
This gruesome melange of who owns what in the ORSA space (and indeed what 'ownership' confers), remains a little too common in thought papers like this, so be certain to define these elements in your ORSA Policy.
·     Capital Management function - "ORSA is the process where risk and capital management get together" - get a room you guys!
Policy and Process
·     Generally a very clean and useful section, particularly around "dynamic" and "static" processes and their outputs. Section on ORSA Report content is less useful, being based on the 2008 issues paper, and there are plenty of papers covering that topic (sift through yourself!).
·     Projection process - No suggestion of whether recommendations from balance sheet projection activity should be balled up in the ORSA Report or reported separately as part of conventional committee/Board reporting. I always found this element a nuisance to pin down, as one wouldn’t necessarily want to present material of such significance in a 20-200 page ORSA Report if it meant it didn’t get the appropriate table time at strategy days etc.
Economic Capital 
·     Practical obstacles - all seem to revolve around there being a shortage of actuarial time/resource. Well get off my land and go do some counting then!
EC and Risk Management
·     Document is a little unclear around risk appetite framework/risk management framework/risk management system terminology, which is a little unhelpful
·     Interesting comment regarding non-quantification of risk that "risks cannot be quantified" rather than "risks cannot be quantified easily" – this is an actuarial paper, you guys can quantify anything, surely!
·     Defines reverse stress testing as "testing to destruction" - the UK definition is more discrete than this, and certainly more useful for stimulating debate in Board exercises
ORSA Projections
·     More industry consensus on what constitutes a 'business planning period', namely 3-5 years. I have barely seen anything to suggest firms venturing outside this window for projection purposes.
·     Nice simple explanation of the component parts of the economic balance sheet which should be projected as well as recommendations for projecting risk appetite metrics and the P&L.
·     Suggestion that, unless already stochastically projecting, firms will project deterministically, "unless the company is planning significant changes to its future business mix". Judging by the jostling for position around Long Term Guarantees right now, is that not likely to be quite a few!
·     Acknowledges that the approaches already used for Financial Condition Reporting should be leaned on for smaller or less complex entities.
Scenarios
·     Reverse stress testing has grown into a different beast from that referenced earlier in the piece, incorporating "back-solving" (a new one on me!), and looks for events that reduce own funds to zero - I'm not sure I've heard RST defined like that before, and certainly not convinced that own funds of zero necessarily constitutes "destruction"
·     Good recommendation for selecting scenarios from emerging risk assessments as well as a firm’s existing quantum – best not to take the path of least resistance in this area of ORSA.

Monday, 20 August 2012

Operational Risk - Scenario analysis and best practice

Short and sweet - couple of interesting papers in the Op Risk space which should help anyone working on operational risk scenarios or indeed brushing up on best practices.

Milliman start off with this scene setter on approaches being adopted in order to bypass the rather broad brush (and, I suspect, in some cases financially onerous) standard formula approach to calculating the Op Risk SCR element. They of course touch on the old-but-legitimate complaint around input data quality if one wants to model their capital requirement rather than sketch it on the back of EIOPA's fag packet.

While they take the opportunity to applaud the efforts of those creating a database of scenarios, or indeed using the ORIC database, they ultimately come down on the Bayesian side of the debate, which I suspect is a touch too rich for most people's blood, but those of us with deep pockets (and large Op Risk SCR totals!) may give that a stab.
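For those curious what "the Bayesian side of the debate" looks like in its simplest form: a conjugate Gamma-Poisson update of event frequency, with the prior set from scenario-workshop judgement and updated on a sparse internal loss record. Every parameter below is invented for illustration:

```python
# Sketch of a conjugate Bayesian update for Op Risk event frequency.
# Poisson likelihood for annual event counts, Gamma prior on the rate.
# Prior from expert judgement, data from a (patchy) internal loss log.
# All parameters are invented for the sketch.

# Prior: experts expect ~2 events/year, held with the weight of ~1
# year of data -> Gamma(alpha=2, beta=1), mean alpha/beta = 2.0
alpha, beta = 2.0, 1.0

# Internal data: 3 recorded events over 5 years of collection
events, years = 3, 5

# Conjugate update: posterior is Gamma(alpha + events, beta + years)
post_alpha, post_beta = alpha + events, beta + years
post_mean = post_alpha / post_beta

print(f"Posterior expected frequency: {post_mean:.3f} events/year")
```

The appeal is exactly the one Milliman gesture at: the workshop view and the thin data set each get a defensible, explicit weight, rather than one silently trumping the other. The expensive part is doing this credibly across a full frequency-severity model, hence the "deep pockets" caveat.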

The second piece came from Corven around best Op Risk practices in other industries, and how they could be adopted by the Financial Services industry. Not much of the research is actually published yet (and the main meat of their published findings is hidden behind the FT's paywall), but I found it particularly interesting to see which industries were cited as ones Financial Services could learn from.

Some good interim stats (full report to follow in October), including;
  • All respondents to date trying to tie in op risk performance with compensation
  • Regulatory hounding appears to have inspired 64% of respondents to improve Op Risk management
  • Full root cause analysis only conducted by 38% of respondents upon a "major risk failure" - woolly words aside, that is not impressive at all.
  • Responses to major risk incidents overwhelmingly look to amend processes and systems, not the people and capabilities that inevitably led to them!
The example of air crews being compelled to point out senior staff members' inadequacies is a particularly powerful example of bottom-up op risk mitigation, though I struggle to see its application in financial services. It was also strange to see the Oil industry cited as a best practitioner - the major risk events in that industry surely draw parallels with financial services at their most grasping in recent years.

Wednesday, 18 January 2012

IRM Solvency II SIG Presentations from January - Stress Testing

For anyone who is working on firming up their approach to stress testing and its cousins for Solvency II purposes, I would recommend taking a look at the outputs from this month's IRM Solvency II Special Interest Group.

The presentation has some worth, although I guess you needed to be there to get the full gist. The survey however is of much more use for justifying any approaches you currently have in play (who is responsible for what, and indeed what varieties of stress testing are actually used). Small sample as you might expect for a SIG at 28 respondents, but worthy nonetheless.