Monday, 29 September 2014

CRO Forum's Principles on Operational Risk Measurement - "Quant touch this"...

Hammer Time?
Current efforts in Op Risk quantification
Despite practitioners' efforts over the last few years, Operational Risk continues to live on starvation rations when it comes to considered quantification. Never treated as an alpha-topic by executives inside insurance institutions, it has been met with similar indifference by legislators, culminating in the "totally inadequate" take-a-percentage methodology for calculating Operational Risk capital in the Standard Formula.
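For the uninitiated, and quoting from memory (so treat the exact percentages as approximate - I have also ignored the premium-growth add-ons), that methodology boils down to:

    SCR_op = min(30% × BSCR, Op) + 25% × Exp_ul
    Op = max(Op_premiums, Op_provisions)
    Op_premiums   ≈ 4% × life premiums earned (ex unit-linked) + 3% × non-life premiums earned
    Op_provisions ≈ 0.45% × life technical provisions (ex unit-linked) + 3% × non-life technical provisions

where Exp_ul is the year's unit-linked expenses. Percentages of volume measures, in other words, with no sensitivity whatsoever to the quality of a firm's control environment.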

Internal Modellers, on the whole, are not likely to be shaming that technique with their efforts either (a basic summary of their problems here, while InsuranceERM covers the struggles as a whole with a roundtable here). A paucity of operational risk event (and near miss) data within firms may be good news for ORIC as a vendor, but from a parameter and data uncertainty perspective it leaves internal model operators and validators in an invidious position, particularly given the quantum of insurers' capital likely to be involved (10%, give or take?).
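To see how invidious, here is a minimal sketch of the data problem - twenty loss events and a lognormal severity assumption, all numbers invented rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend the firm's entire loss history is twenty events drawn from a
# lognormal severity curve (parameters invented for illustration).
losses = rng.lognormal(mean=12.0, sigma=1.8, size=20)

# Bootstrap the fit to see how unstable the 1-in-200 severity estimate is.
p995 = []
for _ in range(2_000):
    sample = rng.choice(losses, size=losses.size, replace=True)
    mu_hat, sigma_hat = np.log(sample).mean(), np.log(sample).std(ddof=1)
    p995.append(np.exp(mu_hat + 2.576 * sigma_hat))   # z(99.5%) ≈ 2.576

lo, hi = np.percentile(p995, [5, 95])
print(f"90% bootstrap interval for the 99.5th percentile loss: {lo:,.0f} - {hi:,.0f}")
```

Even in this toy setting the tail estimate can move by an order of magnitude depending on which twenty losses happened to be recorded - and real internal datasets are rarely this well-behaved.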

It's not that the actuarial world hasn't taken a stab at it before (here), isn't fully aware of the data holes (here), or hasn't used the word "Bayesian" in a sentence (here). However, an activity which was "in its infancy" in the UK as far back as 2005 is surely now old enough to be working in the mines...
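For anyone wondering what "Bayesian" is doing in those sentences, the workhorse is usually a conjugate blend of expert prior and sparse internal data for event frequency - a minimal sketch, with every parameter hypothetical:

```python
# Conjugate Gamma-Poisson update of Op Risk event frequency:
# posterior is Gamma(alpha + events, beta + years of exposure).
prior_alpha, prior_beta = 4.0, 2.0   # expert prior: ~2 events/year, fairly uncertain
events, years = 7, 5                 # sparse internal loss history (invented)

post_alpha, post_beta = prior_alpha + events, prior_beta + years
print(f"Prior mean frequency:     {prior_alpha / prior_beta:.2f} events/year")
print(f"Posterior mean frequency: {post_alpha / post_beta:.2f} events/year")
```

The attraction is obvious - five years of data isn't asked to speak entirely for itself - though the answer is only ever as good as the prior fed in.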

I was therefore happy to see the unprolific-yet-important CRO Forum bring a white paper to the table, Principles of Operational Risk Management and Measurement. It is an update to a 2009 version, taking into account Solvency II demands as well as developing practice within insurers over the period - the suggestion being that 2009's effort was a little too Banking Industry-influenced.

While this document might feel at the outset like an idiot's guide to "quanting" operational risk (and bearing in mind the number of prospective standard formula applicants - 9 out of 10 in the UK - one may be needed soon!), it touches on a number of noteworthy technical matters, in particular:
  • The Definition section doesn't read well, but they have attempted to incorporate outcomes other than monetary loss into the Op Risk definition, which, in my experience, will improve discourse within firms. Are they attempting to squeeze strategic and reputational risks into this box, though?
  • Nice coverage of Boundary Events, and encouraging firms to consider them in their management of Op Risk.
  • Very specific treatment of Risk Tolerance throughout, used in preference to Risk Appetite - the logic being that Op Risk cannot be avoided, only tolerated, so tolerance levels should trigger "RAG"-type reporting up the chain. Nice work, and well justified, but I have certainly seen the expression "Zero Appetite" used for Op Risk, so no doubt this is not an industry-standard perspective yet! (p5-6)
  • No problems with their coverage of tried and tested techniques - "Top Down", RCSAs & Loss Event analysis (p9-10)
  • Nice turn of phrase regarding emerging risks on p9 - "...assess the proximity of new risks to the organisation". It may need to include an attempt at quantification to be fully useful for ORSA purposes.
  • Concept of residual risk arrives quite late in the day, but isn't omitted. Important, given how much qualitative, or spuriously quantitative, material is being promoted as aiding this measurement work (p10)
  • Seem to accept at the bottom of p10 that Internal Modellers must do more than curve fit on internal Op Risk Event data (more on what that fitting looks like after this list) - good news, I guess.
  • Internal Model validation pressures on current Op Risk quantification practices flagged directly (p16 in particular)
  • Guidelines on embedding Op Risk monitoring processes highlight just how much work some practitioners are managing to cover (p11). Quite disheartening for those with smaller budgets.
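On that curve-fitting point: the exercise in question is typically a frequency/severity ("loss distribution approach") simulation along the following lines - a minimal sketch with illustrative parameters, not anyone's actual calibration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frequency/severity simulation: Poisson event counts, lognormal severities.
# In practice both would be fitted to the firm's (sparse) internal event data.
lam = 3.0              # expected Op Risk events per year
mu, sigma = 11.0, 2.0  # lognormal severity of each event

n_sims = 100_000
annual_loss = np.array([
    rng.lognormal(mu, sigma, rng.poisson(lam)).sum()
    for _ in range(n_sims)
])

print(f"Mean annual Op Risk loss: {annual_loss.mean():,.0f}")
print(f"1-in-200 annual loss:     {np.percentile(annual_loss, 99.5):,.0f}")
```

The catch, and presumably the Forum's point, is that the 99.5th percentile of that simulated distribution is driven by events most firms have never experienced, which internal data alone cannot hope to calibrate.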
There are a few points to make on Section B around quantification:
  • Pretty scathing on Standard Formula relevance. (p14)
  • Scenario Analysis is sold as something of a panacea for the ills of incomplete Op Risk Event data sets, but there is no mention of the biases which permeate scenario creation - an exercise which is sadly hostage to the invitee list. (p14)
  • They expand further on scenario analysis, bringing the "severe but plausible" terminology to the table (p15) - see just below for how such estimates typically get turned into numbers.
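A crude percentile-matching sketch of that conversion, with invented workshop figures and conventions that vary from firm to firm:

```python
import numpy as np

# Invented workshop outputs for one scenario: a "typical" loss, read here
# as the median, and a "severe but plausible" loss, read here as the
# 90th percentile of a lognormal severity curve.
typical, severe = 2e6, 25e6

mu = np.log(typical)
sigma = (np.log(severe) - mu) / 1.2816        # z(90%) ≈ 1.2816
print(f"Implied lognormal: mu={mu:.2f}, sigma={sigma:.2f}")
print(f"Implied 1-in-200 loss: {np.exp(mu + 2.576 * sigma):,.0f}")
```

Note that the 1-in-200 figure is an extrapolation well beyond anything the room actually discussed, which is exactly where the validation pressures mentioned earlier start to bite.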
Finally, a few generic comments:
  • Is risk measurement - "a tool for embedding risk culture in the organisation"? I would say so, particularly in the Op Risk arena, where decision makers will need to be involved at scenario-compilation time.
  • That said, they then go on to reference "senior management sign-off" of scenario work, which is somewhat contradictory!
  • Overweight in references to "culture" and "tone at the top", like most white papers these days (see the FRC's efforts from the other week). The profession is playing with fire by shoehorning references to "culture" into everything.
  • A couple of horror-show schematics on pages 7 and 8 - the Forum must know how much time risk professionals lose walking non-experts through things like this. They serve no purpose and detract from the surrounding text.
  • Attempt on p9 to solicit business for ORIC?
It was Professor Jagger who accurately prophesied "You can't always get what you Quant" - I'd say the Risk profession concurs, based on these very welcome principles.
