Step forward Data Quality! With the considerable efforts already expended by UK internal model applicants on plugging in their calculation kernels, risk calibration, loss function fitting, correlations and so on, the FSA's latest review findings take us right back to the starting point of SCR generation - data inputs - and they are not impressed.
The FSA began working on this topic with the industry as far back as this time last year, and are not scheduled to finish this thematic review until Q3 2013. Bizarrely, they note in the introduction to these review findings that their scoping tool released in July 2011 aimed to help assess compliance with both Level 1 and draft Level 2 - the latter of which wasn't released to the industry (i.e. leaked) until late October. Quelle chance, mes amis?
Splitting hairs on timings aside, just reading the five section headings of their review work would be enough to reduce many BAU staff to a quivering wreck ("Implementation of the Data Policy"? What, today?), so I wasn't expecting a glowing report. That said, the quality of the data that ultimately feeds today's technical provisions, capital requirements and so on is seemingly fit enough for purpose, so a full-on hatchet job would have been a poor reflection on both the industry and the regulator.
Assuming a 2014 go-live date (looking unlikely as of 9pm GMT today!), the areas of major concern for insurers, based on these preliminary findings, would be:
- Difficulty in assigning data ownership - there will be enough Pontius Pilates in the BAU world who will happily wash their hands of data ownership until the cows come home. Programmes will need to be extremely forceful in assigning ownership and ensuring it sticks.
- Inability to articulate "accurate", "complete" and "appropriate" - this should have been an easy win, so I'm surprised it is seemingly an issue. Realistically, should we expect the business to take ownership of data sources when we cannot define what is and isn't acceptable output from them? (A rough sketch of how such checks might be codified follows this list.)
- Data Dictionary/Data Directory confusion - a suite of pretty scathing findings in this field, suggesting both over-simplicity and over-complexity have been found in the workings of Data teams.
- Spreadsheet controls and non-compliance with end user computing policies - onerous expectations on the face of it (paragraphs 4.41 and 4.42), which will come as a shock to programme budgets and end users alike.
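Picking up the point above about defining "accurate", "complete" and "appropriate": here is a minimal sketch of how those words might be turned into testable checks on a policy data extract. It is purely illustrative and not drawn from the FSA's paper - the library (pandas), the column names (policy_id, sum_assured, inception_date) and the tolerances are all my own assumptions.

```python
# Illustrative sketch only: turning "complete", "accurate" and "appropriate"
# into testable checks on a policy data extract. Column names and tolerances
# below are assumptions for the example, not FSA guidance.
import pandas as pd


def check_complete(df, required_cols):
    """Completeness: proportion of missing values in each field the kernel needs."""
    return {col: float(df[col].isna().mean()) for col in required_cols}


def check_accurate(df, reporting_date):
    """Accuracy: flag records whose values fall outside plausible ranges."""
    return df[(df["sum_assured"] <= 0) | (df["inception_date"] > reporting_date)]


def check_appropriate(df, expected_cols):
    """Appropriateness: fields present in the extract but not needed by the calculation."""
    return set(df.columns) - set(expected_cols)


if __name__ == "__main__":
    policies = pd.DataFrame({
        "policy_id": [1, 2, 3],
        "sum_assured": [100_000, -5_000, 250_000],            # -5,000 fails the accuracy check
        "inception_date": pd.to_datetime(["2010-01-01", "2011-06-30", "2014-03-01"]),
        "marketing_segment": ["A", "B", "B"],                  # not needed, so fails appropriateness
    })
    required = ["policy_id", "sum_assured", "inception_date"]
    print(check_complete(policies, required))
    print(check_accurate(policies, pd.Timestamp("2012-12-31")))
    print(check_appropriate(policies, required))
```

Even something this crude gives the business a concrete definition to sign up to, which is half the battle of the ownership point above.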
Some other interesting points made in the review include:
- Firms are using either their Risk Committees or a bespoke "data steering" committee as their data governance body - pretty sure the Risk Committees won't fancy this as long-term work.
- A number of suggestions as to which areas are not currently being addressed consistently when assessing materiality (pp. 11-12)
- Some very useful commentary on data classification methods (p. 13)
- Suggestion that, as I expected, the techniques applied to assessing the quality of data provided by third parties are not robust enough - industry-wide consensus on how to interrogate your outsourcing parties would be useful in this respect.
- A rather strange comment on poorly designed/controlled data warehouses - I can only assume they have seen one or more horror stories on their travels, as the warehouse is surely the way to go!