Solving insurance's sizable data problem

The rush to get ready for IFRS 17 shows the insurance industry is still seriously lagging when it comes to managing its data effectively.
By Yushaa Abrahams, CEO, Astus Consulting.
Johannesburg, 22 Feb 2023

The recent scramble to get ready for IFRS 17 points to a systemic issue that insurers need to address, not only to position themselves for the next regulatory requirement but also to improve their processes.

IFRS 17, the latest standard from the International Accounting Standards Board, came into effect on 1 January. It sets out to improve how insurers report to stakeholders.

It's a complex standard, but it essentially requires insurers to report on their book at a greater level of granularity, with a new emphasis on understanding how it affects the profit and loss statement rather than just the balance sheet.

Effectively, this means actuaries and accountants or financial managers must work more closely together.

The process of getting ready for IFRS 17 demonstrated once again that the insurance industry is seriously lagging when it comes to managing its data effectively.

Most insurers will have investigated fixing their systems to comply with IFRS 17 and, finding the solution too expensive, put in place the minimum needed to comply, adding yet another layer of complexity to the data environment.

And, of course, when a new requirement emerges, as it inevitably will, yet another process will have to be bolted onto the system.

The inability to manage data effectively is a critical issue because, as many other industries have discovered, data, and the ability to use it well, is the key to competitive advantage in the 21st century.

Data proficiency means an insurer can report more quickly and comply with existing and new regulations more easily; data-proficient insurers can reduce costs and time associated with managing their data.

More important still, data proficiency means insurers can interrogate their data more deeply to understand their business better, and identify improvements and new business opportunities.

Most insurers battle with a heterogeneous data environment built on multiple policy administration systems. During every reporting cycle, the extracts from these systems need to be transformed and validated in preparation for running through the actuarial models. This largely manual process can tie up many team members for anything from three to 10 days.

Finding solutions

This kind of approach is obviously counterproductive: it's messy, clumsy, time-consuming and expensive.

Worst of all, it generates large and growing amounts of duplicated data, creating an environment that is hard to manage, carries significant operational risk and cannot easily be interrogated to generate useful insights for the business.

The obvious solution, a brand-new data management system, is not the answer. For one thing, it would be enormously expensive and would involve tampering with the "system of record", something most insurers are understandably reluctant to do.

A better, more elegant approach is to create a parallel "system of insight" that draws its information from the system of record, but in which the data is properly managed, enabling innovation based on the improved insight it provides.

Actuarial and insurance industry firm MBE Consulting advises that good practice requires governance processes to be in place for managing data quality. It emphasises that good quality data is a prerequisite for providing reliable results for financial reporting and for producing management information that can be used for confident decision-making.

In this context, proper data management would involve generating a set of business rules governing how data is captured, along with its metadata.

Metadata would identify the document and various other elements about it, as determined by the business rules. This information about the data would be attached to the document in the system of insight.
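As a rough illustration of what such business-rule-driven metadata might look like, the sketch below attaches a few metadata fields to a single policy document. The field names, the grouping rules and the content fingerprint are hypothetical assumptions made for illustration, not a description of any particular product or insurer's rules.

import hashlib
import json
from datetime import date

# A hypothetical policy record as it might arrive from a policy administration system.
policy_document = {
    "policy_number": "POL-000123",
    "product": "whole_life",
    "inception_date": "2021-07-01",
    "premium": 450.00,
    "currency": "ZAR",
}

# Illustrative business rules that derive metadata from the document itself.
def build_metadata(doc, source_system):
    return {
        "source_system": source_system,            # which administration system supplied the record
        "portfolio": doc["product"],               # grouping used by the actuarial models
        "cohort_year": doc["inception_date"][:4],  # IFRS 17 groups contracts into annual cohorts
        "captured_on": date.today().isoformat(),   # when the system of insight ingested the record
        "content_hash": hashlib.sha256(            # fingerprint of the content, used later to spot changes
            json.dumps(doc, sort_keys=True).encode()
        ).hexdigest(),
    }

metadata = build_metadata(policy_document, source_system="PAS-A")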

This approach is known as document-centric storage, which means all the information relating to, say, a policy is stored in one place, including any changes made. Metadata would make it easy to identify any documents that are new, or to which changes have been made, and these alone would need to be processed for the next reporting cycle: the vast majority of the policies would not have changed and therefore would not need to be reprocessed.

As a result, the data extraction and processing for each reporting cycle would be greatly reduced, as it would involve only new or changed data, and much of the process could be automated. Not only would this save time and manpower, it would also radically reduce the amount of processing power and storage needed, leading to significant savings.
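To show how metadata could drive that incremental step, the minimal sketch below compares each document's current fingerprint with the one recorded at the previous reporting cycle and keeps only new or changed policies for reprocessing. It assumes the hypothetical content_hash field from the earlier sketch.

# previous_hashes maps policy_number -> content_hash recorded at the last reporting cycle.
# documents is an iterable of (policy_document, metadata) pairs held in the system of insight.
def documents_to_reprocess(documents, previous_hashes):
    changed = []
    for doc, meta in documents:
        key = doc["policy_number"]
        if previous_hashes.get(key) != meta["content_hash"]:
            changed.append(doc)  # new policy, or one that changed since the last cycle
    return changed

# Only the returned subset is transformed, validated and fed to the actuarial models;
# unchanged policies are skipped, cutting both processing time and storage.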

In other words, the metadata could be easily used by an appropriate technology tool to streamline the processing of the data, and to make it available to the company's various systems.

In addition, it should be evident that metadata would make it much easier for the insurer to comply with future regulations, whatever they might be. No customised processes have to be created and bolted on for each new regulation; the data is properly catalogued and can be used as needed.

Clearly, a key element here is to identify the right technology tool to enable this approach. Equally important, the insurer needs to work with specialists who are versed in the ever-changing regulations and in developing the all-important business rules that create the metadata, and who will guide the process of creating the system of insight.

This combination of the right technology and the right people can position insurers to manage their data better and more cost-effectively, and to go beyond compliance, leveraging their data to become more competitive.
