A timely warning to insurers that expect AI and algorithms to automate claims decisions.

The warning comes from the Apple Card row, in which David Heinemeier Hansson, the creator of Ruby on Rails, reported that the card offered him a credit limit many times higher than his wife’s despite her better credit score. “Goldman and Apple are delegating credit assessment to a black box,” Hansson said. “It’s not a gender-discrimination intent but it is a gender-discrimination outcome.”

The use of algorithms by lenders in credit decisions has drawn scrutiny in Congress. In June, the House Financial Services Committee heard testimony on algorithmic decision-making, including research that found bias against specific groups even where there was no intent to discriminate.

The three lessons are these:

  1. Biased algorithms lead to biased decisions (a minimal illustration follows this list)
  2. Policyholders can demand to know how a decision was made
  3. An algorithm is only as good as its data
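Lesson 1 is worth making concrete. The sketch below uses entirely synthetic data and a hypothetical proxy feature: the model never sees gender as an input, yet because historical approvals were depressed by a gender-correlated proxy, the learned decisions reproduce the gap. The disparate impact ratio at the end is one common way to measure such an outcome.

```python
# A minimal sketch of "bias without intent": the model never sees gender,
# but a correlated proxy feature reproduces a gendered outcome.
# All data here is synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)      # 0 = male, 1 = female (synthetic)
income = rng.normal(50, 10, n)      # identical distribution for both groups

# Hypothetical proxy (e.g. career-break history): correlated with gender,
# not with true creditworthiness.
proxy = rng.normal(gender * 1.0, 0.5, n)

# Historical approvals were (unfairly) depressed by the proxy.
score = 0.08 * income - 1.5 * proxy + rng.normal(0, 1, n)
approved = (score > np.median(score)).astype(int)

# Train WITHOUT the protected attribute: intent-free, outcome-biased.
X = np.column_stack([income, proxy])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

rate_m = pred[gender == 0].mean()
rate_f = pred[gender == 1].mean()
print(f"approval rate, men:   {rate_m:.2%}")
print(f"approval rate, women: {rate_f:.2%}")
print(f"disparate impact ratio: {rate_f / rate_m:.2f}")  # below 0.8 flags trouble
```

The same sketch also illustrates lesson 3: the model is faithful to its training data, so biased historical data yields biased decisions however well the algorithm itself is built.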

Most insurers access only a small fraction of their data, with the rest hidden in silos and in the unstructured content of emails, documents and metadata. Without access to the majority of relevant data, AI is of limited value.

Use solutions such as 360Retrieve to access and join all data, structured and unstructured, before embarking on AI plans.
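As an illustration of that join step (this is not 360Retrieve’s actual interface; the fields, claim IDs and keyword rules below are hypothetical), a minimal sketch might pull simple signals out of free-text emails and attach them to the structured claims record before any model sees the data:

```python
# Illustrative sketch only: joining structured claim records with signals
# extracted from unstructured text. All values here are made up.
import re
import pandas as pd

# Structured side: the core claims table.
claims = pd.DataFrame({
    "claim_id": ["C001", "C002"],
    "peril": ["escape of water", "theft"],
    "reserve": [4200.0, 1800.0],
})

# Unstructured side: free text pulled from emails/documents, per claim.
emails = {
    "C001": "Adjuster note: customer states the leak started last month...",
    "C002": "Customer cannot provide receipts; items 'roughly' valued.",
}

def extract_signals(text: str) -> dict:
    """Pull simple fraud-relevant signals out of free text."""
    return {
        "no_receipts": bool(re.search(r"no receipts|cannot provide receipts", text, re.I)),
        "hedged_valuation": bool(re.search(r"roughly|approximately|about", text, re.I)),
    }

# Join the extracted signals back onto the structured record.
signals = pd.DataFrame(
    [{"claim_id": cid, **extract_signals(txt)} for cid, txt in emails.items()]
)
enriched = claims.merge(signals, on="claim_id", how="left")
print(enriched)
```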

And remember that a well-configured self-service FNOL (first notification of loss) claims platform will ensure most fraudulent claims never get beyond FNOL to clog up the claims pipeline: 30% to 35% of home claims will be withdrawn as policyholders see their false and exaggerated claims exposed when photographic and video evidence is requested.
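As a back-of-envelope illustration of what that withdrawal rate means (the claim volume and average cost below are assumptions for the example, not figures from this article):

```python
# Back-of-envelope sums using the 30-35% withdrawal rate quoted above.
# Claim volume and average cost are hypothetical assumptions.
claims_per_year = 10_000   # assumed home-claims volume
avg_claim_cost = 2_500     # assumed average claim cost, GBP
for rate in (0.30, 0.35):
    withdrawn = claims_per_year * rate
    print(f"{rate:.0%} withdrawn -> {withdrawn:,.0f} claims, "
          f"~£{withdrawn * avg_claim_cost:,.0f} of exposure removed at FNOL")
```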

AI is not a strategy, as some think. It is a component of strategies, and it is beneficial only when the necessary master data management skills and resources are in place. And even then, as Apple and Goldman Sachs show, there are other potential problems to anticipate.