
Ethical AI: 5 key principles for insurers

Jen Tan
24-Apr-2024

AI can enhance the efficiency and profitability of insurance companies by automating manual tasks, improving risk assessment and much more.


It is no longer a niche technology, either; Genpact revealed this year that 87% of carriers are investing more than $5m in AI-related technologies each year.

However, improper implementation can lead to negative outcomes, such as denying legitimate claims or rendering certain demographics ineligible or more costly to insure.

At Artificial, we recognise the immense potential of AI while also understanding these concerns. As we continue to support and adhere to regulatory frameworks like the EU's AI Act, agreed in December 2023, our business model inherently guards against the risks of unethical AI use, ensuring our technology serves to support rather than dictate underwriting decisions.

By exploring five key principles for ethical AI, this article aims to provide a clear starting point for insurance firms that want to consider the implications of this powerful technology for their business.

Safety of AI systems

Insurers should ensure that their AI systems are safe, technically robust and performing correctly. This begins with meticulous attention to the quality of the data used, ensuring it is relevant, unbiased, complete, and regularly updated. Otherwise, the risk of skewed results and erroneous conclusions increases significantly.
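As a concrete illustration, the sketch below runs a few basic data-quality checks on a hypothetical policy dataset using pandas. The file name, column names (inception_date, vessel_type) and thresholds are purely illustrative assumptions; a real pipeline would use its own schema and tolerances.

```python
import pandas as pd

# Minimal data-quality checks on a hypothetical policy dataset.
# Column names and thresholds are illustrative only.
policies = pd.read_csv("policies.csv", parse_dates=["inception_date"])

# Completeness: flag columns with a high share of missing values.
missing_share = policies.isna().mean()
print("Columns with >5% missing values:")
print(missing_share[missing_share > 0.05])

# Freshness: warn if the newest record is older than 12 months.
age_days = (pd.Timestamp.now() - policies["inception_date"].max()).days
if age_days > 365:
    print(f"Warning: most recent record is {age_days} days old")

# Representation: check how many examples each segment contributes,
# since thinly represented segments tend to produce skewed predictions.
print(policies["vessel_type"].value_counts())
```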

Next, insurers should rigorously test and audit their AI algorithms. These practices not only validate the accuracy of a company’s systems, but also help to identify and rectify any vulnerabilities.
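One lightweight way to make such testing routine is to gate every new model version behind an automated validation check. The scikit-learn sketch below, using synthetic data in place of real rating factors, asserts that a candidate model's error on held-out data stays within an agreed tolerance; the function name, model choice and threshold are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

def validate_model(X, y, max_mae):
    # Hold out a validation set and refuse to release a model whose
    # error exceeds the agreed tolerance.
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    model = Ridge().fit(X_train, y_train)
    mae = mean_absolute_error(y_val, model.predict(X_val))
    assert mae <= max_mae, f"Validation MAE {mae:.2f} exceeds tolerance {max_mae}"
    return model

# Synthetic data standing in for real rating factors and claims costs.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ np.array([1.0, 0.5, 0.0, -0.3, 2.0]) + rng.normal(scale=0.1, size=500)
validate_model(X, y, max_mae=0.5)
```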

And speaking of vulnerabilities: safeguarding against cyber threats is also paramount to safe AI use. Insurers should strengthen their cybersecurity to guard against attacks such as data poisoning, which can corrupt models and lead to incorrect decision-making.

Lloyd’s of London considers this so crucial that they recently published a report on Generative AI and the cyber threat landscape, highlighting how important it is that ‘businesses and the insurance industry take proactive steps to manage the potential changes to the threat landscape.’

Fairness of data for training and applications

AI applications should be fair, unbiased and must not discriminate based on race, gender, age, religion, disability, or any other characteristic when used for insurance pricing and claims handling. Efforts to prevent gender discrimination have already been made in insurance, such as the EU Gender Directive, which prohibits insurers from using gender as a factor when determining premiums and benefits.

Insurers should also ensure that the data used to train AI models is representative of the entire population. For example, a Marine Hull insurer with a history of insuring a small number of cruise ships may find their data-driven machine learning model under-prices cruise ships compared to other vessel types.

To prevent under-pricing that could result in loss-making business and job losses, or over-pricing that is unfair to the insured, insurers should identify risk groups with incomplete data and improve predictions for these groups by obtaining external data or using expert adjustments.
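To make this concrete, a simple diagnostic is to compare sample counts and prediction bias per risk group on back-testing data. The sketch below assumes a hypothetical back-test file with actual_claims and predicted_claims columns; segments with few policies and a large pricing bias would be candidates for external data enrichment or expert adjustment.

```python
import pandas as pd

# Illustrative check for thinly represented risk groups. The file and
# column names are hypothetical.
data = pd.read_csv("pricing_backtest.csv")

summary = data.groupby("vessel_type").agg(
    n_policies=("actual_claims", "size"),
    avg_actual=("actual_claims", "mean"),
    avg_predicted=("predicted_claims", "mean"),
)
summary["bias"] = summary["avg_predicted"] - summary["avg_actual"]

# Flag segments with few policies and systematic under-pricing, which
# should be corrected with external data or expert adjustment.
print(summary[(summary["n_policies"] < 50) & (summary["bias"] < 0)])
```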

Explainability and accountability of models

Even though explaining ‘black boxes’ like machine learning models and neural networks can be challenging, insurers should keep transparent documentation about the data used to train AI models, the technical details of the algorithm, and how AI influences decision-making.

A growing body of research in explainable artificial intelligence (XAI) aims to make AI systems more transparent by providing human-understandable explanations of how they reach their outputs. Where sufficient explainability cannot be achieved, some predictive accuracy may need to be traded for a simpler, more interpretable model.
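For instance, one widely used XAI technique is permutation importance, which measures how much a model's accuracy degrades when each input feature is shuffled. The scikit-learn sketch below uses synthetic data and illustrative rating-factor names to show the idea; it is not a description of any particular production model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic data with illustrative rating factors, not a real schema.
rng = np.random.default_rng(1)
feature_names = ["vessel_age", "tonnage", "route_risk", "crew_size"]
X = rng.normal(size=(400, len(feature_names)))
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=400)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Document which factors drive the model's predictions most strongly.
for name, importance in sorted(
    zip(feature_names, result.importances_mean), key=lambda pair: -pair[1]
):
    print(f"{name}: {importance:.3f}")
```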

In all cases, insurers should establish clear lines of accountability for any mishaps, enabling them to respond and rectify the situation straight away.

Privacy by design

Given the nature of the insurance industry, insurers often handle personal data. It is therefore important for insurers to ensure privacy while implementing AI into their business, fostering trust with their customers and meeting regulatory requirements.

By adopting a ‘privacy by design’ approach from the outset, insurers should embed privacy and data protection principles at every stage of the lifecycle of their AI systems. When designing AI solutions, they should proactively assess the potential negative impacts of AI deployment on policyholders and society and implement measures to protect personal data and mitigate privacy risks.

From data anonymisation to encrypting sensitive data and providing transparency about how data is used, insurers must ensure compliance with privacy regulations such as the General Data Protection Regulation (GDPR) in the EU.
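As a small illustration of privacy by design, direct identifiers can be pseudonymised with a keyed hash before data ever reaches an AI pipeline. The sketch below uses Python's standard hmac and hashlib modules; the key handling and field names are simplified assumptions, and a real deployment would also consider encryption at rest and in transit.

```python
import hashlib
import hmac

# The secret key would be held outside the analytics environment,
# e.g. in a secure vault; this value is a placeholder.
SECRET_KEY = b"replace-with-a-key-from-a-secure-vault"

def pseudonymise(value: str) -> str:
    # Keyed hash: stable for joining records, not reversible without the key.
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"policyholder_name": "Jane Doe", "policy_id": "POL-1234", "sum_insured": 500000}
safe_record = {
    "policyholder_ref": pseudonymise(record["policyholder_name"]),
    "policy_ref": pseudonymise(record["policy_id"]),
    "sum_insured": record["sum_insured"],  # non-identifying fields pass through
}
print(safe_record)
```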

Respecting human autonomy

Total reliance on AI technology and algorithms should be avoided by ensuring human oversight throughout the AI system’s lifecycle. AI applications should respect human autonomy and decision-making.

Having a human in the loop is crucial for ensuring safety, supporting continuous learning, and resolving issues that arise from AI systems; ultimately, it empowers human underwriters rather than replacing them.
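A simple way to encode this in practice is a routing rule that refers low-confidence or high-impact cases to an underwriter rather than ever binding them automatically. The sketch below is illustrative only; the thresholds, field names and confidence score are assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    predicted_premium: float
    model_confidence: float  # e.g. 0.0 - 1.0 score from the pricing model
    sum_insured: float

def route(quote: Quote) -> str:
    # Low-confidence or high-value cases always go straight to a human.
    if quote.model_confidence < 0.8 or quote.sum_insured > 10_000_000:
        return "refer_to_underwriter"
    # Even routine cases are proposals for approval, never auto-bound.
    return "propose_for_underwriter_approval"

print(route(Quote(predicted_premium=12_000, model_confidence=0.65, sum_insured=2_000_000)))
```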

Artificial has long believed in such an approach, whereby underwriters always have oversight and control of decisions made by an algorithm. By adhering to these ethical principles, insurers can benefit from AI applications while maintaining the trust and confidence of their stakeholders.

If you’d like to know more about implementing AI in your insurance business, get in touch.
