
Building trust and ensuring ethics in AI

Establishing a framework for responsible AI innovation that meets ethical standards and fosters stakeholder trust
Published 24 February 2025

AI holds immense potential to revolutionise industries. With that potential comes the responsibility for companies to develop and deploy AI systems ethically, transparently, and with the right controls. Drawing on our research into more than 1,000 AI use cases and their impact on the operating models of the world’s 200 largest insurers and leading solution vendors, this article explores how insurers – and organisations more broadly – can build trust through ethical AI practices: transparency, fairness, accountability, and robust control mechanisms.


The importance of trust

Trust is essential for the widespread acceptance and success of AI systems, not just in insurance but across many industries. For AI to succeed, all stakeholders – including customers, regulators, and employees – must feel confident that the technology is being used responsibly.

Trust is built on four key pillars:

Transparency: The key to building trust


Transparency is critical for fostering trust in AI systems. It involves making the inner workings of AI models understandable to technical experts and non-experts alike. This can be achieved through:

  • Explainable AI (XAI): Developing models that can provide clear explanations for their decisions. This is particularly important in high-stakes industries like insurance or healthcare, where decisions directly impact people’s lives (see the sketch after this list).
  • Open communication: Organisations should communicate openly about their use of AI technologies – what data is being used, how decisions are being made, and what safeguards are in place.
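
To make explainability concrete, below is a minimal sketch (in Python, using scikit-learn) of per-decision explanations for a simple linear model. The feature names and data are entirely hypothetical; the point is that, for a linear model, each coefficient multiplied by the corresponding feature value gives that feature’s additive contribution to the decision score.

    # Minimal XAI sketch: explain one automated decision from a linear model.
    # Feature names and data are synthetic and hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(seed=0)
    feature_names = ["driver_age", "claims_last_5y", "vehicle_value"]
    X = rng.normal(size=(500, 3))                    # standardised synthetic inputs
    y = (X @ np.array([-0.8, 1.5, 0.4]) + rng.normal(size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    applicant = X[:1]                                # one application to explain
    contributions = model.coef_[0] * applicant[0]    # per-feature log-odds impact
    for name, value in sorted(zip(feature_names, contributions),
                              key=lambda pair: -abs(pair[1])):
        print(f"{name:>15}: {value:+.3f}")
    print("decision:", "refer" if model.predict(applicant)[0] else "approve")

More complex models need dedicated explanation techniques, but the principle is the same: every automated decision should come with a human-readable account of what drove it.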

Take Allianz, for example. The company has taken a proactive approach by building a responsible AI framework that centres on data ethics and the interests of all stakeholders. By making transparency a core part of their processes, Allianz helps ensure that both customers and regulators trust how they use AI.


Fairness: Ensuring equitable outcomes


If not carefully managed, AI can amplify biases present in its training data, which is why ensuring fairness requires a commitment to continuously monitoring and improving algorithms. Key strategies include:

  • Bias audits: Regularly auditing algorithms to identify any biases that may have crept into decision-making processes (a simple audit sketch follows this list).
  • Inclusive data sets: Ensuring that training data represents diverse populations so that models do not disproportionately favour one group over another.
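
A bias audit can start very simply: compare outcome rates across groups and flag disparities. The Python sketch below uses synthetic data and the widely cited “four-fifths” disparate-impact rule of thumb; a real audit would cover multiple metrics and protected attributes.

    # Minimal bias-audit sketch using the four-fifths disparate-impact ratio.
    # Group labels and decisions are synthetic, for illustration only.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    group = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])
    approved = rng.random(1000) < np.where(group == "A", 0.60, 0.45)

    rates = {g: approved[group == g].mean() for g in ("A", "B")}
    ratio = min(rates.values()) / max(rates.values())
    print("approval rates:", rates)
    print(f"disparate impact ratio: {ratio:.2f}",
          "-> flag for review" if ratio < 0.8 else "-> within threshold")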

Generali's approach to fairness follows the S.A.F.E methodology (Security, Accuracy, Fairness, Explainability), which guides the development of their algorithms. This helps ensure that their systems operate fairly while maintaining high standards of accuracy and security.


Accountability: Taking responsibility for AI outcomes


Accountability is about ensuring clear lines of responsibility when things go wrong with an AI system. This includes:

  • Human oversight: For critical decisions, like those in healthcare or financial services, organisations should keep a human in the loop to oversee automated processes (see the sketch after this list).
  • Error management: Having mechanisms in place to detect errors early and correct them before they cause harm.
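
As a minimal illustration of keeping a human in the loop, the Python sketch below auto-decides only when the model is confident and routes everything else to a reviewer. The confidence threshold is a hypothetical choice that would be tuned to each use case and risk appetite.

    # Minimal human-in-the-loop sketch: low-confidence predictions are escalated
    # to a human reviewer. The threshold is an illustrative assumption.
    from dataclasses import dataclass

    CONFIDENCE_THRESHOLD = 0.85  # assumption: tune per use case and risk appetite

    @dataclass
    class Decision:
        outcome: str
        confidence: float
        needs_human_review: bool

    def triage(predicted_outcome: str, confidence: float) -> Decision:
        """Auto-decide only when the model is confident; otherwise escalate."""
        return Decision(
            outcome=predicted_outcome,
            confidence=confidence,
            needs_human_review=confidence < CONFIDENCE_THRESHOLD,
        )

    print(triage("approve_claim", 0.97))  # auto-decided
    print(triage("deny_claim", 0.62))     # escalated to a human reviewer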

Many organisations have implemented frameworks to ensure accountability, among them Allianz, which has established cross-functional teams to embed Privacy by Design principles throughout their AI implementation process. This approach ensures accountability at every stage of development.


Controls: Safeguarding against risks


Robust control mechanisms are essential for managing the risks associated with AI technologies. Our research shows that the top 200 insurers globally often implement the following controls:

  • Risk management frameworks: Implementing frameworks that assess the potential risks of deploying AI systems – such as Zurich’s risk management tools designed to monitor model performance (see the monitoring sketch after this list).
  • Regulatory compliance: Ensuring compliance with relevant laws and regulations such as GDPR or HIPAA. Many organisations also collaborate with regulators to shape adaptive policies for responsible AI use.
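
Monitoring of the kind described above can be as simple as comparing live input distributions against the training-time baseline. The Python sketch below uses the population stability index (PSI) on synthetic data; the 0.2 alert threshold is a common rule of thumb, not a regulatory requirement.

    # Minimal model-monitoring sketch: population stability index (PSI) to
    # detect input drift. Data and thresholds are illustrative assumptions.
    import numpy as np

    def psi(baseline, live, bins=10):
        """PSI between a training-time baseline and live production data."""
        edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf     # catch values outside the baseline range
        expected = np.histogram(baseline, edges)[0] / len(baseline)
        actual = np.histogram(live, edges)[0] / len(live)
        expected = np.clip(expected, 1e-6, None)  # avoid log(0)
        actual = np.clip(actual, 1e-6, None)
        return float(np.sum((actual - expected) * np.log(actual / expected)))

    rng = np.random.default_rng(seed=2)
    baseline = rng.normal(0.0, 1.0, 10_000)       # feature distribution at training time
    live = rng.normal(0.5, 1.2, 10_000)           # shifted distribution in production

    score = psi(baseline, live)
    print(f"PSI = {score:.3f} ->", "alert: investigate drift" if score > 0.2 else "stable")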

Munich Re is worth highlighting for the way they engage in industry forums such as the MAS Veritas Consortium to develop responsible AI principles. This proactive approach helps them mitigate risks while remaining compliant with evolving regulations.


As the insurance industry (and others) continues to adopt advanced AI technologies, building trust through ethical practices will be key to long-term success. By focusing on transparency, fairness, accountability, and robust control mechanisms, businesses can ensure that their use of AI aligns with both regulatory requirements and societal expectations.


Ultimately, responsible innovation requires a balanced approach – one that embraces technological advancements while safeguarding against potential risks.


Can organisations navigate the complexities of AI, build trust, and foster confidence among stakeholders? With the right collaboration and adherence to ethical guidelines, the answer is a resounding yes.
