In the wake of the COVID-19 crisis, our reliance on AI has skyrocketed. Today, more than ever before, we look to AI to help us limit physical interactions, predict the next wave of the pandemic, disinfect our healthcare facilities and even deliver our food. But can we trust it?

In the latest report from the Capgemini Research Institute – AI and the Ethical Conundrum: How organisations can build ethically robust AI systems and gain trust – we surveyed over 800 organisations and 2,900 consumers to get a picture of the state of ethics in AI today. We wanted to understand what organisations can do to move to AI systems that are ethical by design, how they can benefit from doing so, and the consequences if they don’t. We found that while customers are becoming more trusting of AI-enabled interactions, organisations’ progress in ethical dimensions is underwhelming. And this is dangerous because once violated, trust can be difficult to rebuild.

Ethically sound AI requires a strong foundation of leadership, governance, and internal practices around audits, training, and operationalisation of ethics. Building on this foundation, organisations have to:

  1. Clearly outline the intended purpose of AI systems and assess their overall potential impact
  2. Proactively deploy AI to achieve sustainability goals
  3. Embed diversity and inclusion principles proactively throughout the lifecycle of AI systems to advance fairness
  4. Enhance transparency with the help of technology tools, humanise the AI experience, and ensure human oversight of AI systems
  5. Ensure technological robustness of AI systems
  6. Empower customers with privacy controls to put them in charge of AI interactions.

For more information on ethics in AI, download the report.