
Trustworthy AI: Human Agency and Oversight

In its Ethics guidelines for trustworthy AI, the European Commission recognizes “human agency and oversight” as the first requirement for trustworthy AI [1]. This clearly implies that AI systems should not be built to replace humans, but rather to empower them.

The European Commission’s guidelines reaffirm the importance of human oversight: human intervention must be possible not only during the design cycle but also, crucially, throughout the operation of the AI system. This has implications both for how AI should be integrated into business processes and for what information it should provide to its users.

1. Integrating AI systems in business processes

Ensuring human agency for a trustworthy AI requires a careful consideration of the system’s operation and its interactions with end-users. Moreover, Gartner recommends that business leaders should “focus on worker augmentation, not worker replacement” [2]. Therefore, integrating AI systems in business processes with human agency is key both for ensuring trustworthy AI and for making the best use of its predictive capabilities.

Designing for the operation phase, i.e. the way AI systems will be integrated into the business, is a key, and often overlooked, aspect of AI projects. One must carefully consider existing processes to determine where the end results of an AI system will deliver the most value.

For instance, say you have built an AI model which, at least in theory, will double the conversion rate of your marketing campaign selling new insurance products to existing customers. While it may be tempting to use it to replace the teams that previously handled these campaigns, both the trustworthy AI guidelines and Gartner’s recommendation suggest that empowering those teams will ultimately be more effective.

Thus, it is important to build AI systems which can easily and seamlessly be integrated with existing applications such as CRMs, CLMs or Campaign Marketing Systems.
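As a minimal sketch of what such an integration could look like, the snippet below converts raw model scores into follow-up tasks shaped the way a CRM might ingest them, so that agents act on predictions inside their existing tool rather than a separate AI dashboard. The task schema, field names, and score threshold here are illustrative assumptions, not any particular CRM’s API.

```python
from dataclasses import dataclass

@dataclass
class CrmTask:
    """A follow-up task as a CRM might ingest it (hypothetical schema)."""
    customer_id: str
    action: str
    priority: str
    note: str

def predictions_to_crm_tasks(predictions, threshold=0.5):
    """Turn (customer_id, score) pairs from a model into CRM tasks.

    Only leads above the threshold are surfaced, so the AI augments
    the team's pipeline instead of flooding it.
    """
    tasks = []
    for customer_id, score in predictions:
        if score < threshold:
            continue  # skip leads unlikely to be worth an agent's time
        priority = "high" if score >= 0.8 else "normal"
        tasks.append(CrmTask(
            customer_id=customer_id,
            action="contact_about_new_product",  # hypothetical action code
            priority=priority,
            note=f"Model conversion score: {score:.2f}",
        ))
    return tasks

tasks = predictions_to_crm_tasks([("c-001", 0.91), ("c-002", 0.42), ("c-003", 0.63)])
```

In a real deployment the returned tasks would be pushed through the CRM’s own import or API mechanism; the point of the sketch is that the model’s output lands in the workflow the team already uses.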

2. Enabling oversight of AI systems

Providing AI-powered predictions to business users is only the first step in building trustworthy and efficient AI systems: the right information must be delivered in order to empower the users to make informed decisions based on AI models.

First, end-users should be made aware that they are interacting with an AI-powered system. Once again, this is critical both for trustworthiness and for ensuring that end-users engage adequately with the AI’s predictions.

Additionally, the actual content provided to the end-user is key. As mentioned in the European Commission guidelines, end-users “should be given the knowledge and tools to comprehend and interact with AI systems to a satisfactory degree.” This directly implies a requirement for transparency of model decisions, which is another one of the seven pillars of trustworthy AI.

Systematically providing explainability tools along with any AI-powered recommendation is critical both to build the users’ confidence in an AI system, and to make the recommendations directly actionable.
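One simple way to attach an explanation to every recommendation, sketched below for an assumed linear scoring model, is to report each feature’s contribution (weight times value) alongside the score. The weights and feature names are made up for illustration; in practice a dedicated explainability library would play this role for more complex models.

```python
def explain_prediction(weights, features, top_k=2):
    """Score a customer with a linear model and return the top feature
    contributions, so the recommendation ships with its own explanation.

    For a linear model, each feature's contribution is simply
    weight * value; ranking by absolute contribution surfaces the
    factors that drove the prediction.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    score = sum(contributions.values())
    return score, ranked[:top_k]

# Hypothetical model weights and one customer's features
weights = {"recent_logins": 0.6, "policy_age_years": -0.1, "prior_purchases": 0.9}
features = {"recent_logins": 3.0, "policy_age_years": 8.0, "prior_purchases": 1.0}
score, top_factors = explain_prediction(weights, features)
```

Surfacing `top_factors` next to the score in the CRM lets the end-user see at a glance why a customer was flagged, which makes the recommendation both more trustworthy and more actionable.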

Finally, human agency and oversight implies that the end-user of AI predictions must also be empowered to “self-assess or challenge the system,” following the language of the European Commission guidelines. This implies the design of workflows, in existing CRMs or similar applications, where the user will have the final say on what actions to take. This can of course be combined with a process whereby the user will provide feedback which can then be used by the designer of the AI system to improve either the modeling itself or the justifications provided along with the results.
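Such a final-say workflow could be sketched as follows: the AI’s recommendation is only applied if the user accepts it, and every decision, including overrides, is logged as feedback for the system’s designers. The decision labels and log format are illustrative assumptions.

```python
def review_recommendation(recommendation, user_decision, feedback_log):
    """Apply an AI recommendation only if the user accepts it.

    The user keeps the final say; every decision (accept or override)
    is appended to feedback_log so the AI system's designers can later
    improve the model or its justifications.
    """
    accepted = user_decision == "accept"
    feedback_log.append({
        "recommendation": recommendation,
        "decision": user_decision,
        "accepted": accepted,
    })
    return recommendation if accepted else None

log = []
applied = review_recommendation("offer_premium_plan", "accept", log)
blocked = review_recommendation("offer_basic_plan", "override", log)
```

Here an accepted recommendation is returned for execution while an overridden one yields no action, and both outcomes land in the feedback log that closes the loop back to the model’s designers.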


To ensure both their trustworthiness and their efficiency, AI systems should be integrated directly into existing business workflows, and end-users must be given enough information to interact adequately with the AI system.


[1] European Commission High-Level Expert Group on AI, Ethics guidelines for trustworthy AI (2019)

[2] Gartner, Lessons From Early AI Projects (2017)
