March 2, 2022

Designing, building and deploying trustworthy AI systems

While offering great opportunities, AI systems also give rise to certain risks that must be handled appropriately and proportionately. We now have an important window of opportunity to shape their development. We want to ensure that we can trust the sociotechnical environments in which they are embedded. We also want producers of AI systems to gain a competitive advantage by embedding Trustworthy AI in their products and services. This entails seeking to maximise the benefits of AI systems while at the same time preventing and minimising their risks [1].

In its Ethics guidelines for trustworthy AI [1], the European Commission's High-Level Expert Group on AI proposes a useful framework for the realization of “trustworthy AI.” The report identifies seven key requirements, both technical and non-technical:

  • Human agency and oversight: AI systems should empower rather than alienate their users, by systematically ensuring human oversight and guaranteeing that human decisions remain autonomous when AI models are involved.
  • Technical robustness and safety: AI systems must be built with risk prevention in mind, taking into account possible attacks against the system and ensuring general safety and reliability.
  • Privacy and data governance: in accordance with the underlying principles behind the General Data Protection Regulation (GDPR), protecting user data (what data is collected and/or generated, who has access to it, and how it is used) is an essential component of a trustworthy AI system.
  • Transparency: humans must know when they are interacting with an AI system, and the decisions made by the AI system must be both traceable and explainable.
  • Diversity, non-discrimination, and fairness: AI systems must avoid unfair bias (which can arise from either the data used or the modeling performed on this data) and be designed according to the relevant accessibility principles; one possible bias check is sketched after this list.
  • Societal and environmental wellbeing: designers of AI systems should thoroughly evaluate their social and environmental impact, as well as “their effect on institutions, democracy and society at large.”
  • Accountability: during the entire lifecycle of AI systems (from design to deployment and use), AI systems and the organizations responsible for them must be held accountable. In particular, this includes a requirement for auditability of AI systems and all decisions made by these systems.

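As an illustration of the fairness requirement above, the following sketch shows one way unfair bias could be quantified on a model's decisions. It is a minimal example under illustrative assumptions: the `approved` and `gender` columns, the toy data, and the 0.10 threshold are all hypothetical, and the guidelines themselves do not prescribe any particular metric.

```python
# Minimal sketch: quantifying demographic parity on a model's loan decisions.
# Column names, toy data, and the 0.10 threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.max() - rates.min())

# Hypothetical model decisions: 1 = loan approved, 0 = rejected.
decisions = pd.DataFrame({
    "approved": [1, 0, 0, 1, 1, 1, 0, 0],
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
})

gap = demographic_parity_gap(decisions, outcome="approved", group="gender")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # the acceptable gap is a policy choice, not a technical constant
    print("Warning: approval rates differ materially across groups.")
```

In practice such a check would run on much larger samples and alongside complementary metrics (e.g. equalized odds), since no single statistic fully captures fairness.
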
At this stage, this document from the European Commission is only a first step, providing guidelines and a preliminary assessment method for evaluating trustworthy AI. However, we can expect these principles to guide upcoming regulations in financial services. For example, the Autorité de Contrôle Prudentiel et de Résolution (ACPR, the French banking regulator) has produced a discussion document on the governance of AI models [2], a first step towards new regulation, which adapts several of these general principles to the specific needs of financial institutions. Similarly, regulators outside the EU are developing their own frameworks, such as the FEAT principles published by the Monetary Authority of Singapore [3].

In practice, for AI systems to respect these guidelines and principles, care must be taken at every step of their design and lifecycle: data collection and selection, data preparation, model training and selection, designing and building explainability methods, deploying AI models and monitoring them in production, guaranteeing complete auditability at all steps, etc. To make the guidelines operational, DreamQuark has contributed to the collaborative effort led by Substra Foundation to create an “Assessment Framework for Responsible and Trustworthy Data Science” [4].
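
Among the technical steps above, auditability lends itself to a concrete illustration. The sketch below is a minimal example rather than any framework's prescribed design: it chains hash-linked records so that each lifecycle step leaves a tamper-evident trace. The class name, fields, and logged values are all hypothetical.

```python
# Minimal sketch of an audit trail spanning the AI lifecycle. The class,
# field names, and logged values are illustrative assumptions, not a
# standard API from any of the frameworks cited in this article.
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self):
        self.records = []

    def log(self, step: str, actor: str, details: dict) -> None:
        """Append a timestamped record for one lifecycle step."""
        payload = {
            "step": step,
            "actor": actor,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "details": details,
        }
        # Chain each record to the previous one's hash so that any later
        # tampering with the trail is detectable.
        prev_hash = self.records[-1]["hash"] if self.records else ""
        payload["hash"] = hashlib.sha256(
            (prev_hash + json.dumps(payload, sort_keys=True)).encode()
        ).hexdigest()
        self.records.append(payload)

trail = AuditTrail()
trail.log("data_collection", "alice", {"dataset": "loans_2021", "rows": 48210})
trail.log("model_training", "bob", {"algorithm": "gradient_boosting", "auc": 0.81})
trail.log("deployment", "carol", {"environment": "production", "version": "1.3.0"})
print(json.dumps(trail.records, indent=2))
```

Hash-chaining is one simple way to make an audit trail tamper-evident; a production system would typically also need access control and durable storage.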

This framework, based on the identification of risks associated with AI systems, provides an exhaustive set of best practices. In theory, each new data science project initiated by an organization should evaluate all of these best practices and implement those relevant to its specific use case. Additionally, similarly to the ISO 27001 standard for IT security, a high level of training on the key issues around trustworthy AI is expected of all data science practitioners in an organization.
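
To see what evaluating such a checklist per project might look like, here is a minimal sketch; the practice names and statuses are invented for illustration and do not come from the actual framework in [4].

```python
# Illustrative sketch: evaluating a project against a best-practice checklist.
# The items below are invented; the real framework in [4] defines its own
# risks and practices.
from dataclasses import dataclass

@dataclass
class Practice:
    name: str
    applicable: bool   # does this practice apply to the use case?
    implemented: bool  # has the project implemented it?

def outstanding(practices: list[Practice]) -> list[str]:
    """Return the applicable practices that are not yet implemented."""
    return [p.name for p in practices if p.applicable and not p.implemented]

project = [
    Practice("Document data provenance", applicable=True, implemented=True),
    Practice("Run bias audit on protected attributes", applicable=True, implemented=False),
    Practice("Provide per-decision explanations", applicable=True, implemented=False),
    Practice("Review federated learning setup", applicable=False, implemented=False),
]

print("Outstanding best practices:", outstanding(project))
```

Encoding the practices as data rather than prose makes the evaluation repeatable across projects and straightforward to report on.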

This high level of demand, in terms of both technical effort and employee training, makes building responsible AI systems extremely challenging for organizations that do not specialize in AI. One solution could be to entrust the building of AI systems to external service providers with this capability, but the organization would still need to check that, for every single AI system it deploys, all the requirements and best practices are correctly implemented.

Another possibility lies in the adoption of external AI tools, providing (i) automated implementations of technical best practices (such as explainability, auditability, fairness, robustness), ensuring that the underlying objectives are always met, regardless of the user building the model; and (ii) the necessary tools to enable the implementation by business users of the non-technical best practices (such as human oversight of AI models and accountability), which rely on processes rather than technical solutions.

Key takeaways
  • Standards for trustworthy AI systems are currently being defined and will soon be enforced by regulators in financial services
  • The standards lead to the definition of both technical and non-technical best practices, encompassing the entire lifecycle of AI systems
  • Adopting external AI tools can empower organizations to meet these stringent requirements

References

[1] European Commission High-Level Expert Group on AI, Ethics guidelines for trustworthy AI (2019)

[2] Autorité de Contrôle Prudentiel et de Résolution, Governance of Artificial Intelligence in Finance (2020)

[3] Monetary Authority of Singapore, Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector (2018)

[4] Substra Foundation, Assessment Framework for Responsible and Trustworthy Data Science