AI in general, 2018

Draft Ethics Guidelines for Trustworthy AI

European Union


High-Level Expert Group on Artificial Intelligence


Accountability, Bias, Transparency, Human control, Explainability, Data quality, Fairness, Safety, Diversity, Privacy, Inclusion, Testing, Validation


The Draft Ethics Guidelines for Trustworthy AI were produced by the European Commission’s High-Level Expert Group on Artificial Intelligence. The group comprises 52 representatives from academia, civil society and industry, selected through an open process. The Group was tasked with creating ethics guidelines as well as policy and investment recommendations. The present document is a draft of the first deliverable, whose final version is scheduled for March 2019; the second deliverable is due in May 2019.
The report describes AI as “one of the most transformative forces of our time”, with the potential to create enormous societal benefit while harbouring significant risks. To minimise the risks and maximise the benefits, the Group proposes the notion of human-centric, Trustworthy AI, built on two fundamental components: a strong “ethical purpose” and robust technical implementation. According to the report, trust is an absolute prerequisite for developing AI for the benefit of individuals and societies. Trust is required in three interdependent layers: the technology itself; the rules, laws and norms governing it; and the business and public governance models of the various AI providers.
The Guidelines offer a complete framework for Trustworthy AI, unfolded in three increasingly concrete steps. First, the basic ethical principles are outlined; guidance on their realisation follows, covering both technical and non-technical aspects; and finally, an assessment list is provided to support operationalisation. The document’s intended audience includes all relevant stakeholders, both private and public. The guidelines are not intended to replace regulation or policy-making but to inform such efforts. While they offer general principles, the report stresses that these must be tailored to the specific context of application.
Starting with “ethical purpose”, the guidelines emphasise that the EU’s rights-based approach should be followed. According to this, the EU Treaties and the Charter of Fundamental Rights provide the basis for defining the ethical principles, which are then operationalised. The rights to be respected are dignity, freedoms, equality and solidarity, citizens’ rights and justice. These rights together constitute the EU’s human-centric approach, which must be upheld in all law-making. This approach also ensures that an ethical lens is applied when evaluating how technology should be used, rather than just considering how it could be used. A further advantage of this approach is that it reduces regulatory uncertainty because clear norms exist. In its adoption of a fundamental rights framework, the HLEG follows in the footsteps of the Oviedo Convention, which took a similar approach in relation to medical technologies, upholding the “primacy of the human being.”
It is this respect for fundamental rights, which constitutes the “ethical purpose” that is one of the two key elements of Trustworthy AI.
Drawing on AI4People’s review of other ethics frameworks that took a fundamental rights approach, the HLEG lists five overarching principles that must be observed to ensure human-centric AI.
  • The Principle of Beneficence / “Do Good” - AI systems should be deployed towards the collective good and the resolution of the world’s challenges.

  • The Principle of Non-maleficence/ “Do No Harm” - The report takes a very comprehensive view of harms when stating that AI systems should not harm humans. Included are physical, psychological, financial and social harms. This extends to the protection from “ideological polarization and algorithmic determinism.” Vulnerable groups should receive increased protections and there is also a requirement to pursue AI in an environmentally friendly way.

  • The Principle of Autonomy / “Preserve Human Agency” - “Human beings interacting with AI systems must keep full and effective self-determination over themselves.” This includes the right to decide whether to be subject to AI-based decision-making. Vulnerable groups might require government or non-government support to ensure self-determination, and adequate measures must be in place to guarantee the accountability of AI systems.

  • The Principle of Justice / “Be Fair” - This includes ensuring that AI is bias-free and that its benefits and harms are distributed evenly across society. Redress must be provided to those who suffer harms as a result of AI, and developers must be held to high standards of accountability.

  • The Principle of Explicability / “Operate transparently” - Technological and business models must both be sufficiently transparent to enable citizens’ trust. Technologies must be auditable and intelligible to humans of various levels of expertise, while business model transparency implies that users are informed of the intentions of platform implementers. The report also stresses the importance of informed consent, which is predicated on explicability, and calls for accountability measures to be put in place.

    There is no stated hierarchy among these principles and they may sometimes be in conflict with each other when viewed from different perspectives. No fixed mechanism exists for the resolution of these conflicts or for the ordering of the principles. Therefore, the guidelines suggest that developers should rely on ethical expertise when designing, developing and deploying AI systems.
    The report goes on to briefly discuss a few specific applications of AI that might raise special concerns, such as AI-based identification without consent, covert AI systems, mass citizen scoring, lethal autonomous weapon systems, and Artificial General Intelligence. The public’s input is solicited on these issues, as the HLEG could not reach consensus; however, the report expresses support for the European Parliament’s resolution calling for a legally binding instrument prohibiting lethal autonomous weapon systems.
    Moving from abstract ethical principles to the realisation of Trustworthy AI, the guidelines list ten equally important requirements derived from the previously discussed rights and principles:

    1. Accountability

    2. Data Governance

    3. Design for all

    4. Governance of AI Autonomy (Human oversight)

    5. Non-Discrimination

    6. Respect for (& Enhancement of) Human Autonomy

    7. Respect for Privacy

    8. Robustness

    9. Safety

    10. Transparency

    The guidelines propose both technical and non-technical measures to implement these requirements. There should be an ongoing assessment of the degree to which the requirements are fulfilled. Given that AI systems themselves continuously change, achieving Trustworthy AI is also a continuous process.
    Technical methods include:
  • Ethics and rule of law by design - this entails incorporating compliance with legal and ethical norms into the design process, such as Privacy-by-design and Security-by-design. This requires developers to identify the likely impacts of their systems from early on.

  • Architectures for Trustworthy AI - entails formulating rules or behaviour boundaries that ensure that the system acts in line with ethical principles.

  • Testing & validating - should begin as early as possible and be iterative to ensure the system behaves as intended. Testing should include all inputs to the system, including pre-trained models, and it should be undertaken by a diverse group of people using multiple metrics.

  • Traceability and auditability - decisions at every step of the design process should be documented to ensure transparency and explainability. Human-machine interfaces that assist in understanding the causality of algorithmic decision-making, as well as internal and external auditors can help ensure explainability and further trust.

  • Explanation (XAI research) - can help address the opacity of neural nets and make their operation more semantically transparent.

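To make the “multiple metrics” point about testing concrete, a test suite might compare a model’s decisions across demographic groups. The sketch below is purely illustrative and not from the guidelines: the group labels, toy predictions, the demographic-parity metric and the 0.25 tolerance are all assumptions chosen for the example.

```python
# Hedged sketch: one fairness check (demographic parity gap) that could
# form part of an iterative test suite. Group names, toy data and the
# tolerance are illustrative assumptions, not prescribed by the guidelines.

def positive_rate(predictions):
    """Fraction of positive (1) decisions in a list of 0/1 predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Toy predictions for two hypothetical groups.
preds = {
    "group_a": [1, 1, 0, 1, 0],  # positive rate 0.6
    "group_b": [1, 0, 0, 0, 1],  # positive rate 0.4
}

gap = demographic_parity_gap(preds)
assert gap <= 0.25, f"parity gap {gap:.2f} exceeds tolerance"
```

In practice such a check would be one of several metrics, re-run on every iteration as the guidelines recommend, rather than a single pass/fail gate.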
    Non-technical methods should also be evaluated on an ongoing basis; they include:
  • Regulation to ensure trust and guarantee redress when harms occur;

  • Standardization, which can act as an instrument of quality management;

  • Accountability governance through internal or external methods, such as appointing an ethics panel to provide oversight and advice;

  • Codes of conduct and the adaptation of internal KPIs to reflect a commitment to Trustworthy AI;

  • Education and awareness to foster an ethical mind-set, which requires properly trained ethicists in this space;

  • Stakeholder and social dialogue on the use and impact of AI to help review results and approaches and spread awareness of AIs benefits and risks;

  • Diversity and inclusive design teams to ensure the consideration of different perspectives.

    Finally, the guidelines offer a preliminary assessment list to help operationalise the implementation of the requirements outlined earlier. The list proposes questions that the development team should consult at every stage of the design process. In the draft version, public input is sought on the questions, and the final guidelines are expected to include several use cases illustrating how the assessment list can be applied. It is emphasised that assessment is an ongoing process that must be tailored to the particular application and context at hand.
    Assessment is envisaged as a circular process that is never conclusive. The model will include metrics associated with each question, as well as specific measures to ensure Trustworthy AI.
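One way to picture the circular, metric-backed assessment the report envisages is as a checklist data structure that is re-evaluated every development cycle. The sketch below is hypothetical: the question texts, metrics and evaluation hooks are invented for illustration, since the draft leaves concrete metrics to the final version.

```python
# Hypothetical sketch of a recurring assessment list. Questions and
# metrics are invented for illustration; the draft guidelines defer
# concrete metrics to the final version.

assessment = [
    {"requirement": "Transparency",
     "question": "Are design decisions documented and traceable?",
     "metric": "fraction of pipeline steps with logged decisions",
     "passed": None},
    {"requirement": "Robustness",
     "question": "Has the system been tested on adversarial inputs?",
     "metric": "accuracy drop under input perturbation",
     "passed": None},
]

def run_assessment(items, evaluate):
    """Re-evaluate every item; the process is circular, never conclusive."""
    for item in items:
        item["passed"] = evaluate(item)
    return [i["requirement"] for i in items if not i["passed"]]

# Each cycle, the whole list is re-run against fresh evidence; here the
# evaluator is a stand-in that marks everything as satisfied.
failing = run_assessment(assessment, evaluate=lambda item: True)
```

The point of the structure is that no question is ever retired: every requirement is re-checked against current evidence on each pass, matching the report’s view of assessment as ongoing.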
