AI in general, 2018

Universal Guidelines for Artificial Intelligence

Global

Actors

The Public Voice, Electronic Privacy Information Center (EPIC)

Tags

Accountability, Transparency, Bias, Fairness, Data quality, Security, Human control

Resources


The guidelines were developed by The Public Voice, a coalition established by the Electronic Privacy Information Center (EPIC), a Washington DC-based research centre. The guidelines are intended to inform the design and use of AI systems, with a focus on protecting human rights. They were presented at the 2018 International Conference of Data Protection and Privacy Commissioners in Brussels.
The document puts forward 12 principles for governments and private companies, which should be incorporated into ethical standards and legal frameworks, and built into AI systems. The principles build on several existing ethical frameworks, laws, and conventions, such as the Asilomar Principles, the GDPR, the Toronto Declaration, and the framework of fundamental human rights. The guidelines lay down the following rights of individuals and obligations of institutions:

  1. “Right to Transparency.  All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome.

  2. Right to Human Determination. All individuals have the right to a final determination made by a person.

  3. Identification Obligation. The institution responsible for an AI system must be made known to the public.

  4. Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.

  5. Assessment and Accountability Obligation. An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system.

  6. Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions.

  7. Data Quality Obligation. Institutions must establish data provenance, and assure quality and relevance for the data input into algorithms.

  8. Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls.

  9. Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats.

  10. Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system.

  11. Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents.

  12. Termination Obligation. An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible.”
