AI in general, 2018

Toronto Declaration

Global

Actors

Amnesty International, Access Now

Tags

Equality, Fairness, Bias, Transparency, Accountability, Non-discrimination

Resources


The Declaration was published on 16 May 2018 by Amnesty International and Access Now, and launched at RightsCon 2018 in Toronto, Canada. Since then, Human Rights Watch and the Wikimedia Foundation have also endorsed it.
The Declaration underlines the importance of relying on the framework of existing human rights laws and standards to guard individuals and communities against intentional or inadvertent discrimination arising from the use of machine learning systems. It calls on states and private entities to respect human rights at all times and to adopt binding measures to uphold and protect these rights. Human rights law is universally binding and actionable, which makes it well suited to govern borderless technologies like AI and to ensure accountability. Importantly, a human rights framework includes mechanisms for holding actors accountable and for obtaining redress when a person’s rights have been violated.
Public and private entities must work to prevent and mitigate the risk of discrimination from the design phase of an application through to its deployment. Inclusion and diversity in terms of race, culture, gender, religion and socio-economic background are essential to ensure non-discrimination and to eliminate biases.
States have a particular obligation to uphold and protect human rights through binding legal instruments. They must maintain up-to-date measures to guard against discrimination and other rights-harms arising from machine learning, and must provide meaningful redress for such harms.
Specifically, state actors must take steps with regard to their own use of machine learning:
  • To identify risks of discrimination and other rights-harms that might arise from machine learning systems, throughout the technology’s entire life cycle. States must conduct regular impact assessments and prepare measures to mitigate the risks such assessments identify. Systems must be subjected to regular independent review and testing for bias (see the illustrative sketch after this list), and any known limitations of systems must be disclosed.

  • To ensure and require transparency and accountability, allowing for public scrutiny of the impact of machine learning systems. States must publicly disclose their use of ML systems, explain the mechanisms of decision-making, and allow independent analysis of their algorithms.

  • To enforce oversight by including diverse perspectives in the design, implementation and review of ML systems, and by providing human rights and data analysis training to those involved in their procurement, development and review. ML-supported decisions must meet international norms on due process.

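The Declaration does not prescribe any particular method for bias testing. Purely as an illustrative sketch of what one such check might look like, the Python snippet below computes per-group selection rates and a disparate-impact ratio over a system’s binary decisions; the group labels, sample data and threshold are hypothetical assumptions for illustration, not part of the Declaration.

# Illustrative sketch only: a simple disparate-impact check on the
# outcomes of a binary decision system, in the spirit of the
# Declaration's call for regular, independent bias testing.
# Group labels, data and the threshold below are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, outcome) pairs, outcome in {0, 1}.
    Returns the fraction of positive outcomes per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate.
    Ratios below ~0.8 (the 'four-fifths rule' from US
    employment-discrimination practice) are often treated as a
    signal that closer review is needed."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (protected-group label, system decision)
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(sample))         # {'A': 0.667, 'B': 0.333} (approx.)
print(disparate_impact_ratio(sample))  # 0.5 -> flags a disparity

A single summary statistic like this is only a starting point; the regular, independent reviews the Declaration calls for would also need to examine data provenance, error rates across groups and a system’s real-world impact.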
When procuring ML systems from private entities, state authorities must maintain oversight and require third-party audits as part of human rights due diligence.
Finally, states should craft regulations and standards to prevent private entities from using ML in ways that are discriminatory or cause other rights-harms.
The Declaration also articulates the responsibilities of private sector actors. First and foremost, they should follow a human rights due diligence framework, making sure that they do not cause or contribute to any human rights violations. Like state authorities, private actors must work to identify the risks associated with their use of an ML system, mitigate those risks and be transparent about their efforts. Where the risks of discrimination or other human rights violations cannot be mitigated, they should refrain from deploying the technology.
The provision of effective remedies against violations is a fundamental element of the human rights framework and applies to both state and private sector actors. Particular care should be taken when ML-based systems are used in the justice sector, as this may affect individuals’ ability to seek an effective remedy. Clear lines of accountability must be defined, establishing who is legally responsible for decisions made with the help of ML systems.
