Automated Decision Systems, 2019

Directive on the Use of Machine Learning for Decision-Making

Canada

Actors

Government of Canada

Tags

Accountability, Bias, Transparency, Fairness

Resources


The Directive sets out rules for the automation of administrative decision processes in federal government institutions. It will apply to any Automated Decision System (ADS) developed or procured after 1 April 2020. Once in effect on 4 February 2019, it will be reviewed every six months.
The Directive is expected to ensure that government departments make decisions in a way that is data-driven, responsible, and in accordance with the requirements of procedural fairness and due process. Negative outcomes are to be reduced by assessing the impacts of algorithms on administrative decisions, and where possible, data about the use of ADSs are to be made publicly available.
The person responsible for the relevant government program using an ADS is required to undertake an Algorithmic Impact Assessment (AIA). This takes the form of a questionnaire, which is meant to help institutions understand and reduce the risks of ADSs. It uses a four-level classification to rank the impacts of the decision on individuals' fundamental rights, the health and well-being of individuals and communities, the economic interests of those affected, and the ongoing sustainability of the ecosystem. Depending on which classification a system falls under, different peer review, notice, human-in-the-loop, explanation, testing, contingency planning and approval requirements must be adhered to (see the sketch after this paragraph). However, irrespective of the classification level, the same training and monitoring requirements apply. These state that before any ADS goes into production, processes must be in place to ensure that the training data is not biased and that the data used by the ADS is routinely tested to guarantee that it remains relevant, accurate and up to date. In addition, each system's use must be monitored for unintended outcomes and for compliance with legislation and the Directive.
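As a minimal illustrative sketch only, the structure described above can be thought of as a lookup from an assessed impact level to a set of obligations. The level-specific requirement sets below are hypothetical placeholders, not the Directive's actual rules; only the idea that baseline requirements apply at every level while additional requirements scale with impact is taken from the summary above.

    # Illustrative sketch: impact levels and requirement sets are hypothetical
    # placeholders, not the Directive's actual appendix of requirements.

    # Requirements that apply to every ADS regardless of impact level.
    BASELINE_REQUIREMENTS = {
        "bias testing of training data",
        "routine data quality checks",
        "monitoring for unintended outcomes",
        "compliance monitoring",
    }

    # Hypothetical mapping from AIA impact level (1 = lowest, 4 = highest)
    # to additional, level-specific obligations.
    LEVEL_REQUIREMENTS = {
        1: set(),
        2: {"plain-language notice"},
        3: {"plain-language notice", "peer review", "human-in-the-loop"},
        4: {"plain-language notice", "peer review", "human-in-the-loop",
            "contingency planning", "senior approval"},
    }

    def requirements_for(level: int) -> set:
        """Return baseline plus level-specific requirements for an AIA level."""
        if level not in LEVEL_REQUIREMENTS:
            raise ValueError("AIA impact level must be between 1 and 4")
        return BASELINE_REQUIREMENTS | LEVEL_REQUIREMENTS[level]

    # Example: a level-3 system would carry the baseline duties plus notice,
    # peer review, and human-in-the-loop obligations.
    print(sorted(requirements_for(3)))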
Whenever possible, open source solutions should be used. If a proprietary license is used, the Government of Canada must retain the right to access and test the system, including the right to have third parties audit the ADS. Unless overriding reasons, such as national security, make it impossible, the ADS's source code should be made publicly available.
