Automated Decision Systems 1, 2016

Principles for Accountable Algorithms and a Social Impact Statement for Algorithms

Global

Actors

FAT/ML

Tags

Accountability, Impact assessment, Fairness, Explainability, Auditing

Resources


FAT/ML is an annual conference launched in 2014 to address issues of fairness, accountability, and transparency in machine learning. The principles were developed as part of the Dagstuhl seminar "Data, Responsibly".
The document has a clear and direct premise that the editors believe should serve as the foundational ethical ideal for designing and implementing algorithmic systems in “publicly accountable ways.” This premise is the following:
“Algorithms and the data that drive them are designed and created by people -- There is always a human ultimately responsible for decisions made or informed by an algorithm. "The algorithm did it" is not an acceptable excuse if algorithmic systems make mistakes or have undesired consequences, including from machine-learning processes.”
From this declaration of ultimate human responsibility, five other, equally important principles are derived, the implementation of which could help create accountability in the development of algorithmic systems: 1) responsibility; 2) explainability; 3) accuracy; 4) auditability; 5) fairness.
Adhering to these principles should help mitigate negative social impacts and establish an "obligation to report, explain, or justify" algorithmic decision-making, both of which are declared constitutive parts of accountability in the design and implementation of algorithmic systems.
The definitions of these five principles are intentionally very concise, each no longer than a single sentence. The idea behind this was "to allow these principles to be broadly applicable." Their implementation, however, should always be case- and context-specific.
The document includes a call to action targeting algorithm creators, inviting them to draft a "Social Impact Statement" based on these five principles. The Statement should be "revisited and reassessed" at least three times: during the design stage, the pre-launch stage, and the post-launch stage. It should answer 27 questions stemming from the five principles.
