Public services, 2018

Algorithmic Impact Assessment - A Practical Framework for Public Agency Accountability

USA

Actors

AI Now Institute

Tags

accountability, trust, explainability, transparency, evaluation

Resources


Researchers at NYU’s AI Now Institute propose an Algorithmic Impact Assessment (AIA) framework, designed to help governments, public agencies, and other communities and stakeholders assess claims made about automated decision systems and determine whether their use is acceptable.

Key elements of the proposed framework are:
1. Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities;
2. Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time;
3. Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired;
4. Agencies should solicit public comments to clarify concerns and answer outstanding questions; and
5. Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct.

"An Algorithmic Impact Assessment, [..] gives both the agency and the public the opportunity to evaluate the adoption of an automated decision system before the agency has committed to its use. This allows the agency and the public to identify concerns that may need to be negotiated or otherwise addressed before a contract is signed."

Steps of an AIA:


  1. Establishing scope: defining "automated decision systems" (ADS) in a way that is appropriate to the case at hand -- A challenge may arise in drawing the appropriate boundaries and accurately defining an ADS. A reasonable example: “systems, tools, or statistical models used to measure or evaluate an individual criminal defendant’s risk of reoffending.”

  2. Public notice: alerting communities about systems that may affect their lives, through public disclosure of proposed and existing automated decision systems, including their purpose, reach, internal use policies, and potential impacts on communities or individuals. -- A challenge may arise from companies’ trade secrecy claims. Here AI Now recommends that "[a]t minimum, vendors should be contractually required by agencies to waive any proprietary or trade secrecy interest in information related to accountability, such as those surrounding testing, validation, and/or verification of system performance and disparate impact."

  3. Internal self-assessment: Agencies must evaluate how automated decision systems might impact communities and how they can resolve any issues. This process should be standardised to ensure comparability, and sufficiently detailed to allow external review and research. A non-technical summary should also be made available to enhance public trust and engagement. The evaluation should make clear how the overall impact of using any ADS will be a net benefit to the affected communities. Procedures for appealing decisions and for mitigating any undesirable effects should be articulated as well. This self-assessment builds agency expertise in building or commissioning ADSs, including testing for biases (a minimal sketch of one such check appears after this list). "The benefits of self assessments to public agencies go beyond algorithmic accountability: it encourages agencies to better manage their own technical systems and become leaders in the responsible integration of increasingly complex computational systems in governance." This practice would benefit vendors that prioritise fairness, accountability, and transparency, giving them a competitive advantage and contributing to more responsible practices overall. The resulting expertise and standardisation would also enhance transparency and accountability in responding to public records requests. Challenge: how to assess cultural and societal impacts, especially those affecting communities other than the dominant culture? How to prevent both allocative harms (groups denied access to resources or opportunities) and representational harms (reinforcing the subordination of a group)?

  4. Meaningful access: allow researchers and auditors rapid and ongoing access to review systems once they are in place. Sometimes pre-deployment reviews may be necessary as well. The type and level of access required will vary among agencies, systems, and communities, and will likely include access to training data and/or a record of past decisions. Research and auditing should be accountable to the public, including a public log of who has access. Affected communities should be free to nominate reviewers and researchers they trust to represent their interests. Findings should be openly available and subject to peer review. Challenge: Funding! Will external reviewers and researchers be expected to review without compensation? Will internal auditors be captured by the incentives of the organisation? Legislative solutions might include funding an independent, government-wide oversight body, like an inspector general’s office, to support research, access, and community engagement. Community institutional review boards could be supported, and funding set aside to compensate external auditors.
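To make the bias-testing element of step 3 more concrete, the following is a minimal, hypothetical sketch of one check an agency self-assessment or an external audit might include: comparing selection rates across groups against the four-fifths (80%) rule of thumb. The report does not prescribe this particular test; the data layout, column meanings, and threshold here are illustrative assumptions.

```python
# Hypothetical sketch: a four-fifths (80%) rule check of selection rates across
# groups, as one narrow piece of an ADS self-assessment or audit.
# Assumes a record of past decisions as (group, outcome) pairs, where
# outcome 1 = favourable decision and 0 = unfavourable; all names are illustrative.
from collections import defaultdict

def selection_rates(decisions):
    """Compute the share of favourable outcomes per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
    for group, outcome in decisions:
        counts[group][0] += outcome
        counts[group][1] += 1
    return {g: fav / total for g, (fav, total) in counts.items() if total > 0}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the classic four-fifths / 80% rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": r, "ratio": r / best, "flagged": r / best < threshold}
            for g, r in rates.items()}

# Example with made-up numbers: group B's rate (0.40) is two-thirds of
# group A's (0.60), so B is flagged under the 0.8 threshold.
sample = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 40 + [("B", 0)] * 60
for group, result in four_fifths_check(sample).items():
    print(group, result)
```

A check of this kind captures only one narrow form of allocative harm; the cultural, societal, and representational harms the framework highlights do not reduce to a single statistic.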
