Our world is becoming more algorithm-driven with each passing day, and not for nothing: there is ample evidence that algorithms can make sound decisions at great speed, consistently outperforming fallible humans at many tasks.

But what about when they don’t? Who is to blame? How do you hold an algorithm accountable when you may not even know why it made the decision it did?

Accountability for the Black Box?

Algorithms, like people, should be accountable for the mistakes they make.

But we don’t always understand why an algorithm behaves the way it does. What kind of reasoning informs its decisions? How do the various inputs affect the decisions it produces?
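One common way to probe that last question, when all we have is query access to a model, is permutation importance: shuffle one input at a time and measure how much the model’s accuracy degrades. The sketch below is a minimal illustration; the model, data, and feature names are synthetic stand-ins, not any real deployed system.

```python
# A minimal permutation-importance sketch: probe a black-box model by
# shuffling one input at a time. Model, data, and feature names are
# synthetic stand-ins for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # three input features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # the third feature is pure noise
model = LogisticRegression().fit(X, y)

baseline = model.score(X, y)
for j, name in enumerate(["income", "zip_code", "noise"]):  # hypothetical names
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break this feature's link to the output
    drop = baseline - model.score(X_perm, y)
    print(f"{name}: accuracy drop {drop:.3f}")    # bigger drop = more influence
```

Features whose shuffling barely moves accuracy are candidates for removal; features whose shuffling collapses accuracy are the ones affected parties would want explained.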

New York City Mayor Bill de Blasio is organizing a committee to evaluate and regulate the “Automated Decision Systems” already in place in New York City’s administrative systems. These include systems that direct police activity, student–school matching programs, and other AI-driven resource-allocation systems. There is still no accepted procedure for how public agencies using these automated decision systems should hold them accountable, or for how citizens can challenge the decisions these systems make.

A June 2017 article from the Harvard Political Review addressed the black box problem:

To hold algorithms accountable, we must test them and have mechanisms to account for possible mistakes. Hemant Taneja of TechCrunch argued that tech companies “must proactively build algorithmic accountability into their systems, faithfully and transparently act as their own watchdogs or risk eventual onerous regulation.” However, we cannot be this optimistic about tech companies. It is unlikely that corporations will go out of their way to create accountability because there is no incentive to do so. Instead, regulators must push for procedural regularity, or the knowledge that each person will have the same algorithm applied to them. The procedure must not disadvantage any individual person specifically. This baseline draws on the Fourteenth Amendment principle of due process. Due process helps to elucidate why many argue that algorithms should be explainable to the parties affected. Guidelines should be set up to hold algorithms accountable based on auditability, fairness, accuracy, and explainability.
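Those criteria can be made surprisingly concrete. As a minimal sketch of the auditability and fairness pieces, assuming an auditor can observe a system’s decisions alongside group membership (both hypothetical here), one standard check is the “four-fifths rule” for disparate impact:

```python
# A minimal disparate-impact audit: compare favorable-decision rates across
# groups. Decisions and group labels are hypothetical illustrations.
import numpy as np

decisions = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1])  # 1 = favorable outcome
group = np.array(["a"] * 5 + ["b"] * 5)               # protected-group labels

rates = {g: decisions[group == g].mean() for g in np.unique(group)}
ratio = min(rates.values()) / max(rates.values())
print(rates)
print(f"disparate-impact ratio: {ratio:.2f}")  # below 0.8 is a common red flag
```

A ratio below 0.8 does not prove discrimination, but it is the kind of reproducible signal an external auditor can compute, publish, and contest.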

Algorithmic Impact Assessments

One of the suggestions put forward is to institute Algorithmic Impact Assessments (AIAs). In a Medium post, the AI Now Institute expounds:

AIAs strive to achieve four initial goals:

  1. Respect the public’s right to know which systems impact their lives and how they do so by publicly listing and describing algorithmic systems used to make significant decisions affecting identifiable individuals or groups, including their purpose, reach, and potential public impact;
  2. Ensure greater accountability of algorithmic systems by providing a meaningful and ongoing opportunity for external researchers to review, audit, and assess these systems using methods that allow them to identify and detect problems;
  3. Increase public agencies’ internal expertise and capacity to evaluate the systems they procure, so that they can anticipate issues that might raise concerns, such as disparate impacts or due process violations; and
  4. Ensure that the public has a meaningful opportunity to respond to and, if necessary, dispute an agency’s approach to algorithmic accountability. Instilling public trust in government agencies is crucial — if the AIA doesn’t adequately address public concerns, then the agency must be challenged to do better.

Read the full post here.
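As a rough illustration of the first AIA goal, a public register of algorithmic systems could start as nothing more than a structured, machine-readable entry per system. Everything in the sketch below, from the field names to the example system, is hypothetical:

```python
# A hypothetical machine-readable register entry for an automated decision
# system, publishable as open data. All fields and values are illustrative.
from dataclasses import dataclass, asdict
import json

@dataclass
class SystemRegisterEntry:
    name: str
    agency: str
    purpose: str
    population_affected: str
    potential_impact: str

entry = SystemRegisterEntry(
    name="School-match ranking system",
    agency="Department of Education",
    purpose="Rank student applications against available school seats",
    population_affected="Public school applicants citywide",
    potential_impact="Placement outcomes; possible disparate impact by district",
)
print(json.dumps(asdict(entry), indent=2))
```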