Many industries have their own version of the same dilemma: When driverless cars kill their first pedestrian, who should be blamed? When an automated doctor fouls up your diagnosis, who do we point the finger at?

And when innocent people are sent to jail because some algorithm decides a person is a probable threat to society, who is responsible for this travesty of justice?

Is it the judge who incorporated the algorithm’s assessment into his ruling? The institution that allowed the algorithm into its courtroom? Or the creator of the algorithm itself? There must be accountability somewhere.

Artificial Intelligence (AI) and techniques like machine learning are of great utility to almost every aspect of human society. The main problem with these technologies is that they operate as black boxes: the basis for their decisions can’t be explained.
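To make that point concrete, here is a minimal, hypothetical sketch in Python using scikit-learn and invented synthetic data; it does not reproduce COMPAS or any real system. A risk model returns a numeric score for a defendant, but nothing in that output tells a judge why the score came out the way it did.

```python
# Hypothetical illustration of the "black box" problem: a risk model trained
# on invented synthetic data produces a score, but offers no human-readable
# reason for any individual prediction. Nothing here reproduces COMPAS.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented features: age, number of prior offenses, months employed.
X = rng.integers(low=0, high=50, size=(500, 3)).astype(float)
# Invented labels: 1 = re-offended within two years, 0 = did not.
y = rng.integers(low=0, high=2, size=500)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# For one hypothetical defendant, the model returns only a probability.
defendant = np.array([[23.0, 2.0, 6.0]])
risk_score = model.predict_proba(defendant)[0, 1]
print(f"Predicted recidivism risk: {risk_score:.2f}")

# The score is an aggregate over hundreds of decision trees; there is no
# single rule a judge could read to see why this defendant scored as they did.
print(f"Internal trees contributing to the score: {len(model.estimators_)}")
```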

A recent and troubling study by Dartmouth College professor Hany Farid and researcher Julia Dressel concluded that COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm-based system now used in courts to predict recidivism, was both inaccurate and biased.

There is a notion that because AI works with quantifiable numbers, it is inherently more accurate and fair. The COMPAS findings suggest otherwise, and there is no denying the need for human oversight of AI development and deployment. As author and professor John Kuprenas writes: “You don’t fully understand something until you quantify it. But you understand nothing at all if all you do is quantify.”

However, there is no putting the AI genie back in its bottle. It is much too useful for that. A better option is the use of Algorithmic Impact Assessments (AIAs), which demand that governments and companies design their AI systems with an eye toward fairness and accountability.

From Smithsonian Magazine:

“There is this emphasis on designers understanding a system. But it’s also about the people administering and implementing the system,” says Jason Schultz, a professor of law at New York University who works with the AI Now Institute on legal and policy issues. “That’s where the rubber meets the road in accountability. A government agency using AI has the most responsibility and they need to understand it, too. If you can’t understand the technology, you shouldn’t be able to use it.”

To that end, AI Now is promoting the use of “algorithmic impact assessments,” which would require public agencies to disclose the systems they’re using, and allow outside researchers to analyze them for potential problems. When it comes to police departments, some legal experts think it’s also important for them to clearly spell out how they’re using technology and be willing to share that with the local community.

“If these systems are designed from the standpoint of accountability, fairness and due process, the person implementing the system has to understand they have a responsibility,” Schultz says. “And when we design how we’re going to implement these, one of the first questions is ‘Where does this go in the police manual?’ If you’re not going to have this somewhere in the police manual, let’s take a step back, people.”

AIAs require that algorithms affecting human well-being be publicly disclosed and open to inspection by outside researchers and other concerned parties. These algorithms and their use should also be the subject of a public information campaign targeting the communities most affected by them.

The AI Now Institute, based in New York City, is among the leading groups studying the effects of algorithm use in the public sphere. It advocates incentives that would reward ethical software developers and government employees who can evaluate risky systems.