New York City wanted to ensure that the AI and algorithms used to manage its programs would be unbiased. But the task force the city formed to carry out this directive isn’t sure it can achieve its objective. Task force members say the biggest stumbling blocks are a lack of validated information and the sheer complexity of the technology behind the algorithms.

Shirin Ghaffary filed this report for Recode on NYC’s effort to tame AI for the common good:

Unlike a decision made by a human being, there’s often no way to appeal an incorrect decision made by an algorithm — because of the often inscrutable “black box” of logic behind the AI’s analysis. And it’s not only agencies in the legal and criminal justice system that use these types of tools. According to a report released last year by the research group AI Now Institute, social welfare agencies use algorithms to decide which families should receive a follow-up visit from a social worker. Housing agencies use them to prioritize who should receive temporary or permanent shelter. Health agencies use software from vendors like IBM to assess who should be eligible for Medicaid.

The New York City Council passed a law in 2017 that would create a special task force to investigate city agencies’ use of algorithms and deliver a report with recommendations. Many applauded the move as a rare example of politicians getting ahead of technology’s impact on society rather than scrambling to grapple with its consequences.

But now, a year and a half later, several members of the task force and outside experts such as representatives from AI Now and the Data & Society Research Institute say they’re worried the group won’t be able to see its mission through. They cite a lack of information about how exactly the city uses these algorithms, many of which are still shrouded in secrecy.

These experts also point to the lack of public input from people whose day-to-day lives stand to be harmed by the use of algorithms. The concern about the once-promising project is a sign that even when a government puts effort into regulating new technology like AI, implementation can prove too complicated to handle. A further concern is that many of the algorithms are owned and run by private companies, including Microsoft, Amazon, and IBM, and exactly how they’re built can be protected as trade secrets. Without the information it needs, the task force is stalled in its analysis.