Government algorithm charter ‘okay, we can do better’ – review


The first year of operation of the government’s algorithm charter has just been evaluated.
Photo: Supplied

Not all government agencies are sure whether the algorithms they increasingly use to make critical decisions are biased or not.

There is also little way for people to challenge decisions made about them by powerful public sector algorithms.

These are some of the findings of a review of the functioning of the government’s algorithm charter in its first year.

There is no action plan to pick up on the findings, and data ethics policy options are still at an early stage, according to a response released under the Official Information Act.

Algorithms are mathematical processes that solve problems. They are spreading across government and industry: they are used for hiring decisions, and by courts to assess the risk someone poses; Netflix uses algorithms to match your preferences; and doctors use them to diagnose disease or predict the next pandemic.

All in all, the new review scores the first year of the charter: ‘OK, it could be better’.

The 28 agencies that signed up — from Oranga Tamariki to Police and Education — told reviewers they like the charter’s intent, though in practice it’s a bit vague and they often don’t have the resources to follow it properly.


“Most agencies believe that there is a gap between the charter’s high-level principles and concrete practice to meet each of the commitments.”

‘Measuring bias’

For example, “measuring bias and ensuring appropriate human oversight of algorithms is not something all agencies have expertise in,” the review said.

Some struggled to "balance between different kinds of bias," it said.

“Most agencies have capacity shortages to critically evaluate solutions that can support bias management, transparency and effective human oversight.”

Another issue: “Right now there is very little opportunity for New Zealanders to seek individual redress for decisions made about them that have been informed by an algorithm,” the review reads.

In addition, compliance was very light-touch.

“Maybe a little more enforcement is needed to keep the social license.”

This seems contrary to the OECD principles on artificial intelligence, which say systems should be disclosed so that people can challenge them.

The local review suggests setting up a government-wide registry to let the public know which algorithms are doing what.

Some agencies don’t know themselves – they were “just at the beginning of this process, or have decided to focus on a select few examples”.

Others had made a full inventory; some had published their algorithms online, motivated by the charter.

The review emphasizes the need to gain public trust. “Public awareness of its use by government agencies is limited.”


Ironically, however, the review itself has not been widely published.

‘Daunting questions’

The charter seeks to get agencies to balance privacy and transparency, avoid bias and reflect Te Tiriti.

Algorithms are embedded in some of New Zealand’s largest government agencies that sit on some of the most sensitive data about people.

They "reveal insights that cannot be easily revealed by human analysis alone," according to government promotional material.

“These algorithms could be used to help the government better understand New Zealand and New Zealanders.”

The mysterious way “black box” algorithms work — so called because the scientists themselves don’t know how they make their decisions — has sparked fears about where medical artificial intelligence might eventually take us.

The technology is outpacing research into the dangers and benefits of public-sector algorithms, which is still in its infancy.

“Algorithmic systems in the public sector raise daunting questions about how governments can ensure transparency, accountability and control over their own systems,” according to a 2021 study.

‘Lack of clear oversight body’

New Zealand was one of the first countries to have an algorithm charter; the UK announced its own algorithm transparency standard this year.


Australia has been criticized for soft-pedaling algorithm abuse, although regulators are now taking steps there.

Even individual cities like New York have started experimenting with mandatory transparency.

But the local review notes that New Zealanders have little recourse to challenge algorithmic outcomes, while stopping short of saying what to do about it.

On the issue of bias, it recommends deploying more resources and a guide to evaluating software.

Agencies, it found, often flew blind.

"Agencies noted a lack of a clear oversight body," leaving them to oversee themselves.

It recommended setting up a supervisory authority.

Some agencies suggested non-binding audits.

‘Outright ban’

The review said it might be helpful to ban some algorithms outright, as Europe has done with AI systems used for "arbitrary surveillance".

The 28 signatories told reviewers they were operating too much in a vacuum.

“Most agencies have largely tackled their charter obligations alone and without knowing how other agencies were doing.”

Stats NZ was now evaluating its own "gaps and problems" with algorithms and would share them with other agencies, according to the Official Information Act response.

A briefing to the Minister of Statistics, David Clark, asked him to note that there is widespread support for the charter but insufficient resources to "achieve the desired shifts in the ethical use of algorithms".

