We Should Register And Robustly Evaluate AI Used Across Government Agencies

As the use of artificial intelligence (AI) grows, particularly with the introduction of “generative” tools like ChatGPT, experts are calling for transparency in their use in Aotearoa New Zealand government agencies, especially healthcare.

In the latest Public Health Communication Centre Briefing, Victoria University of Wellington Professor of Artificial Intelligence Ali Knott and co-authors examine the latest reports on AI and consider what is needed to ensure its safe and transparent use as it is increasingly employed across the public sector.

Professor Knott stresses the need to evaluate AI systems to find out how well they work. “We need to know how often a system is right. Does it work as well as a person performing the same task? Does it work equally well for all demographic groups? Is it fair and equitable?”

He says the public also has a right to know how well healthcare AI systems perform. “Performance information should provide an understanding of what level of human oversight is required for a given tool, and what expectations about human involvement there should be.”

Bias is a factor that must be considered when designing and evaluating AI in healthcare, according to fellow author, AI and technology ethicist Dr Karaitiana Taiuru. He says Māori experience significant inequities and bias in health compared with the non-Māori population. He recommends that all new AI teams be diverse and account for the introduction of bias at each stage of the AI lifecycle. "Addressing questions of performance and bias is crucial for public sector AI systems, where open government principles demand high transparency," he says.


The Briefing outlines two types of AI systems: "predictive models," which are trained to perform specific tasks, and "generative models," like ChatGPT, which can be applied to a range of tasks they were not specifically trained for, such as answering questions or translating. Professor Knott emphasises that research on evaluating generative AI systems is still evolving, so their use—particularly in healthcare—should be approached with caution.

Along with robust evaluations, Professor Knott says there should be a regularly updated register of all AI algorithms used by government agencies, including reporting on their performance and impact on equity. “As AI technologies continue to evolve and become more prevalent in healthcare and other public services, the time has come to implement the register of AI algorithms and ensure rigorous evaluations. This will foster transparency, build public trust, and ensure that AI systems are safe, reliable, and equitable,” he says.

© Scoop Media
