Nervous Australia considers restrictions on ‘high-risk’ AI

The Australian government will pursue stronger regulation for artificial intelligence (AI) systems in response to public concern over emerging technologies like ChatGPT.

The country’s Industry and Science Minister, Ed Husic, released the government’s interim response to a consultation on AI safety and responsibility on Wednesday (Jan. 17).

Analysts have predicted that adopting AI could boost Australia’s GDP by up to $600 billion annually, but surveys show only one-third of Australians believe adequate safeguards currently exist.

The prosperous nation is one of the most nervous in the world about the rollout of artificial intelligence. Results of the 2023 ‘Global Advisor’ survey from pollsters Ipsos found Australians more wary of the technology than any other populace. Sixty-nine percent of them reported being concerned about AI.

Husic said: “Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled.

“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.”

The paper defines “high-risk” AI as systems used to assess recidivism risk, screen job applicants or operate self-driving vehicles. Rapidly evolving “frontier AI” like ChatGPT is also singled out over its ability to generate content at scale.

What will Australia do to tackle AI growth?

While consultations on the topic continue, the Labor government has pledged to take three steps immediately:

  1. Work with industry to develop a voluntary AI Safety Standard;
  2. Work with industry to develop options for voluntary labeling and watermarking of AI-generated materials;
  3. Establish an expert advisory group to support the development of options for mandatory guardrails.

Boosting transparency is another key focus of the proposals. Public reporting on the data used to train AI models is one idea aimed at increasing public understanding of large language models (LLMs) like ChatGPT. The government will also work with industry on voluntary watermarking and labeling of AI-generated content. This adds to existing government work on harmful AI material and AI use in schools.

Submissions raised legal concerns about using AI for deepfakes or healthcare privacy breaches. Reviews are underway on whether training generative AI constitutes copyright infringement. Citing disquiet from content creators, the paper highlights how models like DALL-E 2 are powered by scraping images and text without permission.

“We want safe and responsible thinking baked in early as AI is designed, developed and deployed,” said Minister Husic.

Featured Image: DALL-E

Sam Shedden

Managing Editor

Sam Shedden is an experienced journalist and editor with over a decade of experience in online news.

A seasoned technology writer and content strategist, he has contributed to many UK regional and national publications including The Scotsman, inews.co.uk, nationalworld.com, Edinburgh Evening News, The Daily Record and more.

Sam has written and edited content for audiences whose interests include media, technology, AI, start-ups and innovation. He has also produced and set up email newsletters on numerous specialist topics in previous roles, and his work on newsletters saw him nominated as Newsletter Hero Of The Year at the UK’s Publisher Newsletter Awards 2023.

He has worked in roles focused on growing reader revenue and loyalty at National World plc, one of the UK’s leading news publishers of quality, profitable news sites. He has given industry talks and presentations internationally, sharing his experience in growing digital audiences.

Now a Managing Editor at Readwrite.com, Sam is involved in all aspects of the site’s news operation including commissioning, fact-checking, editing and content planning.
