Most Australian government agencies aren’t transparent about how they use AI

  • Written by José-Miguel Bello y Villarino, Senior Research Fellow, Sydney Law School, University of Sydney

A year ago, the Commonwealth government established a policy requiring most federal agencies to publish “AI transparency statements” on their websites by February 2025[1]. These statements were meant to explain how agencies use artificial intelligence (AI), in what domains and with what safeguards.

The stated goal[2] was to build public trust in government use of AI – without resorting to legislation. Six months after the deadline, early results from our research (to be published in full later this year) suggest this policy is not working.

We looked at 224 agencies and found only 29 had easily identifiable AI transparency statements. A deeper search found 101 links to statements.

That adds up to a compliance rate of around 45%. The caveats matter: for some agencies (such as defence, intelligence and corporate agencies) publishing a statement is recommended rather than required, and it is possible some agencies share a single statement. Still, these tentative early findings raise serious questions about the effectiveness of Australia’s “soft-touch” approach to AI governance in the public sector.
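For readers who want to check the headline figure, the arithmetic is straightforward. The short sketch below is illustrative only: it assumes the 101 located statements are each counted once against all 224 agencies surveyed, which the caveat about shared statements means may slightly overstate coverage.

```python
# Illustrative arithmetic only, assuming the 101 located statements are
# each counted once against all 224 agencies surveyed.
agencies_surveyed = 224
easily_identifiable = 29
found_after_deeper_search = 101

print(f"Easily identifiable: {easily_identifiable / agencies_surveyed:.1%}")    # ~12.9%
print(f"Found at all:        {found_after_deeper_search / agencies_surveyed:.1%}")  # ~45.1%
```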

Why AI transparency matters

Public trust in AI in Australia is already low[3]. The Commonwealth’s reluctance to legislate rules and safeguards for the use of automated decision making in the public sector – identified as a shortcoming by the Robodebt royal commission[4] – makes transparency all the more critical.

The public expects[5] government to be an exemplar of responsible AI use. Yet the very policy designed to ensure transparency seems to be ignored by many agencies.

With the government also signalling a reluctance[6] to pass economy-wide AI rules, good practice in government could also encourage action from a disoriented private sector. A recent study[7] found 78% of corporations are “aware” of responsible AI practices, but only 29% have actually “implemented” them.

Transparency statements

The transparency statement requirement is the key binding obligation under the Digital Transformation Agency’s policy[8] for the responsible use of AI in government.

Agencies must also appoint an “accountable [AI] official” who is meant to be responsible for AI use. The transparency statements are supposed to be clear, consistent, and easy to find – ideally linked from the agency’s homepage.

In our research, conducted in collaboration with the Office of the Australian Information Commissioner[9], we sought to identify these statements using a combination of automated scanning of agency websites, targeted Google searches, and manual inspection of the list of federal entities provided by the information commissioner. This covered both agencies and departments strictly bound by the policy and those invited to comply voluntarily.
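The automated part of that search can be pictured with a minimal sketch. The snippet below is not our actual research pipeline; it simply illustrates the kind of check involved, using hypothetical placeholder URLs and matching on the phrase “AI transparency statement” in each homepage’s HTML.

```python
# Hypothetical sketch only, not the pipeline used in this study:
# fetch each agency homepage and look for any mention of an AI
# transparency statement in the page source.
import re
import urllib.request

agency_homepages = [
    "https://www.example-agency-one.gov.au",  # placeholder URLs, not real agencies
    "https://www.example-agency-two.gov.au",
]

pattern = re.compile(r"AI\s+transparency\s+statement", re.IGNORECASE)

for url in agency_homepages:
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            html = response.read().decode("utf-8", errors="ignore")
    except Exception as exc:
        print(f"{url}: could not fetch ({exc})")
        continue
    status = "possible statement found" if pattern.search(html) else "no obvious statement link"
    print(f"{url}: {status}")
```

In practice, a statement buried deep in a subdomain will not show up in a homepage check like this, which is why targeted searches and manual inspection were also needed.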

But we found only a few statements were accessible from the agency’s landing page. Many were buried deep in subdomains or required complex manual searching. Among agencies for which publishing a statement was recommended, rather than required, we struggled to find any.

More concerning, there were many agencies for which we could not find a statement even where one was required. This may just be a technical failure, but given the effort we put in, it suggests a policy failure.

A toothless requirement

The transparency statement requirement[10] is binding in theory but toothless in practice. There are no penalties for agencies that fail to comply. There is also no open central register to track who has or has not published a statement.

The result is a fragmented, inconsistent landscape that undermines the very trust the policy was meant to build. And the public has no way to understand – or challenge – how AI is being used in decisions that affect their lives.

How other countries do it

In the United Kingdom, the government established a mandatory AI register. But as the Guardian[11] reported in late 2024, many departments failed to list their AI use, despite the legal requirement to do so.

The situation appears to have improved slightly this year, but many high-risk AI systems identified by UK civil society groups[12] are still not listed on the UK government’s own register.

The United States has taken a firmer stance. Despite anti-regulation rhetoric from the White House, the government has so far maintained its binding commitments[13] to AI transparency and mitigation of risk.

Federal agencies are required to assess and publicly register their AI systems. If they fail to do so, the rules say they must stop using them.

Towards responsible use of AI

In the next phase of our research, we will analyse the content of the transparency statements we did find.

Are they meaningful? Do they disclose risks, safeguards and governance structures? Or are they vague and perfunctory? Early indications suggest wide variation in quality.

If governments are serious about responsible AI, they must enforce their own policies. If determined university researchers cannot easily find the statements – even assuming they are somewhere deep on the website – that cannot be called transparency.

The authors wish to thank Shuxuan (Annie) Luo for her contribution to this research.

References

  1. ^ by February 2025 (www.digital.gov.au)
  2. ^ stated goal (www.digital.gov.au)
  3. ^ is already low (mbs.edu)
  4. ^ shortcoming by the Robodebt royal commission (robodebt.royalcommission.gov.au)
  5. ^ expects (www.digital.gov.au)
  6. ^ reluctance (ministers.treasury.gov.au)
  7. ^ recent study (www.fifthquadrant.com.au)
  8. ^ policy (www.digital.gov.au)
  9. ^ Office of the Australian Information Commissioner (www.oaic.gov.au)
  10. ^ transparency statement requirement (www.digital.gov.au)
  11. ^ the Guardian (www.theguardian.com)
  12. ^ identified by UK civil society groups (publiclawproject.org.uk)
  13. ^ binding commitments (www.whitehouse.gov)

Read more https://theconversation.com/most-australian-government-agencies-arent-transparent-about-how-they-use-ai-266768
