
Can We Trust AI to Serve the Public?
Article summary
- Despite AI’s potential to revolutionise public services in Australia, public distrust persists due to past failures like Robodebt, a lack of transparency, and concerns over biased or unexplainable systems.
- Experts warn that AI without clear safeguards and human oversight risks undermining democratic legitimacy, especially when used in sensitive areas like welfare, healthcare and immigration.
- Building trust requires the Australian Government to invest in transparent, inclusive, and locally grounded AI development.
By Alexi Freeman
Would you hand over your data to a government chatbot without knowing who built it or how it was trained?
Most Australians wouldn’t buy a car without popping the bonnet – so why trust AI in public services if we can’t see what’s driving it or where it’s heading?
Artificial Intelligence has rapidly emerged as a disruptive force in our technological ecosystem – potentially the most disruptive since the advent of fire.
While computer science pioneers like the ‘Godfather of AI,’ Geoffrey Hinton, warn that authoritarian regimes are using the technology to erode civil liberties and spread disinformation, democracies like Australia are proceeding more cautiously.
But caution alone won’t close the trust gap.
Even if the latest AI developments – AI agents or ‘Agentic AI’ (virtual co-workers, automated assessors, and planning assistants) – are technically ready, they won’t pass the pub test without transparency, fairness, and robust human oversight.
Water-cooler conversations have quickly shifted from whether governments can use AI to whether they should.
Regardless, they will – and the sooner the Australian Government invests meaningfully in safeguarding the rollout, the better off we’ll be.
The Trust Gap
From social services to transport modelling, AI is being explored across the Australian Public Service for its potential to revolutionise productivity and responsiveness.
AI promises radically faster and fairer delivery – reducing wait times for aged care and disability support from years to days – a lifeline for our most vulnerable.
But concerns over data ethics, opaque decision-making, and algorithmic bias are fueling public and policymaker hesitation.
In 2024, research from the University of Queensland found that 80% of Australians want catastrophic AI risks to be globally prioritised, on par with pandemics and nuclear war.
A follow-up study by KPMG found 3 in 5 respondents were “ambivalent or unwilling to trust AI” on safety, security and fairness.
The trust gap is less a technology problem than a legitimacy challenge.
In democracies, legitimacy is built through transparency, accountability, and human-centred design – areas successive governments have struggled with – particularly when it comes to emerging technologies.
Ironically, with the right safeguards, AI could greatly enhance government legitimacy – improving service delivery, accelerating decision-making, and promoting equity.
But unless the public feels in control of how their data is used, trust may remain elusive.
Still, trust is inching forward.
A 2023 government report found 61% of Australians trust public services and believe they will adapt to meet future needs.
Whether that future includes sufficient investment in safety and inclusivity remains an open (AI) question.
Robo-gate: Once Bitten, Twice Shy
The 2022 Robodebt Royal Commission revealed that a flawed debt recovery algorithm had impacted over 500,000 Australians, causing emotional distress, financial hardship and even loss of life.
While not an AI system, Robodebt’s legacy casts a long shadow over the public psyche – a powerful reminder of what happens when automation is deployed without responsible governance, oversight and human empathy.
Trust is hard-won, easily lost, and Robodebt’s psychological wounds are still raw.
The Australian Government urgently needs to roll up its digital sleeves and bring its AI protocols up to speed with other Western democracies.
What’s Driving the Hesitation
“AI is unique as a technology in that we no longer understand exactly how it does what it does,” says Melbourne-based AI scientist Matt Kuperholz.
“If you’ve created a tool so complex you don’t understand it, it’s difficult to control – and difficult to trust.”
Government hesitation isn’t merely political – it’s structural.
The ethical, legal, and technical frameworks for responsible AI are still playing catch-up to its ever-expanding list of capabilities.
In the meantime, public servants are expected to deploy tools they cannot reasonably comprehend, let alone troubleshoot.
As AI researcher Eliezer Yudkowsky warns: “The greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
Explainability becomes critical when AI is used to determine welfare eligibility, healthcare access or immigration status. Without it, even well-intentioned tools can appear arbitrary or discriminatory – and that creates reputational risk for governments.
Compounding the issue is training data.
Many large language models (LLMs) rely on open-source datasets that can house irrelevant, offensive, or geographically mismatched content.
With many advocates calling for strengthened regulatory guardrails, the Australian Public Service is working to localise and de-bias LLMs. But progress is glacial, opaque and rarely public-facing – exacerbating community suspicion and slowing adaptation.
Regulatory inconsistencies further muddy the waters.
In 2019, the Government’s AI Ethics Principles got the ball rolling – but as per Hinton’s warnings, far more investment in AI safety is needed before public trust is earned.
A 2020 report titled The Human Right to Democratic Control of Artificial Intelligence, submitted to the Australian Human Rights Commission, called for a “democratically determined legislative framework”.
Fast-forward to the present day and no framework has been legislated – exposing departments to treacherous waters in procurement, risk, and accountability.
Net result? Paralysis.
An overabundance of caution rather than a confident, coordinated AI rollout. Australia’s government may not be giving this the attention and focus it deserves to maximise the upside and minimise the downside.
Nevertheless, as AI Professor Nick Colosimo reminds us, “AI is one of the most significant inventions mankind will ever make.”
But without a baseline of legitimacy, AI risks becoming a white elephant – stranded, unable to gain traction in the real-world applications that could catalyse more equitable, safe and inclusive communities.
Path Forward
Building trust in government AI demands more than regulatory lip service.
It requires sustained investment in transparency, safety and co-design with the communities these systems impact.
Developing sovereign AI from local datasets would promote inclusivity by grounding systems in Australian dialects, contexts and culture.
Yes, it will demand greater investment and add complexity – but it may be the only path to AI that Australians can genuinely trust to deliver its immense productivity potential.
Kuperholz offers a compelling piece to the trust puzzle: a not-for-profit framework called <M>OR<H> – Machine or Human – which watermarks digital content with verifiable metadata as AI-generated or human-made.
“It’s like a nutrition label for our information diet,” he says. “No tag, no trust” – an elegant solution enabling the required societal shift from trusting by default to distrusting by default.
Embedding certification and proof-of-origin protocols could enhance confidence in digital content and the AI ecosystems our Government will soon rely on.
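To make the idea concrete, here is a minimal sketch of how a proof-of-origin tag could work: a piece of content is labelled as machine-made or human-made, and that label is bound to the content with a signature so any tampering is detectable. The field names, HMAC scheme and keys below are illustrative assumptions for this article, not the actual <M>OR<H> implementation.

```python
# Minimal sketch of a signed "proof-of-origin" tag (illustrative only).
import hmac, hashlib, json

SECRET_KEY = b"issuer-signing-key"  # hypothetical key held by the tag issuer

def tag_content(content: str, origin: str) -> dict:
    """Attach a signed origin label ('machine' or 'human') to content."""
    payload = {"origin": origin,
               "sha256": hashlib.sha256(content.encode()).hexdigest()}
    signature = hmac.new(SECRET_KEY,
                         json.dumps(payload, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return {**payload, "signature": signature}

def verify_tag(content: str, tag: dict) -> bool:
    """'No tag, no trust': content is trusted only if its tag still checks out."""
    payload = {"origin": tag["origin"],
               "sha256": hashlib.sha256(content.encode()).hexdigest()}
    expected = hmac.new(SECRET_KEY,
                        json.dumps(payload, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["signature"])

draft = "Your welfare claim has been assessed."
tag = tag_content(draft, origin="machine")
print(verify_tag(draft, tag))        # True: label intact
print(verify_tag(draft + "!", tag))  # False: content altered after tagging
```

In practice a scheme like this would rely on public-key signatures and shared standards rather than a single secret key, but the principle is the same: the origin label travels with the content and can be checked by anyone.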
Kuperholz’s adaptation of Melvin Kranzberg’s first law of technology, spliced with a quote from Bernard Shaw, is worth remembering:
“AI is not good. AI is not bad. AI is not neutral… But we can steer towards a bright AI future.”
And if we’re putting AI behind the wheel of public services, steering is the least we should demand on the road to future-proofing our systems – ensuring prosperity is delivered with clarity, care and consent.




