
Designing AI That Serves Us: How Human-Centred AI is Shaping Public Services
Article summary
- Human-centred AI prioritises dignity, accountability, and inclusion over speed or cost savings, aligning intelligent systems with real human needs, not just data and efficiency goals.
- In healthcare, AI’s first major impact will be reducing admin workloads, freeing up clinicians’ time while demanding transparency and clear accountability to build public trust.
- Designers play a pivotal role in shaping responsible AI, ensuring systems are trustworthy, transparent and resilient to failure, especially in high-stakes public sector settings.
By Nikki Stefanoff
In case you hadn’t noticed, artificial intelligence (AI) is no longer just hype. It’s the online chatbot answering your banking questions, the facial recognition scanning you through airport security, the tool predicting what you’ll click on next.
But as AI shifts from convenience to critical infrastructure, a bigger question looms: are these systems genuinely improving life for the people they’re meant to serve, or just making things faster and cheaper for the organisations behind them?
Across the public and private sectors, there’s growing recognition that how we design, develop and deploy AI matters deeply. It’s not about how quickly we adopt AI, but whether we can do so responsibly, transparently, and with care.
So, what role can human-centred designers play in keeping these systems accountable, inclusive, and fair? And what can we learn from how it’s already playing out, particularly in complex spaces like healthcare?
Enter human-centred AI.
Human-centred AI: the basics
At its heart, human-centred AI is about keeping people, not just data or efficiency, at the centre of how we’re building intelligent systems.
As Dr Fei-Fei Li, co-founder of Stanford’s Human-Centered AI Institute, puts it: AI should “augment humanity, not replace it.”
That means aligning AI with human values, needs, and lived experiences. It means designing for dignity, not just optimisation.
And it means embedding accountability, transparency and care into the systems that increasingly shape our lives.
Global shifts and local signals
Around the world, this thinking is gaining traction.
The UK’s Alan Turing Institute is pioneering participatory governance models that involve civil society and affected communities in shaping how AI is developed and regulated, and earlier this year, the UK Government launched Humphrey – a suite of AI tools designed to streamline civil service operations.
All part of a broader push to modernise public services while protecting human oversight.
In Australia, we’re at a pivotal moment. While AI is already being used in government, formal frameworks for its responsible use are still emerging.
The CSIRO’s Responsible AI Toolkit and the Australian Human Rights Commission’s calls for stronger algorithmic accountability point to the growing urgency.
Trust, transparency and the human cost of AI in healthcare
Dave Heacock is a Canberra-based digital strategist advising the Department of Health and Aged Care.
He’s part of a team shaping the future of clinical decision-making, national data standards, and the ethical use of machine learning and generative AI in public health.
The potential of AI in healthcare is huge, Heacock says, but so are the risks. One of the biggest challenges is decision-making.
“Imagine you’re triaging an elective surgery list – a decision that impacts people’s lives. AI can either make that call or support a human to make it.
“Either way, transparency is critical,” he says.
“Patients need to know how that decision was made. Was it fully automated? Was it human-led with AI support? Where did the data come from? Can we see the algorithm?
“That’s what builds trust.”
Reliability is just as important.
“There’s still no clear roadmap for dealing with hallucinations in AI models,” he adds.
“And in healthcare, that matters.”
AI’s first real job in healthcare? Cutting the admin
So where does Heacock think AI will show up first in public healthcare?
“Taking away the administrative drudge,” he says. “That’s the big value proposition: giving clinicians more time with their patients.”
The private sector is already paving the way. AI scribes like Heidi are helping reduce paperwork, and voice interfaces are being developed so clinicians can dictate notes as they go.
“If I’m moving between wards, I don’t want to stop and type,” Heacock says. “AI can do that admin work so I can focus on care.”
Another promising area is clinical decision support: tools that assist, not replace, human expertise.
“There’s a project underway to expand pharmacists’ roles in treating UTIs, a condition that takes up a lot of GP time,” Heacock explains.
“For that to work safely, AI needs to guide the process by asking the right questions and flagging risks, all while keeping the human firmly in the loop.”
When AI gets it wrong, design needs to get it right
But while AI’s early wins may come from reducing admin, Heacock says the real challenge for designers lies in what comes next.
“Stay healthily sceptical about what AI can and can’t do. Understand its limitations and design with failure in mind.”
That means building off-ramps. If a chatbot gets it wrong, users need a fast path to a human. If a system misfires, it must be easy to course-correct.
“People are far more forgiving of human error than machine error,” Heacock says. “One bad answer and they’ll lose trust in the whole system.”
And then there’s accountability.
“If I ask a colleague for clinical advice, they’re accountable. But with AI, that’s murky, and that unsettles people.
“Designers need to think hard about that.”
Designing the next chapter
As government agencies start investing in AI and shaping strategy around its use, the role of design is shifting. It’s no longer just about user flows or service journeys; it’s about ethics, systems, and trust.
Heacock says designers have a critical role to play, but only if they stay grounded.
“Designers need to understand AI’s limitations. That has to be central to their work,” he says.
“Because ultimately, AI shouldn’t just be intelligent. It should be wise.”