The new risk for leaders: automation bias

Every day artificial intelligence (AI) is becoming more ingrained in our lives – be it through AI overviews in online search queries, using the likes of ChatGPT to answer questions or help with life admin, or leveraging AI-powered tools now embedded in most workplace systems.
It’s safe to say we are now surrounded by AI, and looking ahead, this trend is only set to increase.
Interestingly, the more “human-AI collaboration” takes hold, the more unexpected phenomena emerge – phenomena such as ‘automation bias’, where people consciously or subconsciously come to trust the judgement of automated systems like AI over their own.
It would appear we are approaching an uncertain turning point, and one which could have deep implications for leaders, teams and organisations.
Outsourcing decision making to machines
We know decision making and critical thinking are skills learned over a lifetime. We build these muscles every time we flex them, and we pass on wisdom to new generations to help them learn to do the same.
But what happens when smart technologies enter the scene? Technologies that we are told have capabilities far beyond our own, even if we don’t quite understand how they work or their true limitations?
We’re seeing the answers to these questions come to light right now, and they are concerning. Younger generations have taken to AI like a duck to water, with many learning to prize its outputs over all others. Older generations may be more discerning, but as time pressures mount and the need for efficiency increases, decision fatigue sets in and the temptation to outsource to AI is understandably strong.
In doing so, though, we inadvertently create mental shortcuts that deprive us of the opportunity to flex our own critical thinking muscles, and the effects are already starting to show.
If we consider these effects in the context of a modern workplace, we can see them playing out through:
- Reputational damage caused by over-reliance on AI to produce reports that turn out to contain misinformation.
- Leadership uncertainty about the quality of information presented to them due to team reliance on AI.
- Leaders outsourcing staff learning and development opportunities to AI, leaving less experienced staff to develop a bias towards AI-generated outputs.
AI-related opportunities and challenges were recently highlighted by The Hon Patrick Gorman MP in a speech at the Institute of Public Administration Australia ACT – AI Summit. He spoke of the need for public servants to learn new AI skills, noting that “…building a digital future people can trust is not just a technical challenge. It is about leadership and culture”.
He went on to say that “Every APS value applies with every use of AI. Impartial, Committed to Service, Accountable, Respectful, Ethical, and Stewardship. ‘AI wrote it, not me’ is the 21st century equivalent of ‘the dog ate my homework’. You are personally accountable for your use of generative AI.”
This reminder of personal accountability is mirrored in AI strategies developed by departments across government, which stress that AI may support decision making, but responsibility ultimately sits with human decision makers.
When systems decide, but you are accountable
The complexity of automation bias (preferencing AI judgement over our own, even when contradictory and more accurate information is available) was discussed in a recent paper, Exploring automation bias in human–AI collaboration.
Researchers highlighted the challenge of calibrating trust and avoiding over-reliance on AI, given the cognitive and situational influences at play. They also stressed the need for improved human–AI interaction design, along with critical engagement with AI to support independent verification in decision making.
For leaders in the public service, this would require:
- Carefully balancing trust in AI with human oversight.
- Developing staff capability to critically assess AI outputs.
- Ensuring systems support transparent, accountable decision making.
Leaders would need to remain vigilant against the urge to default to AI, and guard against fostering over-reliance on it within their teams. Doing so preserves confidence in human decision making, and means leaders feel less exposed when presenting to senior leaders – knowing the information they are working with is verified and well-founded.
Essentially, it’s a crucial step to maintain trust in ourselves, trust in others, and trust across an organisation. Because ultimately, the question now is not whether to use AI. It’s how do you hold your own judgement in the presence of AI? That’s the real challenge.
If decision fatigue or work overload has you at risk of burnout, please book a 15-minute coaching session. It’s free, and can offer valuable insights on how to overcome it (without relying on AI).
