Artificial intelligence is transforming accounting and advisory services, promising faster data analysis, streamlined compliance, and predictive insights. Yet, a recent Deloitte incident has reminded professionals that with every leap in innovation comes a risk of over-reliance.
The incident centred on a report commissioned by the Department of Employment and Workplace Relations (DEWR) for $440,000, which was found to contain significant factual errors, including fabricated academic references and a false quote attributed to a Federal Court judge.
Deloitte has acknowledged that parts of the report were produced using AI tools and, according to multiple media reports, has agreed to refund a portion of the project fee. The corrected version of the report has since been reissued, with the Department stating that its core findings remain unchanged.
The Problem with “Hallucinations”
AI “hallucinations” occur when generative systems produce text or data that appear credible but are entirely made up. This happens because large language models are trained to predict patterns, not verify facts. When the dataset lacks relevant information, the AI fills in gaps by constructing convincing but inaccurate outputs.
In accounting and advisory work, such errors are not just technical glitches: they can erode client trust, breach professional standards, and misinform strategic decisions. The Deloitte case is a high-profile example of what can happen when speed and automation outweigh validation and oversight.
Why Accountants Should Care
While most firms aren’t commissioning government-level reports, AI has quietly entered everyday accounting processes, from summarising tax updates to drafting advice or generating client reports. When used without supervision, the same risks can emerge: inaccurate interpretations of tax law, unverified financial assumptions, or recommendations that don’t hold up under audit.
This makes human oversight and ethical governance critical. Accountants, bookkeepers, and advisers must remain the gatekeepers of accuracy. Accounting bodies are also beginning to issue guidance reminding practitioners that professional judgment cannot be outsourced to technology.
Responsible AI Adoption: Key Safeguards
To use AI effectively and safely, firms can:
- Validate all AI outputs. Treat generated content as a draft, not a finished product.
- Use trusted data sources. Ensure AI systems connect to verified databases or internal records.
- Maintain version control. Keep clear documentation of how advice and reports were produced.
- Train staff continuously. Equip teams to spot AI inaccuracies and understand when to override automation.
- Build an ethical framework. Adopt internal policies that define acceptable AI use within the firm.
What This Means for Clients
For business owners and clients, the message is also clear: AI tools can improve efficiency, but they should never replace professional advice. The value of a skilled accountant or adviser lies in interpretation, discernment, and accountability: qualities no algorithm can replicate.
At Supervision, we see technology as an enabler of expertise, not a substitute for it. We use digital tools to streamline bookkeeping, compliance, and reporting processes, but we ensure that every output is reviewed and verified by professionals. Responsible innovation, where human insight guides automation, is what keeps advice trustworthy and compliant.
AI is changing how the industry operates, but as the Deloitte case shows, the fundamentals of professional integrity remain the same. Technology can enhance speed and scope, but accuracy and trust still come from people.