
Will AI Ever Replace Auditors? What the Evidence Really Says

Jimmy Braimah

The Partner Brief | Edition 7

AI has moved from theory to live deployment in core audit workflows, from full‑population testing and anomaly detection to journal‑entry screening and continuous monitoring. This edition looks past the hype: what AI is already doing well, where its limits are, and why the evidence points to augmentation of human auditors, NOT their replacement.

AI in Audit: Hype vs Measurable Change

Across the profession, AI has moved from conference slides to live deployments inside core audit workflows. Global networks and mid‑tier firms are piloting or scaling tools for data ingestion, anomaly detection, journal‑entry testing, and end‑to‑end file reviews. Regulators and standard‑setters now treat technology as a mainstream audit issue rather than a side topic: both the IAASB and national regulators have issued guidance on data analytics and emerging technologies in audits, explicitly recognising their growing use in risk assessment and substantive testing.¹ ²

The crucial shift is conceptual: AI is no longer seen as “another piece of software”, but as an enabler of different audit strategies: full‑population testing instead of pure sampling, more dynamic risk assessment, and continuous assurance models in some sectors.³ ⁴

So, is this the beginning of the end for human auditors or the start of a different kind of profession?

What AI is Already Doing Well in Audit

Current deployments of AI and advanced analytics focus on areas where machines clearly outperform humans on scale and speed:

  • Full‑population testing and outlier detection. Machine‑learning and rules‑based analytics can test 100% of transactions for defined characteristics (e.g. unusual dates, amounts, counterparties, or approval patterns), rather than relying solely on sampling.³ ⁵ This allows firms to surface outliers, clusters and unusual relationships that manual methods are unlikely to detect, particularly in high‑volume environments.
  • Journal‑entry and fraud‑risk screening. Research and regulator commentary show growing use of AI and analytics to profile journal entries by risk factors (for example, postings near period end, manual entries, or certain account combinations) and to prioritise them for human review.² ⁵ Even when models are relatively simple, they materially increase coverage of higher‑risk items.
  • Automated matching, reconciliations and data preparation. Studies of audit automation and robotic process automation (RPA) in accounting functions highlight significant time savings and error reduction when repetitive, rules‑based tasks such as three‑way matches, roll‑forward schedules and basic reconciliations are automated.⁶ The same pattern holds in practice firms: automation reduces manual grind and frees scarce staff time for higher‑judgement work.
  • Enhanced risk assessment and anomaly surfacing. By combining historical data, external indicators and real‑time transactional feeds, AI tools can flag segments, entities or balances with unusual patterns (e.g. revenue spikes inconsistent with peers, or evolving related‑party networks).³ ⁴ This supports more targeted audit responses and aligns with regulators’ expectations for robust, data‑informed risk assessment.
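To make the journal‑entry screening idea concrete, here is a minimal, purely illustrative sketch in Python. The risk factors, thresholds and data fields are hypothetical and far simpler than any production audit analytics platform; real tools layer in model‑driven scoring, account‑combination logic and approval‑pattern analysis.

```python
from datetime import date

# Hypothetical period end for the illustration.
PERIOD_END = date(2024, 12, 31)

# Each rule returns True when an entry exhibits a risk characteristic
# of the kind described above (illustrative thresholds only).
def near_period_end(entry, window_days=3):
    return abs((PERIOD_END - entry["date"]).days) <= window_days

def is_manual(entry):
    return entry["source"] == "manual"

def round_amount(entry, base=10_000):
    return entry["amount"] % base == 0

def weekend_posting(entry):
    return entry["date"].weekday() >= 5  # Saturday=5, Sunday=6

RULES = [near_period_end, is_manual, round_amount, weekend_posting]

def risk_score(entry):
    # Simple unweighted score: one point per rule triggered.
    return sum(rule(entry) for rule in RULES)

def screen(entries):
    # Test the full population, then rank by score for human review.
    return sorted(entries, key=risk_score, reverse=True)

entries = [
    {"id": 1, "date": date(2024, 12, 30), "amount": 50_000, "source": "manual"},
    {"id": 2, "date": date(2024, 6, 14), "amount": 1_234, "source": "system"},
    {"id": 3, "date": date(2024, 12, 29), "amount": 120_000, "source": "manual"},
]

ranked = screen(entries)  # highest-risk entries first
```

The point of the sketch is the workflow, not the rules: every entry in the population is tested, and the output is a prioritised queue for a human auditor, not an automated conclusion.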

The empirical evidence so far points to productivity gains and improved anomaly detection, not full automation of the audit opinion.¹ ³ ⁵

The Benefits: Quality, Efficiency and Talent

Properly implemented, AI and audit automation offer three reinforcing benefits.

  • Audit quality and coverage

  • Efficiency and cost‑to‑serve

  • Busy‑season sustainability and retention

Used thoughtfully, AI can therefore support both audit quality and the human sustainability of the profession.

The Hard Limits: Data, Models, and Accountability

The same evidence base also highlights structural limitations and risks that rule out “AI‑only” audits under current frameworks.

  • Data quality and explainability. AI systems are only as reliable as the data they receive and the models they use. Regulatory and standard‑setter commentary stresses that auditors must understand how tools work sufficiently to evaluate their output, including model assumptions, training data and limitations.¹ ² Black‑box models that cannot be explained or challenged are incompatible with professional scepticism and existing standards.
  • Cybersecurity and data protection. Greater reliance on centralised platforms and large data sets increases the impact of cyber breaches. Professional guidance on technology in audit emphasises the need for robust IT general controls, data‑governance frameworks and supplier risk management to protect client data.¹ ⁴ For partners, this becomes both a practice‑risk and reputational‑risk issue.
  • Bias and unintended consequences. Where AI models are trained on historical patterns (for example, past fraud cases or anomaly flags), they may embed historic biases and blind spots. Academic literature on algorithmic bias warns that models can under‑detect issues in areas that have historically been under‑investigated.⁸ Human oversight is required to question what the model is not flagging.
  • Legal and professional responsibility. Existing audit standards, whether ISA (UK), PCAOB or other regimes, place responsibility squarely on the engagement partner and firm, not on the tool vendor.¹ ² An AI system can support evidence gathering and risk assessment; it cannot sign the report, hold a practising certificate, or bear legal accountability.

For these reasons, leading professional bodies and regulators consistently frame AI as an enabler within the audit process, not a replacement for the human auditor’s responsibility.¹ ² ⁴

How AI Is Changing, Not Replacing, the Auditor’s Role

The most credible research and profession commentary converge on a “centaur model”: technology plus human judgement.³ ⁴ ⁶ ⁷

In practice, this means:

  • Less time on manual grind, more on judgement. As machines handle data extraction, matching and first‑level tests, human auditors should spend more time exercising professional scepticism: challenging management narratives, probing complex estimates, and evaluating business‑model risk.³ ⁴ ⁵
  • New skills: from spreadsheet fluency to data literacy. Auditors increasingly need to understand data structures, model logic and basic statistics to evaluate AI‑generated insights. Professional bodies are updating syllabi and CPD requirements accordingly, emphasising data analytics, technology governance and critical thinking.³ ⁴ ⁷
  • Interpreting and explaining AI outputs. Being able to question why certain items were flagged (or not flagged), understand false positives, and communicate limitations to audit committees becomes a core part of the role. Regulators expect auditors not only to use advanced tools, but to demonstrate that they can critically evaluate those tools’ outputs.² ⁴
  • Ethics and public interest at the centre. No matter how advanced the technology, the key questions (materiality, sufficiency of evidence, going‑concern judgements, and the ethical dimension of audit decisions) cannot be delegated to a model. Professional codes explicitly require human judgement and a public‑interest mindset that AI does not possess.¹ ²

In that sense, AI does not so much replace auditors as raise the bar on what being an auditor involves.

Where Audit Technology is Heading Next

Most forward‑looking analyses point to a convergence of AI with other technologies rather than AI in isolation:

  • Continuous auditing and monitoring. With transactional data increasingly available in near real‑time, firms and technology providers are piloting continuous assurance models in specific contexts (e.g. high‑volume transactional businesses or regulated sectors).³ ⁴ AI is critical to filtering signal from noise at this scale.
  • Integration with distributed ledgers and secure data‑sharing. Exploratory work around blockchain and similar technologies suggests potential for direct verification of transactions and ownership interests.⁹ When combined with AI analytics, this could change how certain assertions are tested, but still requires human judgement on presentation and disclosure.
  • Sector‑specific models and risk libraries. Over time, AI systems can be trained on sector‑specific patterns of normal and abnormal activity (for example, in banking, insurance, or complex revenue environments).³ This will likely make audit responses more tailored but also raises new questions about vendor concentration and systemic model risk.
  • More granular regulatory expectations. As usage matures, expect more explicit regulatory guidance on acceptable uses of AI, documentation requirements, and expectations for model validation, back‑testing and governance.¹ ² Partners will need to treat audit‑tech strategy as part of firm‑wide risk management, not just IT procurement.

For mid‑tier firms, the strategic question is not “if” these trends arrive, but how to engage with them without losing control of cost, risk, or people.

So, Will AI Ever Replace Auditors?

On the weight of current evidence, the answer is NO, for both technical and normative reasons.

Technically, AI excels at pattern recognition, large‑scale testing and anomaly surfacing, but still struggles with context, counterfactual thinking and complex, judgement‑heavy scenarios, precisely where many of the most important audit calls sit.³ ⁴ ⁸

Normatively, society and regulation expect a human professional to stand behind the audit opinion, accountable for exercising scepticism, ethical judgement, and a public‑interest mandate that cannot be encoded into a model alone.¹ ²

What AI will do, and is already doing, is rewrite the economics and talent dynamics of audit:

  • Firms slow to adopt credible AI‑enabled approaches risk higher cost‑to‑serve, weaker coverage, and a less attractive proposition for talent who increasingly expect to work with modern tools.⁶ ⁷
  • Firms that move thoughtfully can improve quality, free human capacity for higher‑order work, and use technology as part of a more sustainable, attractive working model for the next generation.

For partners, the real question is not “Will AI replace auditors?” but:

  • How will we redesign our audit methodology, talent model and technology stack so that AI amplifies, rather than erodes, our professional judgement, culture, and client relationships?

The firms that answer that question clearly, and invest accordingly, are the ones most likely to protect audit quality, deepen client trust, and win the next generation of audit talent.

Baker Thornton’s Take

From our vantage point supporting audit practices across the UK, US and Canada, we see AI in audit playing out in three very practical ways.

First, firms that move early on audit tech are already competing differently for talent. Senior associates and managers now ask explicit questions about data analytics tools, automation and how much “real audit thinking” they will get to do versus manual grind. Where firms can credibly say, “We use AI and analytics so you spend more time on judgement, not spreadsheets,” those roles win on quality of work, not just pay. Where they cannot, mid‑career people increasingly look to competitors, consulting, or industry.

Second, clients are quietly recalibrating their expectations of what a modern audit looks like. Partners tell us that sophisticated finance teams now ask how their auditor is using data analytics and automation, and how coverage has improved versus traditional sampling. AI is becoming part of the credibility story with boards and audit committees: it does not replace the partner’s signature, but it shapes perceptions of rigour and insight.

Third, we see a clear split between firms that treat AI as an IT project and those that treat it as a people and operating model question. The successful group:

  • Link AI adoption directly to busy‑season relief and retention (“Here’s how this will change your day in January, not just our marketing deck”).
  • Invest in data literacy and scepticism skills alongside tools, so staff can challenge and interpret outputs rather than blindly trust them.
  • Build clear narratives for candidates: how audit is changing, what they will learn, and how tech supports, rather than undermines, their long‑term career.

For partners, the risk is not that AI will replace auditors; it is that firms that fail to harness it will become less attractive to both clients and the very people their future depends on.

Our view is simple: AI is now part of your employer brand as much as your methodology. The firms that will thrive are those that use it to lift audit quality, create more interesting work for their teams, and send a clear signal to the next generation that audit is a forward‑looking career, not a legacy one.

About Baker Thornton

Baker Thornton specialises in connecting accounting firms in the UK and CPA practices in the USA and Canada with qualified audit, tax, and accounting professionals who possess a deep understanding of practice environments and operational demands. Our focus is on sustainable placements that enhance your team's technical capabilities for emerging regulatory and market challenges, from MTD implementation to capacity planning for peak periods.

Should you be developing your 2026-2027 workforce strategy, we welcome the opportunity to discuss your practice's specific requirements.

Bibliography

  1. International Auditing and Assurance Standards Board (IAASB) (2020) Exploring the Growing Use of Technology in the Audit, with a Focus on Data Analytics. New York: IAASB.
  2. Public Company Accounting Oversight Board (PCAOB) (2023) Staff Spotlight: Data and Technology in the Audit. Washington, DC: PCAOB.
  3. ACCA (2019) Machine learning: more science than fiction. London: Association of Chartered Certified Accountants.
  4. ICAEW (2018) Artificial intelligence and the future of accountancy. London: Institute of Chartered Accountants in England and Wales.
  5. Appelbaum, D., Kogan, A. and Vasarhelyi, M. (2017) ‘Big Data and Advanced Analytics in the Audit Profession’, CPA Journal, 87(6), pp. 14–19.
  6. Deloitte (2018) The robots are ready: are you? Untapped advantage in your digital workforce. London: Deloitte LLP.
  7. ACCA (2023) Global Talent Trends 2023. London: Association of Chartered Certified Accountants.
  8. Selbst, A.D. and Barocas, S. (2018) ‘The intuitive appeal of explainable machines’, Fordham Law Review, 87(3), pp. 1085–1139.
  9. Kokina, J. and Davenport, T.H. (2017) ‘The emergence of artificial intelligence: how automation is changing auditing’, Journal of Emerging Technologies in Accounting, 14, pp. 115–122.
