The story of how an enterprise finance and tax leader incorporated AI to improve their operations.
Imagine this: The year is 2025, and the CFO of a global enterprise asks her Head of Indirect Tax, “Can we trust the new AI system with our VAT filings?”
A year ago, the answer would have been a firm no. Tax was about exactitude. The idea of a machine making judgment calls sent shivers down tax directors’ spines.
But the Head of Indirect Tax now smiles and says, “We can trust it to help us be more compliant than we’ve ever been, as long as we keep our eyes open.”
What caused that shift? Drawing on my experience speaking with tax teams at organizations like Meta and Netflix, I'll use a fictional (but representative) organization's journey to illustrate:
- Whether AI is actually trustworthy enough for tax teams
- Critical guardrails humans need to keep tax AI in check
- Risks that AI introduces (and the risk of ignoring it altogether)
To advance, tax teams must let go of the illusion of certainty
Traditional, deterministic tax technology operated on hard-coded rules: if X happens, apply Y. This made compliance feel predictable and safe, at least as long as transactions followed expected patterns.
But tax rarely works in absolutes. When real-world complexity crept in, edge cases were either missed by deterministic tech or handled manually. The result was a comforting but misleading sense of certainty, one that often masked hidden risk.
Probabilistic AI, on the other hand, is upfront about uncertainty. It says, “I think this is 93% correct.” At first, that makes us uneasy; we prefer 100%. Yet, as the tax team at our story’s company learned, 93% of 100,000 transactions reviewed by AI can be better than 100% of 10,000 reviewed by humans. Breadth and vigilance at scale trump perfect precision in a narrow lane.
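The arithmetic behind that claim is worth spelling out. Here's a quick back-of-the-envelope comparison; the underlying error rate is an assumption for illustration, not a figure from the story:

```python
# Illustrative comparison: broad AI review vs. narrow human review.
# Assumption: 1% of transactions contain a genuine error.
error_rate = 0.01

# The AI reviews all 100,000 transactions but catches only 93% of errors.
ai_reviewed = 100_000
ai_recall = 0.93
ai_errors_caught = ai_reviewed * error_rate * ai_recall  # 930 errors caught

# Humans review a 10,000-transaction sample with perfect accuracy.
human_reviewed = 10_000
human_recall = 1.0
human_errors_caught = human_reviewed * error_rate * human_recall  # 100 errors caught
```

Under these assumptions, the "imperfect" AI catches roughly nine times as many real errors, simply because it looks at everything.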
The AI that cried wolf and uncovered real issues
Let’s go back to the Head of Indirect Tax at our example organization. Over the past year, her department undertook a journey of experimentation. They deployed Fonoa’s AI-powered anomaly detector across their invoice data. In the first week, it flagged 500 transactions as unusual.
Unfortunately, because the AI wasn't 100% accurate, many of those flags turned out to be false positives. Yet buried in the list were two costly issues: one due to an outdated tax rate, another from a missing VAT charge caused by a system glitch. Their old rules-based processes had missed both.
The lesson: AI isn’t perfect, but it surfaces issues at scale that deterministic systems and processes never saw. One team member summarized it best: “The AI has bad bedside manner, but it’s a bloodhound for finding our mistakes.”
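Fonoa's detector internals aren't public, but the core idea of statistical anomaly flagging is easy to sketch: compare each invoice's effective tax rate against the population and escalate outliers. The field names and threshold below are hypothetical:

```python
from statistics import mean, stdev

def flag_anomalies(invoices, threshold=2.5):
    """Flag invoices whose effective tax rate deviates sharply from the norm.

    `invoices` is a list of dicts with hypothetical keys 'id', 'net',
    and 'tax'; `threshold` is a z-score cutoff chosen for illustration.
    """
    rates = [inv["tax"] / inv["net"] for inv in invoices]
    mu, sigma = mean(rates), stdev(rates)
    flagged = []
    for inv, rate in zip(invoices, rates):
        # An unusual effective rate (e.g. a missing VAT charge) stands out.
        if sigma and abs(rate - mu) / sigma > threshold:
            flagged.append(inv["id"])
    return flagged
```

A glitch like the missing VAT charge in the story shows up as a 0% effective rate in a sea of 20% invoices, which this kind of check catches immediately.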
Shift AI evaluation from accuracy to compliance outcomes
The team stopped asking, “Is the AI always right?” and instead asked, “Is AI helping us get it right more of the time?” They introduced triage workflows, refined the AI’s sensitivity, and adopted risk-based reviews.
Eventually, the number of flags dropped from 500 to 200, a volume that was both manageable and meaningful. Tax analysts even began enjoying the detective work. It was a welcome change from the monotony of manual checks.
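The story doesn't describe the team's exact triage workflow, but risk-based review often reduces to ranking flags by estimated exposure and escalating only what the team can actually review. A hypothetical sketch:

```python
def triage(flags, capacity=200):
    """Rank anomaly flags by estimated exposure and keep the riskiest ones.

    Each flag is a dict with hypothetical keys: 'id', 'confidence'
    (model score, 0-1), and 'amount' (tax value at stake). `capacity`
    is how many flags the team can realistically review.
    """
    # Exposure = how likely the flag is real x how much money is at stake.
    ranked = sorted(flags, key=lambda f: f["confidence"] * f["amount"], reverse=True)
    return ranked[:capacity]
```

This is how "500 flags" becomes "200 flags" without retraining anything: the model still sees everything, but humans spend their time where confidence and monetary impact intersect.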
Within a quarter, internal audits found 30% fewer errors in tax filings. The indirect tax team reported zero issues in their latest VAT audit, a huge improvement over past years.
Their success wasn’t about eliminating mistakes. It came from managing AI’s imperfections intelligently. The AI was treated like a junior advisor: tireless but needing supervision. With human oversight, performance soared.
AI-human collaboration elevates the role of tax teams
Consider the broader implication: tax compliance is now a team sport between humans and AI.
Humans bring the conscience, the context, the nuanced judgment that machines lack. AI brings the data muscle, the pattern recognition, the relentless consistency that humans can’t sustain.
In the old deterministic world, humans often acted like machines: ticking boxes, reconciling numbers. In the new world, paradoxically, humans get to be more human. They focus on analysis, decision-making, and strategy, while the AI churns through the grunt work and presents findings.
One tax director described it as, “Elevating my team from tax preparers to tax interpreters.” The conversations in the office have changed from “Did we enter all the data?” to “What is the data telling us? Where are we exposed? Where can we save?” AI opened that door.
This human-in-the-loop model created a cultural shift. AI didn’t replace the team; it freed them to become strategic business partners.
Risk isn’t new, AI just makes it more visible
Of course, this transition doesn’t come without anxiety. Let’s address the elephant in the room: risk.
Every time an AI makes a bizarre mistake, it makes headlines. A “hallucination” by a chatbot, an algorithm that amplifies a bias. Tax executives see those and worry: “What if the AI suggests something crazy like not charging VAT on everything next month?”
Tax leaders are right to worry about AI errors. But the truth is, human errors already carry risk in tax operations; the fear stems from AI’s unfamiliarity. One way to bridge that gap is shadow testing: running the AI behind the scenes without acting on its suggestions until they can be relied on with confidence and consistency.
That’s what this company did. Over time, they found AI often aligned with human judgment. When it didn’t, it was about 50/50 whether the AI or the human was right. Those odds were enough to earn it a seat at the table.
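Shadow testing is straightforward to operationalize: run the model in parallel, record its suggestion alongside the human decision, and act on nothing. A minimal sketch, with hypothetical function and field names:

```python
def shadow_test(transactions, model, human_decisions):
    """Compare model suggestions to human decisions without acting on either.

    `model` is any callable mapping a transaction to a decision;
    `human_decisions` maps transaction id to the decision actually taken.
    Returns the agreement rate and the disagreements for later review.
    """
    disagreements = []
    for txn in transactions:
        suggested = model(txn)
        actual = human_decisions[txn["id"]]
        if suggested != actual:
            # Neither side is assumed correct; disagreements go to review.
            disagreements.append((txn["id"], suggested, actual))
    agreement = 1 - len(disagreements) / len(transactions)
    return agreement, disagreements
```

The disagreement list is the valuable output: reviewing it is what told the company its AI was right about half the time when the two diverged.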
Governance gives teams confidence, not constraints
At first, the company’s compliance leaders were wary of AI governance. They feared it would introduce red tape and slow everything down. But once implemented, governance became a confidence booster.
By documenting decisions, tracking model versions, and requiring sign-offs, the team didn’t feel limited. They felt secure. It was like adding safety rails on a high balcony. Once you trust the rails, you can walk closer to the edge and enjoy the view.
With proper oversight and logs in place, the team felt comfortable letting AI handle more. They even began allowing it to automate certain decisions, knowing they could always step in and make corrections. The mindset shifted from “What if AI messes up and we get in trouble?” to “If AI messes up, we’ll catch it and learn from it, just like we do with human errors.”
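The governance mechanics described above (documented decisions, tracked model versions, required sign-offs) map naturally onto a structured audit record. A minimal sketch; every field name here is illustrative:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable AI-assisted decision: what was suggested, what was
    decided, by which model version, and who signed off."""
    transaction_id: str
    model_version: str
    ai_suggestion: str
    final_decision: str
    reviewer: str
    timestamp: str

def log_decision(log, **fields):
    """Append a timestamped, sign-off-bearing record to the audit log."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(), **fields
    )
    log.append(asdict(record))
    return record
```

A log like this is what turns "what if AI messes up?" into "we'll catch it": every automated decision carries the model version and reviewer needed to trace and correct it.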
After all, no process is zero-error. The goal is to manage error, not pretend it doesn’t exist.
The real risk is standing still
After the Head of Indirect Tax explained the team's journey, the CFO asked, “So, are we taking on more risk or less?”
The answer: “In some ways more, in most ways far less.”
Yes, AI will be wrong sometimes. But with visibility into far more transactions, the team traded a few manageable errors for the chance to catch the big, invisible ones. That’s smart risk management: it’s better to face known risks than operate in the dark.
Tax regulators aren’t shying away from technology
You may be wondering: what role do tax authorities play in AI applications for tax? It’s safe to assume regulators are already eyeing the benefits to them when AI is used effectively.
Think of a scenario in the near future: A tax authority, noticing that companies using a certain AI process have significantly fewer errors in filings, starts encouraging its use or even including it in Tax Control Framework agreements. It’s not far-fetched, given that governments already provide software licensing and certifications for e-invoicing solutions.
Companies that figure out this balance early are the ones engaging with regulators and shaping the narrative.
Tax’s path to AI requires a cultural shift, not just a tech one
The real transformation required for a tax team to adopt AI is cultural. Letting go of the need for 100% certainty is what allows teams to become more accurate, more often, and at greater scale.
AI is elevating tax from a back-office function to a strategic partner. But it only works when paired with good judgment and strong governance.
Those who cling to rigid rule sets may feel safer in the short term. But they risk missing more in the long run. Those who adapt and manage uncertainty will move faster and catch more.
We’re not throwing out the rulebook. We’re making it smarter.
AI doesn’t replace tax professionals. It frees them to lead, orchestrate, and interpret. Accuracy still matters, but now it’s measured in outcomes, not inputs.
One day, we’ll look back and ask how we ever managed this work with static rules and brute force. The answer: we barely did. Now we run alongside the data, sometimes a step ahead.
This story can be real with the right technology in place
The AI-forward tax team described here isn’t just a fairytale. It’s already taking shape inside leading organizations that are willing to challenge old assumptions, embrace uncertainty, and invest in the right tools.
Some companies are building tools internally. But for most, it makes more sense to adopt third-party solutions purpose-built for scale.
At Fonoa, we’re embedding AI into every layer of our platform, from research and anomaly detection to classification and dynamic rules. The future of indirect tax is scalable, adaptive, and collaborative. And AI is how we get there.
Ready to see how AI can transform your global indirect tax operations? Connect with our team to explore Fonoa’s AI-powered platform built to automate your entire tax lifecycle, reduce risk, and unlock strategic value at scale.