Your team is delivering 10x more inaccurate work with AI (here's the fix)


Sponsored by SAP

Staying up to date with SAP was always a big advantage for me in my career.

And learning how finance teams are using AI and data to lead decision making will put you ahead.

To make sure you don’t miss out on expert insights you can use in your work (and to advance your career), join SAP’s finance webinar series now by clicking the button below:

Hello hello!

Tell me: do you spend a lot of time reviewing your team’s work because you’re worried AI made it up?

How many times have you asked yourself these questions in the past month?

"Did they actually analyze this, or did they just ask ChatGPT and copy/paste the result into a document?"

“Are they missing AI hallucinations because they’re working too fast, and not paying close enough attention?”

This is what I'm seeing with finance teams right now.

Your junior analyst used to deliver one piece of ad-hoc analysis per month. Now they are delivering three, plus commentary, plus scenario analysis, plus a board-ready presentation.

This is amazing, yes? Just look at how much more productive your team is becoming.

Except now you spend your entire Friday trying to figure out which parts are accurate and which parts are AI-generated guesses that put your decisions at risk.

The problem is not that your team got lazy.

The problem is that AI broke your review process.

Your juniors can produce 10x more output now. But more output does not mean better work. And you are the one who is stuck trying to figure this out.

And the worst part is that you cannot tell which is which until you're halfway through recreating the analysis yourself.

So here's the question nobody's asking: how do you review work when you don't know how it was created?

Today, I'm going to show you how to solve this. Not by stopping them from using AI, or by reviewing everything twice. But by using something I call The Crystal Method.

A practical system that makes review faster, not slower.



The Review Bottleneck

This is the situation.

Your junior analyst comes to you on Wednesday with a complete quarterly business review presentation. Thirty slides with variance analysis and commentary. Plus, three scenario models and benchmark comparisons with peer companies.

Last quarter, this would have taken them two weeks.

This time, they finished it in three days!

You open the file.

Slide 5 shows a variance waterfall with detailed commentary on why OpEx increased 12%. This looks convincing. But then you spot that the commentary mentions a "headcount increase in Q2" that you are sure did not happen.

You check the data, and find there is no headcount increase.

The AI made it up. And your analyst didn't catch it because they were moving super fast.

So now you're going through it slide by slide, checking every claim. And the three days your analyst saved, you are now spending on verification instead.

This is the review bottleneck. And it is built on 3 problems.


Problem 1: The Volume Explosion

AI didn't just make your team faster. It made them prolific.

You now have a lot more quantity, but are unsure of the quality.

Because the old review process doesn't work anymore.

You used to know how the work was created, and what questions to ask to ensure accuracy.

Now, you don't know if they spent two hours thinking through the analysis or two minutes prompting ChatGPT.


Problem 2: Talking vs Doing

Someone on your team delivers a Google Doc titled "Revenue Forecast Analysis." You open it. It's 2,000 words of AI-generated commentary about how they could build a revenue model.

But there's no actual model. Just a document talking about the work, leaving you with yet another decision to make.

It looks productive, but in reality, nothing got done.

Because they did not build anything. They generated something (and there is a big difference).


Problem 3: The Transparency Gap

Your team is using AI. But they're not documenting:

  • What was AI-generated vs. manual analysis
  • Which parts were reviewed vs. just accepted
  • What context they gave the AI
  • What logic they verified themselves

So when you review their work, you're starting from zero, with no way to check without redoing everything.

The promise was that AI would free up your time for strategic work.

Instead, you're spending more time on quality control than ever before, trying to unpick the AI black boxes your team is creating faster every day.

This is not sustainable.



The Crystal Method

So here is what you should do.

Stop adding more review layers.
Stop trying to control whether people use AI.
Stop treating every piece of work like a ‘black box’ that needs full reconstruction.

Instead, build something crystal clear.

And no, this is not another governance framework or 47-page policy document that never gets used. It's a practical system that answers three questions for every piece of work:

  1. What did AI do? (Which parts were generated vs. manual)
  2. How was it done? (Which tool, which prompt, what context)
  3. What was verified? (What the human checked vs. accepted)

When your team knows how to document this, review gets faster, not slower.

Because, instead of checking everything, you are checking only the right things.


The 3 Shifts That Make This Work

What is super important here is that this is more of a mindset change than anything else.

Shift 1: From "Did they?" to "How did they?"

Instead of trying to detect AI usage, you ask your team to be open about it.

The question is not whether they used ChatGPT or not.

The question is: “Did you use it to build something auditable, or did you just copy-paste an answer you cannot explain?”

Shift 2: From Inconsistent to Consistent

Right now, everyone on your team is using AI differently: different tools, different prompts, and different levels of verification (if any).

With standardization, you define: "For variance analysis, we use this tool, this prompt template, plus this verification checklist."

When your team follows the same process, quality becomes consistent, and review becomes easier.

Shift 3: From Review All to Spot Check

When you know how work was created, you don't need to recreate it.

You verify the critical data.
You spot-check numbers.
You check the logic.

But you're not starting from zero every time.

For this to work, you do not need prompt engineers. You do not need six months of expensive AI consulting. You just need your team to understand what a good AI method looks like and how to document it.

This is the Crystal Method.

And in the next section, I'll show you exactly how to build this in your team.



The 5 Stages of The Crystal Method

Here's the roadmap I recommend to the finance teams I train. Five stages. Each one builds on the last, and anyone can do this, right now.


Stage 1: Build Team Awareness (What's Possible, What's Not)

Sit down with your team and openly discuss where AI actually helps and where it creates problems.

Don't start with "what can AI do?". Start with "what takes us too much time and what must be 100% accurate?"

Then apply these rules.

  1. Use AI when you need to BUILD something you can audit.
  2. Don't use AI when you need a deterministic answer.

(A deterministic system is one that, given the same initial data and assumptions, will always produce the exact same outcome.)
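To make the distinction concrete, here is a minimal Python sketch (the variance numbers are made up) showing why a plain calculation is deterministic in a way an AI-generated answer is not:

```python
def variance_pct(actual: float, budget: float) -> float:
    """Deterministic: the same inputs always produce the same output."""
    return round((actual - budget) / budget * 100, 1)

# Run it twice with the same inputs -- the result never changes,
# so anyone on the team can re-run it and audit it.
first = variance_pct(actual=1_120_000, budget=1_000_000)
second = variance_pct(actual=1_120_000, budget=1_000_000)
assert first == second == 12.0

# An LLM asked "what's the variance?" gives no such guarantee:
# the same prompt can return different numbers on different runs.
```

That repeatability is exactly what "you can audit it" means in rule 1 above.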

Let me show you what I mean:

Good AI Use Cases:

  • Drafting variance commentary from data you already analyzed - AI speeds up the writing, and you can verify the logic matches your analysis.
  • Structuring (not running) scenario analyses - AI helps you think through the structure (scenarios, assumptions etc), then you build the Excel model yourself with real formulas.
  • Cleaning and organizing messy data exports - AI can spot patterns, suggest formatting corrections, and help standardize inconsistent data.

Personal example: In my AI Finance Club webinars, I always talk about the 88 sales tax exports I had to deal with for my compliance. ChatGPT helped me with this data in hours (and fortunately I passed the audit, and didn’t miss my deadline).

Bad AI Use Cases:

  • Asking it to forecast revenue with no context - You get a number you can't explain, so when somebody asks "how did we get here?", you cannot tell them.
  • Asking it to explain variances without seeing your actual data - AI will invent realistic sounding drivers that have nothing to do with your business reality.
  • Using it to build complete financial models from scratch - You end up with a lot of logic only the AI knows about, that you didn't design and can't defend to the board.

Stage 2: Define Your Team's Scope (What We Do, What We Don't)

Now that your team knows what is possible in theory, you need to decide what you will actually do.

This is where you get specific about your team's approved use cases.

What to decide:

Which three to five tasks will we standardize first? Don't try to solve everything at once. Pick the highest-value, most repeatable work.

Examples:

  • Drafting variance commentary from verified analysis
  • Structuring scenario modeling frameworks (not running the models)
  • Cleaning and formatting data exports from ERP/systems

What are we explicitly NOT doing with AI? Be clear about the boundaries.

Examples:

  • No AI-generated forecasts without showing your methodology
  • No asking AI to explain variances without providing actual data
  • No using AI to build complete financial models from scratch

Remember:

Use AI to generate, not calculate.

If it's calculated, it needs an audit trail (e.g. Python code or an Excel file with formulas).
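Here is a minimal sketch (with hypothetical numbers) of what an audit trail in code can look like: every intermediate step is recorded, so a reviewer can trace the result without re-deriving it.

```python
def opex_variance_with_trail(actual: float, budget: float):
    """Compute an OpEx variance and record every step as an audit trail."""
    trail = []
    delta = actual - budget
    trail.append(f"delta = {actual} - {budget} = {delta}")
    pct = delta / budget * 100
    trail.append(f"pct = {delta} / {budget} * 100 = {pct:.1f}%")
    return pct, trail

pct, trail = opex_variance_with_trail(actual=560_000, budget=500_000)
for step in trail:
    print(step)  # a reviewer can check each line against the source data
```

The point is not the code itself: an Excel file with visible formulas gives you the same trail. What matters is that the calculation can be replayed and checked.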


Stage 3: Standardize by Use Case (Tool, Model, Prompt Template)

Don’t tell your team to "use AI responsibly" but then give them no guidance on HOW to do this.

Instead, for each approved use case, you define:

  1. Which tool to use
  2. Which model/mode within that tool
  3. The prompt template (with variables)
  4. The verification checklist

Let me show you an example.

Use Case: Monthly Variance Commentary

Tool: ChatGPT
Model: GPT-5 (Thinking)

Prompt Template:

I need to draft variance commentary for our monthly management report.

Context:
  • Department: [insert department name]
  • Variance: [actual vs budget numbers]
  • Key drivers: [list 2-3 drivers you identified]
  • Business context: [any relevant context like new hires, project delays, etc.]

Please draft 3-4 bullet points explaining this variance in clear language for our executive team. Focus on:
  1. What happened (the numbers)
  2. Why it happened (the drivers)
  3. What we’re doing about it (if applicable)

Keep it factual and avoid speculation. Use a professional but conversational tone. When you don’t know, ask for more details. Use the attached file as your basis and reference your facts from it.

Verification Checklist:

  • [ ] Numbers cited match source data
  • [ ] Drivers mentioned are the ones I identified (not AI guessing)
  • [ ] Tone is appropriate for our leadership team
  • [ ] No made-up context or assumptions

With this, your analyst is not creating random prompts for every piece of work. If they work from a shared template, and then just complete the variables, the quality stays consistent.
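If you keep templates as text files, filling in the variables can even be scripted. A minimal Python sketch of the idea (the template text and field values here are shortened, hypothetical illustrations):

```python
# A shared prompt template with named variables (illustrative, shortened).
TEMPLATE = (
    "I need to draft variance commentary for our monthly management report.\n"
    "Department: {department}\n"
    "Variance: {variance}\n"
    "Key drivers: {drivers}\n"
)

def fill_template(**fields: str) -> str:
    """Fill the shared template; raises KeyError if a variable is missing."""
    return TEMPLATE.format(**fields)

prompt = fill_template(
    department="Marketing",
    variance="OpEx actual $560k vs budget $500k (+12%)",
    drivers="agency fees, delayed campaign launch",
)
print(prompt)
```

Whether the template lives in a script or a shared document, the mechanism is the same: the structure is fixed, and only the variables change.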

Do this for each approved use case.

Over time, you build a library. New team members can use proven prompts from day one.


Stage 4: Create Reusable Tools

Tool that will help: Copilot Agents

Agents extend what you can do with Microsoft 365 Copilot. You can customize your Copilot experience by connecting to your organization’s knowledge and data sources, as well as by automating and executing business processes. These AI-driven agents can perform a variety of tasks, working alongside you to offer suggestions, automate repetitive tasks, and provide insights to help you make informed decisions.

Find out more here.

Once you have proven prompts, the next level is to turn them into reusable tools.

Instead of copying and pasting a prompt template every time, you create a Custom GPT (or a Microsoft Copilot agent, Gemini Gem, or Claude Project/Skill) that already knows the context.

Example: "Variance Commentary Assistant" Custom GPT

You create a GPT with these instructions:

  • You are a finance analyst assistant specializing in variance commentary
  • Always ask for: department, actual vs budget numbers, identified drivers, business context
  • Draft 3-4 professional bullet points suitable for executive review
  • Never speculate on drivers not provided by the user
  • Always show your work (what numbers you're referencing)

Now your team just opens that GPT, answers the prompts, and gets consistent output every time.

When to do this:

If your team is using the same prompt more than twice a week, turn it into a custom GPT.

If it's a monthly task, the prompt template in a document is probably fine.


Stage 5: Full Automation (Optional Advanced Stage)

This is for teams that are ready to remove the manual work completely.

Instead of someone running a prompt every day/week/month, you automate the entire workflow.

Example: Automated Monthly Data Cleanup

Your ERP exports messy data every month. Someone has to clean it before analysis. With automation:

  1. ERP exports data to a shared folder (automatic)
  2. n8n workflow triggers when new file appears
  3. Python script cleans the data using your standardized rules
  4. Cleaned file lands in your analysis folder
  5. Slack message notifies the team it's ready

No human is needed until the data is ready to be analyzed (which also means less human error).
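Step 3 of that workflow (the Python cleanup) could look something like this minimal sketch. The column names and rules are hypothetical placeholders; your own standardized rules from Stage 3 would replace them:

```python
# Hypothetical standardized cleanup rules for a monthly ERP export.
def clean_rows(rows):
    """Apply the team's agreed cleanup rules to raw ERP rows."""
    cleaned = []
    for row in rows:
        dept = row["department"].strip().title()      # normalize names
        amount_text = row["amount"].replace(",", "")  # "1,200.50" -> "1200.50"
        if not amount_text:                           # drop rows with no amount
            continue
        cleaned.append({"department": dept, "amount": float(amount_text)})
    return cleaned

raw = [
    {"department": "  finance ", "amount": "1,200.50"},
    {"department": "sales", "amount": ""},
]
print(clean_rows(raw))
```

In the full workflow, the n8n trigger would run a script like this whenever a new export lands, then drop the cleaned file into your analysis folder.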

When to do this:

Only do this when you have a highly repeatable, well-defined task that happens frequently.

Do not automate something you have not standardized first. You'll just end up creating more mess.

Most teams don't need this stage immediately. But it's where you can go once standardization is working.

So that's how you build it. Now let me tell you what this really comes down to.



The Bottom Line

Building something crystal clear requires something most finance leaders struggle with.

Letting go of the idea that you can control AI usage through policies and approvals.

You cannot stop your team from using AI.

But, you can start encouraging your team to use it more openly.

Plus, starting today, you can define the best use cases so your team can grow together (without you having to check everything twice).

All it takes is:

  1. Team awareness
  2. A defined AI scope
  3. Standardization for team consistency
  4. Right-sized tools to make consistency easier
  5. Automation where justified to cut out manual work

The key to all of this:

Check the method.

Don’t redo the maths.



Your Move

Here's what I know from working with hundreds of finance pros over the past year.

The ones leading with AI didn't wait. They didn't spend six months building policy documents that nobody reads (or uses).

They trained their people on what good looks like.

So take 60 mins this week to sit with your team. Then ask them where they're already using AI, what is working, and what is not.

That conversation is your starting point.

Because your team is using AI whether you train them or not.

So let’s make sure they use it in a clear and transparent way.

Best,

Your AI Finance Expert,

Nicolas

P.S. - What did you think of this approach? Hit reply and let me know if you're planning to try this for your team (I read all replies).

P.P.S. - If you enjoyed this, you’ll love my recent YouTube video, “How to Train Your Finance Team on AI in 30 days (Full Guide)”

Watch it now!

Tips & Insights on Finance & AI

Join 270,000+ Professionals and receive the best insights about Finance & AI. More than 1 million people follow me on social media. Join us today and get 5 goodies from me!
