Hello hello! Tell me, do you spend a lot of time reviewing your team's work because you're worried AI made it up?

How many times have you asked yourself these questions in the past month?

"Did they actually analyze this, or did they just ask ChatGPT and copy/paste the result into a document?"

"Are they missing AI hallucinations because they're working too fast and not paying close enough attention?"

This is what I'm seeing with finance teams right now. Your junior analyst used to deliver one piece of ad-hoc analysis per month. Now they deliver three, plus commentary, plus scenario analysis, plus a board-ready presentation.

This is amazing, yes? Just look at how much more productive your team is becoming.

Except now you spend your entire Friday trying to figure out which parts are accurate and which parts are AI-generated guesses that put your decisions at risk.

The problem is not that your team got lazy. The problem is that AI broke your review process. Your juniors can produce 10x more output now. But more output does not mean better work. And you are the one stuck sorting it all out.

The worst part? You cannot tell which is which until you're halfway through recreating the analysis yourself.

So here's the question nobody's asking: how do you review work when you don't know how it was created?

Today, I'm going to show you how to solve this. Not by stopping your team from using AI, or reviewing everything twice. But by using something I call The Crystal Method: a practical system that makes review faster, not slower.
AI didn't just make your team faster. It made them prolific.
You now have a lot more quantity, but are unsure of the quality.
Because the old review process doesn't work anymore.
You used to know how the work was created, and what questions to ask to ensure accuracy.
Now, you don't know if they spent two hours thinking through the analysis or two minutes prompting ChatGPT.
Someone on your team delivers a Google Doc titled "Revenue Forecast Analysis." You open it. It's 2,000 words of AI-generated commentary about how they could build a revenue model.
But there's no actual model. Just a document talking about the work for you to make another decision on.
It looks productive, but in reality nothing got done.
Because they did not build anything; they generated something (and there is a big difference).
Your team is using AI. But they're not documenting:
So when you review their work, you're starting from zero, with no way to check without redoing everything.
The promise was that AI would free up your time for strategic work.
Instead, you're spending more time on quality control than ever before, trying to unpick the AI black boxes your team creates faster every day.
This is not sustainable.
So here is what you should do.
Stop adding more review layers.
Stop trying to control whether people use AI.
Stop treating every piece of work like a ‘black box’ that needs full reconstruction.
Instead, build something crystal clear.
And no, this is not another governance framework or 47-page policy document that never gets used. It's a practical system that answers three questions for every piece of work:
When your team knows how to document this, review gets faster, not slower.
Because, instead of checking everything, you are checking only the right things.
Shift 1: From Detection to Transparency
What is most important here: this is a mindset change more than anything else.
Instead of trying to detect AI usage, you make it clear to your team that they should be open about it.
The question is not whether they used ChatGPT or not.
The question is: “Did you use it to build something auditable, or did you just copy-paste an answer you cannot explain?”
Shift 2: From Inconsistent to Consistent
Right now, everyone on your team is using AI differently: different tools, different prompts, and different levels of verification (if any at all).
With standardization, you define: "For variance analysis, we use this tool, this prompt template, plus this verification checklist."
When your team follows the same process, quality becomes consistent, and review becomes easier.
Shift 3: From Review-All to Spot-Check
When you know how work was created, you don't need to recreate it.
You verify the critical data.
You spot-check numbers.
You check the logic.
But you're not starting from zero every time.
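To make spot-checking concrete, here is a minimal sketch in Python (my own illustration, not a tool from the method itself): it pulls every number out of a piece of AI-generated commentary and flags any figure that does not appear in your source data. Those flagged figures are your potential hallucinations.

```python
import re

def spot_check_numbers(commentary: str, source_figures: set) -> list:
    """Return every number in the AI commentary that does NOT appear
    in the source data. Each one is a potential hallucination."""
    # Pull out numbers like 1,250.5 or 87 (thousands separators allowed).
    found = re.findall(r"\d[\d,]*\.?\d*", commentary)
    numbers = [float(n.replace(",", "")) for n in found]
    return [n for n in numbers if n not in source_figures]

commentary = "Revenue grew by 1,250.5 versus a plan of 1,100, a 13.7% variance."
source = {1250.5, 1100.0, 13.7}
unverified = spot_check_numbers(commentary, source)
```

If `unverified` comes back empty, every figure in the commentary traces back to your data; if not, you know exactly which numbers to chase before sign-off.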
For this to work, you do not need prompt engineers. You do not need six months of expensive AI consulting. You just need your team to understand what a good AI method looks like and how to document it.
This is the Crystal Method.
And in the next section, I'll show you exactly how to build this in your team.
Here's the roadmap I recommend to the finance teams I train. Five stages. Each one builds on the last, and anyone can do this, right now.
Sit down with your team and openly discuss where AI actually helps and where it creates problems.
Don't start with "What can AI do?" Start with "What takes us too much time, and what must be 100% accurate?"
Then apply these rules.
(A deterministic approach to accuracy means that a model or system, given the same initial data and assumptions, will always produce exactly the same outcome.)
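A quick Python illustration of that definition (the numbers are made up): a formula is deterministic, so running it twice with identical inputs must give identical outputs. An LLM writing free text gives you no such guarantee.

```python
def variance(actual: float, budget: float) -> tuple:
    """Deterministic: the same inputs always produce the same
    absolute and percentage variance, so the result is auditable."""
    abs_var = actual - budget
    pct_var = abs_var / budget * 100
    return abs_var, pct_var

# Run it twice with the same inputs: the outputs are identical.
run_1 = variance(1_250_000, 1_100_000)
run_2 = variance(1_250_000, 1_100_000)
```

That repeatability is the whole point: anyone can rerun the calculation and land on the same number.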
Let me show you what I mean:
Good AI Use Cases:
Personal example: In my AI Finance Club webinars, I always talk about the 88 sales tax exports I had to deal with for a compliance audit. ChatGPT helped me work through that data in hours (and fortunately I passed the audit and didn't miss my deadline).
Bad AI Use Cases:
Now that your team knows what is possible in theory, you need to decide what you will actually do.
This is where you get specific about your team's approved use cases.
What to decide:
Which three to five tasks will we standardize first? Don't try to solve everything at once. Pick the highest-value, most repeatable work.
Examples:
What are we explicitly NOT doing with AI? Be clear about the boundaries.
Examples:
Remember:
Use AI to generate, not calculate.
If it's calculated, it needs an audit trail (e.g., Python code or an Excel file with formulas).
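Here is a minimal sketch of what an audit trail can look like in code (the metric and figures are illustrative): every step of the calculation is recorded alongside the result, so a reviewer can trace any number back to its inputs instead of trusting it blindly.

```python
def gross_margin_with_trail(revenue: float, cogs: float):
    """Calculate gross margin % and record every intermediate step,
    so the reviewer never has to take a number on faith."""
    trail = []
    gross_profit = revenue - cogs
    trail.append(f"gross_profit = {revenue} - {cogs} = {gross_profit}")
    margin_pct = gross_profit / revenue * 100
    trail.append(f"margin_pct = {gross_profit} / {revenue} * 100 = {margin_pct}")
    return margin_pct, trail

margin, audit_trail = gross_margin_with_trail(500_000, 320_000)
```

The trail is the difference between "the margin is 36%" and "here is exactly how we got to 36%."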
Don't tell your team to "use AI responsibly" and then give them no guidance on HOW to do it.
Instead, for each approved use case, you define:
Let me show you an example.
Use Case: Monthly Variance Commentary
Tool: ChatGPT
Model: GPT-5 (Thinking)
Prompt Template:
Verification Checklist:
With this, your analyst is not inventing a new prompt for every piece of work. They work from a shared template and just fill in the variables, so the quality stays consistent.
Do this for each approved use case.
Over time, you build a library. New team members can use proven prompts from day one.
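In its simplest form, that library can be nothing more than approved templates with placeholders. Here is a Python sketch (the template text and variable names are illustrative, not the official ones):

```python
# A tiny shared prompt library: one approved template per use case,
# with {placeholders} the analyst fills in before sending to the AI.
PROMPT_LIBRARY = {
    "variance_commentary": (
        "You are a senior FP&A analyst. Write commentary on the "
        "{month} variance for {account}: actual {actual}, budget {budget}. "
        "Only reference the numbers provided. Flag anything you cannot verify."
    ),
}

def build_prompt(use_case: str, **variables) -> str:
    """Fill the approved template. No ad-hoc prompting."""
    return PROMPT_LIBRARY[use_case].format(**variables)

prompt = build_prompt(
    "variance_commentary",
    month="March", account="Travel", actual="12,400", budget="9,000",
)
```

The point of the design: the wording of the prompt is versioned and shared, and the only thing that changes from month to month is the data.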
Once you have proven prompts, the next level is to turn them into reusable tools.
Instead of copying and pasting a prompt template every time, you create a Custom GPT (or a Microsoft Copilot agent, a Gemini Gem, or a Claude Project/Skill) that already knows the context.
Example: "Variance Commentary Assistant" Custom GPT
You create a GPT with these instructions:
Now your team just opens that GPT, answers the prompts, and gets consistent output every time.
When to do this:
If your team is using the same prompt more than twice a week, turn it into a custom GPT.
If it's a monthly task, the prompt template in a document is probably fine.
This is for teams that are ready to remove the manual work completely.
Instead of someone running a prompt every day/week/month, you automate the entire workflow.
Example: Automated Monthly Data Cleanup
Your ERP exports messy data every month. Someone has to clean it before analysis. With automation:
No human is needed until the data is ready to be analyzed (which also means fewer human errors).
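What does that cleanup step actually look like? In practice, many teams have ChatGPT write a script like this once, then run it every month, which keeps the AI in the "generate" role while the script itself is the audit trail. A minimal sketch, assuming a typical messy export (the column names and formats are my assumptions, not a specific ERP's):

```python
import csv
import io

def clean_erp_export(raw_csv: str) -> list:
    """Clean a messy ERP export: trim stray whitespace, convert
    amounts like '1,250.50' or '(300)' (accounting-style negatives)
    to floats, and drop blank rows. Column names are illustrative."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        account = row["account"].strip()
        if not account:
            continue  # skip blank lines the export sometimes contains
        amount = row["amount"].strip().replace(",", "")
        if amount.startswith("(") and amount.endswith(")"):
            amount = "-" + amount[1:-1]  # (300) means -300
        rows.append({"account": account, "amount": float(amount)})
    return rows

raw = 'account,amount\nTravel ,"1,250.50"\n,0\nRefunds,(300)\n'
cleaned = clean_erp_export(raw)
```

Because the cleanup rules live in code, every transformation is visible and repeatable, which is exactly what a black-box AI paste job is not.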
When to do this:
Only do this when you have a highly repeatable, well-defined task that happens frequently.
Do not automate something you have not standardized first. You'll just end up creating more mess.
Most teams don't need this stage immediately. But it's where you can go once standardization is working.
So that's how you build it. Now let me tell you what this really comes down to.