How to keep your finance data safe using AI in 3 steps (plus safe list inside)


Sponsored by SAP

Staying up to date with SAP was always a big advantage for me in my career.

And learning how finance teams are using AI and data to lead decision making will put you ahead.

To make sure you don’t miss out on expert insights you can use in your work and to advance your career, join SAP’s finance webinar series now by clicking the button below:

Hello hello,

People ask me this all the time: “Nicolas, how do we keep our data safe using AI?”

And, you know what still shocks me, even now?

Every week I see finance teams using free AI tools that put their company data at risk.

When I give finance training, I see a lot of people still using the free version of ChatGPT. Or the wrong version of Copilot, the one meant for personal use, not business use.

With free AI tools, your data gets stored indefinitely unless you manually opt out. If you do not opt out, that information will be used to train future models.

Here is the problem with this. Most finance leaders are in "deny and hope" mode.

They know that their teams are using the wrong AI tools without approval (Shadow IT). They just don't want to deal with it. So, they pretend like this is not happening.

But now your sensitive financial data is sitting on servers that you do not control. Stuff like pre-announcement earnings, M&A models, and customer pricing. Or worse, personal employee data or customer information that could lead to GDPR fines.

And here is what’s super important.

This is not just a compliance risk.

This is an audit risk.

The UK's Financial Reporting Council (FRC) just published guidance on the use of AI during audits. The message is that auditors need to understand what AI tools you're using, and what controls exist around them.

So let me ask you. Do you know which AI tools your team is using right now? Do you know what data they're putting into those tools?

The good news is you don't need to ban AI.

You just need a practical governance framework. And that's what I'm going to show you today.


Gray Zone Paralysis

Here's what I see happening in most finance teams right now.

Leadership knows AI tools are being used. But nobody wants to be the person who creates the policy. So it sits in this gray zone where people ignore the problem.

Some companies try to ban everything: “We’re blocking ChatGPT.” Done.

But, here is the thing. Your team is still using it!

They just do it on their phones instead of their work laptops. Or they use Claude instead of ChatGPT. Or they call it "research assistance" and keep going.

You cannot block what you cannot see (this is called shadow IT).

Shadow IT is when your employees use software, apps, cloud services, or devices that are not approved by the business.

Other companies take the opposite approach: "Use whatever tools help you work faster." No guidance at all, and a high potential for risk.

Both approaches create the same risk.

Your experienced FP&A manager knows not to put sensitive data into free tools. She uses the paid versions with proper data controls.

But your new hire in accounts payable? He just learned that ChatGPT can reconcile your vendor statements faster. So he uploads the full AP aging report, including the vendor’s bank details and payment terms.

He's not trying to cause a data breach. He's just trying to close the month on time so he can go home on time.

And here's what makes this worse. Your team is stuck between productivity and compliance, with no clear instructions.

This week, I spoke to a CFO who said: "I almost feel like I'm paralyzed because of what I don't know and I don't want to go down the wrong path... this is one of those decisions where it could be quite consequential if you do it wrong."

And he is dead right. This problem will not go away if you do not make the right decisions now.

Your team needs to move faster. And AI tools will help them do this. But without clear guidance on what's safe versus what's risky, you're creating a big compliance gap.

And when your auditors ask about AI controls in your financial reporting process? "We told people not to use it" isn't a control. This is just hoping.

So how do we solve this?


Controlled Adoption

This is what you need to do: ‘Controlled AI adoption’.

A practical framework that protects your data while letting your team use the tools that make them productive.

Instead of trying to control whether people use AI, you control what data goes into which tools. And this is something you can actually enforce.

Here is the important thing to remember:

Not all AI tools are created equal when it comes to data protection.

The free version of ChatGPT stores your data indefinitely and uses it for training. But, the Enterprise version of ChatGPT with proper business agreements? This is different. Your data stays yours, gets deleted on your schedule, and never trains the model.

It is the same interface, but much lower risk.

So the framework is simple. Match your data sensitivity to the right plan.

The public information that is on your website already? It’s fine to use free tools.

But, the sensitive financial data or customer information? Premium tools only. Tools with SOC 2 certification, GDPR compliance, and contractual data protections.

Plus, anything that goes into a financial statement or audit workpaper? This requires human review before you can use it.

This is no longer a ‘nice to have’.

This is what your auditors will ask about in your next review.

So here's what I'm going to show you. A three-layer governance framework you can implement this month.

Let me walk you through each one.


3 Layers of Data Control

You are going to build three layers of control. Each layer protects you from different risks. And together, they create a system that you can explain to your auditors.



Layer 1: Vendor Controls (Who Gets on the List?)

First, you need to decide which AI tools you want to approve for sensitive data.

Here's what you're looking for:

  • SOC 2 Type II certification (proves they have data security controls and they've been audited)
  • GDPR compliance documentation (even if you're not in Europe, this shows they take data privacy seriously)
  • HIPAA compliance for healthcare organizations (required if you handle any patient data or medical billing)
  • A Business Associate Agreement or similar contract that specifies data retention and deletion rights, and prohibits using your data for model training

Tools that meet this standard today:

  • Microsoft Copilot M365 Licenses
  • ChatGPT Business and Enterprise Licenses
  • Gemini with Google Workspace Licenses
  • Claude Enterprise Licenses

Tools that don't make the cut for sensitive data:

  • Free versions of any AI chatbot
  • Consumer-grade AI tools without enterprise agreements
  • Browser extensions that send data to third-party servers

Create a simple approved tools list. Share it with your team. Update it quarterly.

Even better if you select one tool only. Then your team does not have to think.
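If you want to make that list easy to check (and easy to hand to IT), you can also keep it in a simple machine-readable form next to the written policy. Here is a minimal sketch in Python; the tool names mirror the list above, but the certification entries and review dates are illustrative placeholders you would replace with your own vendor documentation.

```python
# Minimal sketch of an approved-tools register.
# Certifications and review dates below are illustrative placeholders,
# not statements about any vendor's actual compliance status.

APPROVED_TOOLS = {
    "Microsoft Copilot (M365 license)": {
        "approved_for_sensitive_data": True,
        "certifications": ["SOC 2 Type II", "GDPR DPA"],
        "next_review": "Q3",
    },
    "ChatGPT Business / Enterprise": {
        "approved_for_sensitive_data": True,
        "certifications": ["SOC 2 Type II", "GDPR DPA"],
        "next_review": "Q3",
    },
    "Free AI chatbots (any vendor)": {
        "approved_for_sensitive_data": False,   # public data only
        "certifications": [],
        "next_review": "Q3",
    },
}

def allowed_for_sensitive_data(tool_name: str) -> bool:
    """Return True only if the tool is on the list and cleared for sensitive data."""
    entry = APPROVED_TOOLS.get(tool_name)
    return bool(entry and entry["approved_for_sensitive_data"])

print(allowed_for_sensitive_data("Free AI chatbots (any vendor)"))  # False
```

The point is not the code itself. It is that anyone on the team can answer "is this tool approved for this data?" with one lookup, and the quarterly update becomes a one-line change.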


Layer 2: Use Controls (What Data Goes Where?)

Now you need to define what can go into each tool tier.

I worked with a large enterprise group on exactly this problem. They built a three-tier data classification system that works like this:

Tier 1 - Public Data: Can use free AI tools. Examples: Published financial statements, press releases, public website content, general industry research.

Important - I do not recommend free tools. The only free tool that is acceptable is Microsoft Copilot when logged in with your Microsoft 365 account (look for the green security badge).

Tier 2 - Internal Data: Requires secure AI tools only. Examples: Budget forecasts, variance analysis, departmental spending, non-sensitive operational metrics.

Tier 3 - Confidential Data: Requires approved tools plus additional controls (such as data masking or aggregation before input). Examples: pre-announcement earnings, M&A models, customer contracts, employee compensation, anything with PII (personally identifiable information, e.g., email addresses, government IDs, HR records), and audit workpapers.

The key is to make this practical. Your team needs to be able to look at a piece of data and know which tier it falls into.

Here's what this looks like in practice:

  • "I want to analyze our monthly expense trends" → Tier 2 → Use approved enterprise tool
  • "I want to forecast Q1 revenue by customer" → Tier 3 → Use approved tool, but anonymize customer names first or aggregate to regional level
  • "I want to understand accounting treatment for leases" → Tier 1 → Free tool is fine, you're just researching IFRS 16


Layer 3: Output Controls (Trust But Verify)

How do you make sure your AI outputs are accurate before they matter?

This is where a lot of companies miss the point. They think "AI gives me the answer, I use it." This is the wrong mindset.

AI helps you build your analysis faster. But you still need human judgment before that analysis influences decisions or goes into official records.

Here's the control framework:

For any AI-generated output that will be used in:

  • Financial statements → Requires an FP&A manager or controller review and sign-off
  • Board materials → Requires a VP Finance or CFO review
  • Audit workpapers → Requires a preparer review, then a senior reviewer sign-off (same as manual work)
  • External communications → Requires the same legal and financial approval as your current process.

Coming back to the Financial Reporting Council's (FRC) guidance on this (UK regulator responsible for corporate reporting, audit, and governance): AI can be a tool in the audit or financial reporting process, but it doesn't replace professional judgment. Human accountability remains.

One practical way to enforce this: create an "AI-Assisted" flag in your workpapers. If you’ve used AI anywhere in your analysis, mark it. Then, the reviewer knows to pay extra attention to the methodology and spot-check calculations.
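If your workpaper index lives in a spreadsheet or a simple log, the flag can be just one more field plus a filter for the reviewer. Here is a minimal sketch, assuming a made-up workpaper log; the IDs, names, and statuses are illustrative.

```python
# Minimal sketch: an "AI-Assisted" flag on a workpaper log, plus a reviewer filter.
# Workpaper IDs, names, and statuses are illustrative placeholders.

workpapers = [
    {"id": "WP-101", "name": "Revenue variance bridge", "ai_assisted": True,  "reviewed": False},
    {"id": "WP-102", "name": "Lease schedule (IFRS 16)", "ai_assisted": False, "reviewed": True},
    {"id": "WP-103", "name": "AP aging reconciliation",  "ai_assisted": True,  "reviewed": True},
]

def needs_extra_review(papers):
    """Return AI-assisted workpapers that have not yet had a human sign-off."""
    return [wp for wp in papers if wp["ai_assisted"] and not wp["reviewed"]]

for wp in needs_extra_review(workpapers):
    print(f"{wp['id']}: {wp['name']} -> flag for methodology review and spot-check")
```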

The One-Page Summary for Your Audit Committee

Here's what you need to communicate:

Create a simple one-pager with four sections:

  1. What We're Protecting: List your Tier 3 data categories
  2. What Tools We're Using: Your approved vendor list with certifications
  3. What Controls Exist: Your three-layer framework in 3-4 bullet points
  4. Who's Accountable: Name the person responsible for maintaining the policy (usually Controller or VP Finance)

This document does two things. It shows you have a plan, and it gives your audit committee something concrete to review.

This isn't perfect. But it is a lot better than "ignore it and hope."


The Bottom Line

Your team is already using AI tools. The question isn't whether to allow it. The question is whether you're going to control it.

Most leaders are stuck in "deny and hope" mode. They know shadow IT is happening. They just don't want to deal with it. At the same time, sensitive financial data is flowing to servers they don't control, and auditors are starting to ask questions about AI governance in financial reporting.

The solution isn't a ban. It's controlled adoption.

Here's the three-layer framework summarized:

  1. Vendor controls: Approve the tools with proper certifications (SOC 2, GDPR, HIPAA)
  2. Use controls: Match your data sensitivity to your tool tier
  3. Output controls: Have a human review before the AI enters official records

This takes two hours to set up and one page to document.

Resource to get you started

For the complete AI tool security comparison that has taken me and my team months to create and update, join the 1,521 other members of my AI Finance Club and receive instant access.

Useful Links

  1. Financial Reporting Council’s (FRC) Guidance
  2. Anthropic Trust Centre
  3. Google Cloud Trust Centre
  4. OpenAI Trust Center
  5. Microsoft Service Trust Portal

Your Move

Start this week. Pick one area where you know your team is using AI tools. Maybe it's variance analysis. Maybe it's vendor reconciliation.

Audit what's happening. Then apply the three-layer framework to this one process.

You don't need to solve everything at once. You just need to stop pretending it's not happening.

Because here's what happens when you get this right.

Your team moves faster, your data stays protected, and your audit and compliance become much easier.


Best,

Your AI Finance Expert,

Nicolas

P.S. - What did you think of this approach? Hit reply and let me know if you're planning to try this for your team (I read all replies).

P.P.S. - Want to learn how to become an AI CFO?

Join our FREE masterclass where I'll show you how to go from a traditional CFO to an AI CFO with super practical use cases you can start using straight away.

Don't miss this 60-minute free masterclass with me.

Spaces are limited so click the link below now to reserve your place.

Disclaimer: The content in this newsletter is for general information only and does not constitute legal or professional advice. AI Finance Club GmbH and Nicolas Boucher make no guarantees as to the accuracy or completeness of any information, and you use it at your own risk. Laws and requirements vary by country, industry, and situation, so you should consult a qualified professional before acting on any information here. To the fullest extent permitted by law, we accept no liability for any loss or damage arising from use of this newsletter or its contents.

