Incident lesson // Shadow AI and OAuth sprawl

The breach was not the AI tool. It was the trust.

Everyone is nervous about staff pasting business data into AI tools. Fair. But the quieter problem is stranger, stickier and more dangerous: an employee clicks “Allow”, grants an app access to your systems, forgets about it, and leaves a permanent bridge behind.

The uncomfortable bit

Shadow AI is not just a chatbot problem.

The lazy version of the conversation is simple: stop people uploading sensitive data into random AI tools. That is not wrong. It is just incomplete.

The real issue starts when an AI app does not just receive data, but connects itself into your business. Google Workspace. Microsoft 365. Salesforce. GitHub. Slack. Your CRM. Your helpdesk. Your file store.

At that point the tool is no longer just another website. It becomes a trusted participant in your environment. It may be able to read files, inspect email, pull customer records, create reports, access dashboards or hold long-lived tokens.

That is the move. Not malware screaming through the network. Not a hoodie in a basement. Just a normal looking consent screen, a useful productivity promise, and a quiet transfer of trust.

In the Vercel incident reported by BleepingComputer, the key detail was that a Vercel employee had trialled an AI office suite from Context.ai and granted it OAuth access to their Google Workspace account. When Context.ai was later compromised, OAuth tokens stored in that third-party environment became a route into downstream customer accounts.

The detail business owners should care about is not the brand name. It is the pattern. A tool can be tested once, lightly used, forgotten, and still remain connected to systems that matter.

Four ghosts in the browser

The shadow stack your business may already be running.

This is what shadow AI looks like when you stop thinking about it as “staff using tools” and start thinking about it as unmanaged trust.

01
Shadow apps (unmanaged SaaS)

AI tools used for business work without approval, visibility or a clear owner.

02
Shadow tenants (control gap)

Approved tools used in unapproved spaces, personal accounts or stray workspaces.

03
Shadow extensions (browser surface)

Helpful browser add-ons that may see sensitive activity across tabs and pages.

04
Shadow integrations (delegated trust)

OAuth grants that let one app read or act inside another business platform.

Shadow apps: Apps employees sign up to with work details, often because the official route is too slow. The risk is not curiosity. The risk is no owner, no review and no offboarding.

The delegated trust path

1. Employee trials AI app (low friction)
Useful tool. Quick signup. Corporate login.

2. OAuth consent granted (trust bridge)
Read files. Access email. Pull identity data.

3. App is forgotten (stale access)
No contract. No review. No renewal conversation.

4. Third party is compromised (supply chain)
Tokens stored outside your environment become useful.

5. Attacker pivots downstream (impact)
Your data is accessed through permission you already approved.


Why business owners should care

This is a governance issue wearing a technical mask.

The board does not need to memorise every OAuth scope. But leadership does need to understand the business decision hidden inside the consent screen.

When someone connects an app into core business systems, they are making that vendor part of your trust chain. Their security becomes your dependency. Their staff, their tokens, their logs, their browser hygiene, their incident response.

That is fine when the vendor is known, approved, reviewed and monitored. It is not fine when it happened on a Tuesday afternoon because a trial tool promised to summarise meetings nicely.

Security risk is now procurement risk, workflow risk and reputation risk.

AI adoption is moving faster than normal approval processes. That is the business reality. Staff are not always trying to bypass security. Often they are trying to remove friction and get work done.

A
Productivity creates pressure.

When teams are under pressure, they will find the quickest tool that helps. Security has to meet them where work happens.

B
Consent screens hide business impact.

“Allow this app” can mean “let this vendor touch our customer data, internal files and privileged workflows.”

C
Forgotten access is still access.

The fact nobody remembers the tool does not mean the connection is gone.

The practical signals are there if you collect them.

This is not about banning every AI tool. It is about seeing where the trust relationships exist, deciding which ones are acceptable, and removing the ones that are not.

01
OAuth grants and scopes.

Audit which apps have access, who approved them, what scopes they hold, and whether the access is still needed.
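In practice, that audit can start as a script run over an exported grant list. A minimal sketch, assuming you have pulled each app's granted scopes into a list of records (the scope strings are real Google Workspace and Microsoft Graph scope names, but the record shape and app names here are hypothetical):

```python
# Flag OAuth grants holding high-risk scopes in an exported app inventory.
# Scope strings are real Google / Microsoft Graph scopes; record shape and
# app names are illustrative only.

HIGH_RISK_SCOPES = {
    "https://mail.google.com/",               # full Gmail access
    "https://www.googleapis.com/auth/drive",  # full Drive access
    "Mail.ReadWrite",                         # Graph: mailbox write
    "Files.ReadWrite.All",                    # Graph: all OneDrive/SharePoint files
    "Directory.Read.All",                     # Graph: tenant directory data
}

def flag_high_risk(grants):
    """Return the apps holding at least one high-risk scope."""
    flagged = []
    for grant in grants:
        risky = sorted(set(grant["scopes"]) & HIGH_RISK_SCOPES)
        if risky:
            flagged.append({"app": grant["app"], "risky_scopes": risky})
    return flagged

inventory = [
    {"app": "meeting-summariser", "scopes": ["https://mail.google.com/", "openid"]},
    {"app": "emoji-picker", "scopes": ["openid", "profile"]},
]
print(flag_high_risk(inventory))
# [{'app': 'meeting-summariser', 'risky_scopes': ['https://mail.google.com/']}]
```

The output is the shortlist worth taking into a review meeting: every app, only its dangerous permissions.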

02
Browser extension inventory.

Review AI extensions and other add-ons with page access, form access or broad host permissions.

03
SaaS-to-SaaS connections.

Look beyond Microsoft and Google. CRM, ticketing, automation, HR, marketing and finance tools all become part of the mesh.

04
Detection and response.

Monitor new consent events, unusual token usage, app-only access, risky sign-ins and unexpected data export behaviour.
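One cheap detection, sketched below on the assumption that you can export the tenant's connected-app list on a schedule: diff today's export against a stored baseline and alert on anything new.

```python
# Diff today's grant export against a stored baseline and surface new
# consents. The app IDs are illustrative; real exports come from your
# identity platform's audit or reporting interface.

def new_consent_events(baseline_app_ids, current_app_ids):
    """Return app IDs present in the current export but not the baseline."""
    return sorted(set(current_app_ids) - set(baseline_app_ids))

baseline = {"crm-sync", "backup-agent"}
today = {"crm-sync", "backup-agent", "ai-notetaker"}

for app_id in new_consent_events(baseline, today):
    # In practice: raise a ticket or alert for review, not just print.
    print(f"new OAuth consent detected: {app_id}")
```

Even this crude diff catches the "trialled on a Tuesday afternoon" case, because the new grant shows up the next time the comparison runs.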

Simple questions expose the gap quickly.

You do not need to start with a giant transformation programme. Start by asking questions that force the hidden estate into the open.

Who can approve new apps?

Can normal users grant OAuth access without admin review?

What is currently connected?

Can we list every third-party app connected to our core systems?

What has high-risk access?

Which integrations can read mail, files, identity, customer records, API keys or developer platforms?

Who removes access?

When a tool is no longer used, who revokes it, and how often is that checked?

Controls that actually reduce risk

Do not fight AI adoption blindfolded.

The goal is not to become the “no” department. The goal is to build a safer route for useful tools, and a clear block for anything that creates reckless trust.

Default-deny OAuth consent

Stop normal users from approving new high-risk integrations without admin review. This single change removes a lot of casual exposure.

Build an app inventory

Know which SaaS and AI tools are being used, who owns them, what data they touch and whether they are business approved.

Review scopes, not just names

A friendly app name means very little. Look at permissions. Mail, files, identity, CRM data and developer tokens deserve special attention.

Revoke stale integrations

Unused apps should not remain trusted forever. Set a routine review cycle and remove anything that has no owner or business purpose.
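That review cycle can be mechanised. A sketch, assuming each grant record carries an owner and a last-used timestamp (the field names are hypothetical, but most identity platforms expose equivalents):

```python
# Flag grants with no owner, or no recorded use within a review window.
# Field names ("owner", "last_used") are illustrative.
from datetime import datetime, timedelta, timezone

def stale_grants(grants, max_idle_days=90, now=None):
    """Return apps that have no owner or have been idle past the cutoff."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    flagged = []
    for grant in grants:
        if grant.get("owner") is None or grant["last_used"] < cutoff:
            flagged.append(grant["app"])
    return sorted(flagged)

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
grants = [
    {"app": "ai-notetaker", "owner": None,
     "last_used": datetime(2025, 5, 20, tzinfo=timezone.utc)},
    {"app": "crm-sync", "owner": "ops",
     "last_used": datetime(2025, 1, 2, tzinfo=timezone.utc)},
    {"app": "backup-agent", "owner": "it",
     "last_used": datetime(2025, 5, 30, tzinfo=timezone.utc)},
]
print(stale_grants(grants, now=now))
# ['ai-notetaker', 'crm-sync']
```

Anything the script flags either gets an owner and a business purpose, or gets revoked.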

Manage browser extensions

AI extensions and productivity add-ons can sit close to the session layer. Inventory them, restrict them and block risky permissions.

Monitor consent events

Alert on new OAuth grants, unusual token usage, risky sign-ins and unexpected API access. Detection matters because prevention will never be perfect.

Two-minute business self-check

How exposed is your delegated trust layer?

Ask which of the controls above you genuinely have in place. This is not a compliance exercise. It is a quick reality check.

Once enough signals look normal, people stop looking for the one signal that is not.

The leadership takeaway

This is where the security conversation needs to mature. The question is not “are staff using AI?” They are, or they will. The better question is:

Which AI tools have become trusted parts of our business systems without anyone making a deliberate business decision?

That question usually changes the room. Because now we are not talking about hype. We are talking about access, assurance, supplier risk and operational control.

Common objections

Good questions. Slightly uncomfortable answers.

Should we ban AI tools completely?

Usually, no. Blanket bans tend to drive usage underground. A safer approach is to approve useful tools, restrict dangerous access, educate staff and monitor what is actually happening.

Is this only a big company problem?

No. Smaller businesses often have less internal visibility, fewer formal approvals and more reliance on cloud tools. That can make OAuth sprawl easier to miss.

Is MFA enough?

MFA is important, but OAuth tokens and app permissions sit outside the “logging in” mental model most people carry. Once access is granted, the app may act through approved permissions until that access is revoked.

What is the first thing to check?

Start with your main identity and productivity platform. Review third-party app access in Google Workspace or Microsoft 365, identify high-risk permissions, and disable user consent for risky integrations unless reviewed.
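For Microsoft 365, the tenant's delegated grants are available from Microsoft Graph via the real `GET /v1.0/oauth2PermissionGrants` endpoint. A sketch (token acquisition is left out; the caller needs an access token with directory read permission, and the sample record is trimmed for illustration):

```python
# List delegated OAuth grants from Microsoft Graph and reduce each one
# to review-relevant fields. Token acquisition is out of scope here.
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def list_delegated_grants(access_token):
    """Fetch the tenant's delegated OAuth grants from Microsoft Graph."""
    req = urllib.request.Request(
        f"{GRAPH}/oauth2PermissionGrants",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["value"]

def summarise(grants):
    """Keep the fields a review needs: client app, consent type
    (AllPrincipals = granted tenant-wide), and the scope list."""
    return [
        {
            "client": g["clientId"],
            "consent": g["consentType"],
            "scopes": g.get("scope", "").split(),
        }
        for g in grants
    ]

# Shape of a Graph response record (trimmed):
sample = [{"clientId": "0000-aaaa", "consentType": "AllPrincipals",
           "scope": "Mail.Read Files.Read.All"}]
print(summarise(sample))
```

Google Workspace has an equivalent view under the Admin console's API controls, and the same summarise-then-review pattern applies.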

313SEC support

We help businesses find the side doors before someone else walks through them.

313SEC can review your SaaS and AI exposure, map risky OAuth grants, check browser extension risk, look at identity controls and turn the findings into a practical action plan. No theatre. No fear deck. Just the parts that matter and the order to fix them in.