On 19 April 2026, Vercel disclosed a security incident. Within 48 hours, the public attack chain had resolved into something more interesting than the initial "cloud platform breach" framing: an infostealer infection at one AI startup, an OAuth token exfiltrated through a consumer productivity tool, a Google Workspace compromise at a second company, and — potentially — blast radius across hundreds of organisations, none of which had ever heard of the initial victim.
This post walks through what actually happened, who is downstream of whom, and why it's a textbook argument for running AI inference on infrastructure you actually control.
The attack chain
Here's the sequence as it's currently understood from Vercel's bulletin, Context's security statement, Guillermo Rauch's follow-up comments, and Hudson Rock's infostealer analysis:
- Infostealer infection on a Context.ai machine. Hudson Rock identified credentials in infostealer logs with direct access to Context's own Vercel project administrative endpoints, including the environment variable management page for context-inc/valinor.
- Attacker accesses Context's AWS environment. Context detected and stopped the intrusion "last month" (mid-March 2026), engaged CrowdStrike, notified a single customer they initially identified as impacted, and shut down the entire consumer AWS environment.
- OAuth tokens for Context consumer users exfiltrated. This part was not identified in the original investigation. Context only learned about it on 19 April, based on information provided by Vercel. The tokens were taken before the AWS environment was shut down.
- Attacker uses a Context consumer OAuth token to access a Vercel employee's Google Workspace. The employee had signed up for Context's consumer AI Office Suite using their Vercel enterprise email and granted "Allow All" permissions to the integration.
- Pivot from Workspace into Vercel internal environments. Non-sensitive environment variables were enumerated. Environment variables marked "sensitive" remained encrypted at rest and are not believed to have been accessed.
- Someone on BreachForums, claiming to be ShinyHunters, lists the data for sale at $2M. Actors linked to recent ShinyHunters attacks have denied involvement to BleepingComputer, which is consistent with the broader identity contest around that brand since the October 2025 FBI seizure of their original forum infrastructure.
The indicator of compromise Vercel published — OAuth Client ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com — is the hunt artefact any Google Workspace administrator should run through their admin console today, whether or not they think they use Context.
The part that matters for everyone else
Vercel got the headline because Vercel disclosed first and has a louder brand. The underlying incident is bigger than Vercel.
Context's own statement acknowledges the OAuth token compromise "potentially" affected hundreds of users across many organisations. Every single one of those users granted "Allow All" or similar scopes to a consumer AI tool using their work Google account. Every single one of those Google Workspaces should be treated as compromised until proven otherwise. Context has since killed the consumer product entirely, which handles future risk but does nothing to recover tokens already exfiltrated.
This is the shape of modern supply chain risk. The downstream victims don't know they're downstream. The Vercel employee didn't know Context's AWS environment had been popped. Context's other consumer users didn't know their tokens had been taken until a peer victim's investigation surfaced it. Vercel's customers — Web3 projects with wallet API keys, financial frontends, SaaS dashboards — didn't know their hosting provider had an OAuth governance gap. The IR investigation required three companies to compare notes before the full picture emerged, and it took four weeks.
The framing problem
There's a quiet narrative fight going on in the public comms from both sides.
Vercel's CEO described the entry point as "an employee was compromised through the Context.ai AI platform being breached." That phrasing locates the failure outside Vercel: a third-party platform was breached, and one employee's account happened to be the way in.
Context's statement frames it differently: "Vercel is not a Context customer, but it appears at least one Vercel employee signed up for the AI Office Suite using their Vercel enterprise account and granted 'Allow All' permissions. Vercel's internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel's enterprise Google Workspace."
Both framings contain truth. Neither is complete. The honest version is that an employee at a tech company wanted AI assistance for building presentations and documents, signed up for a free consumer tool that advertised exactly that, and used their work Google account because that's the account they work in. The organisation's OAuth governance was permissive enough to allow it. The tool was in turn running on a consumer-grade architecture that got popped. None of these individual decisions are extraordinary; they're the default behaviour at every tech company running Google Workspace in 2026.
The structural failure is that the employee had no sanctioned internal AI option sitting on infrastructure the organisation controlled. Given the choice between a productivity gain and a policy violation, employees pick the productivity gain. They always have. Shadow IT became shadow AI almost without friction because the barrier to entry shrank from "download an app" to "sign in with Google to a web tool."
Where sovereign AI actually matters
It's easy to treat "sovereignty" as a compliance buzzword. The Vercel-Context-Workspace chain is the argument for why it's a structural security property.
Consider the same incident in a sovereign-inference scenario. An engineer at a hypothetical financial services firm wants AI assistance building a quarterly report. Instead of signing up for a consumer AI tool with their corporate email, they open a sanctioned internal application that runs inference against models hosted on the firm's own GPU cluster. No OAuth token is issued to an external party. No Google Workspace scope is granted outside the firm's identity provider. No prompts, documents, or generated content leave the firm's network boundary. If the model drifts, the firm's own MLOps team catches it. If the employee's endpoint is compromised, the blast radius is contained to that endpoint and whatever systems the firm itself owns.
The attack surface does not disappear. Phishing still exists. Infostealers still exist. Insider risk still exists. What disappears is the set of trust relationships to external parties who have no contractual obligation to protect you and whose security posture you have no visibility into. You trade commercial SaaS convenience for operational control, which is a trade-off that most organisations have deferred for a decade because the SaaS model was cheaper on day one.
The bill for that deferral is now arriving in the form of incidents like this one, and it's being paid by customers who had no say in the architecture that produced the exposure.
The DORA and NIS2 dimension
For anyone subject to DORA or NIS2 — which is most financial services, critical infrastructure, and digital service providers in the EU and UK — the Vercel-Context chain is a regulatory problem as well as a security one.
DORA's ICT third-party risk management provisions require firms to identify, assess, and manage the risks arising from their use of ICT third-party service providers. The regulation does not distinguish between "SaaS you procured" and "SaaS your employees signed up for with corporate credentials." If an employee's shadow AI usage creates a pathway for a regulator-relevant incident, the firm is answerable for it. "We didn't know they were using it" is not a defence under DORA's accountability framework, and it's an even weaker defence under NIS2 where senior management liability is explicit.
The practical implication is that every regulated firm now needs a credible answer to two questions:
- What is your inventory of OAuth grants against corporate Google Workspace and Microsoft 365 tenants?
- What is your sanctioned internal AI option that employees can use without reaching outside the boundary?
The first question is a detection and governance problem that can be solved with existing tooling. The second is an architectural problem that most firms have not yet solved, because "buy a SaaS AI product" has been the default answer for two years and is exactly what created the Vercel-Context attack surface.
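The inventory side of that first question can be fed from an export of OAuth token grants (the Google Admin SDK Directory API exposes these per user via tokens.list). A minimal sketch of the aggregation step, assuming the grants have already been exported to a list of dicts; the field names (`clientId`, `displayText`, `scopes`, `userKey`) mirror that API's token resource but should be treated as an assumption here:

```python
from collections import defaultdict

def inventory_oauth_grants(token_records):
    """Group exported OAuth token grants by client ID.

    token_records: list of dicts, one per grant, with assumed fields
    clientId, displayText, scopes (list), and userKey (who granted it).
    Returns {client_id: {"app": ..., "users": set, "scopes": set}}.
    """
    inventory = defaultdict(lambda: {"app": "", "users": set(), "scopes": set()})
    for rec in token_records:
        entry = inventory[rec["clientId"]]
        entry["app"] = rec.get("displayText", "")
        entry["users"].add(rec["userKey"])
        entry["scopes"].update(rec.get("scopes", []))
    return dict(inventory)

# Illustrative "broad" scopes -- tune to your own risk appetite.
RISKY_SCOPES = ("https://mail.google.com/",
                "https://www.googleapis.com/auth/drive")

def broad_grants(inventory, risky_scopes=RISKY_SCOPES):
    """Return only the apps whose granted scopes include a risky scope."""
    return {cid: entry for cid, entry in inventory.items()
            if any(s in entry["scopes"] for s in risky_scopes)}
```

The output of `broad_grants` is the review queue: every entry is a consumer app that could do to your tenant what Context's compromised tokens did to Vercel's.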
What we're building at Adverse Trace
This is the context in which we've been building dAIffed — a sovereign AI platform for DFIR, OSINT, and compliance work, running on on-premises GB10 hardware with a PostgreSQL audit trail that the operating firm controls end to end.
The argument is not that sovereignty is a perfect defence. Nothing is. The argument is that when an incident happens, you want to be able to answer three questions without picking up the phone to a vendor's PR team: what was accessed, what was exfiltrated, and what remains trustworthy. Doing that requires the evidence to live on infrastructure you actually own, with logs you wrote, in a jurisdiction you chose. That is a structurally different posture to the one Vercel's customers find themselves in this week, where the investigation depends on three separate companies agreeing on a timeline.
For EMEA financial services firms working under DORA, for legal teams that need defensible evidence chains, for any organisation whose breach notification story needs to stand up in front of a regulator — the sovereign model is not about ideology. It's about being able to say, truthfully, that you are the one who controls the parts of the system that matter.
Recommended actions
If you operate a Google Workspace tenant — regardless of whether you're a Vercel customer, a Context customer, or neither:
Run the OAuth app inventory check in your Google Admin Console (Security > Access and Data Control > API Controls > Manage Third-Party App Access) and search for client ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com. If present, revoke immediately and treat all accounts that granted the app as compromised.
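The same hunt can be run programmatically across an exported grant list rather than clicking through the console. A sketch, assuming token grants have been exported to dicts with `clientId` and `userKey` fields (an assumed shape, matching a tokens.list export):

```python
# The IoC Vercel published for this incident.
IOC_CLIENT_ID = ("110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
                 ".apps.googleusercontent.com")

def affected_users(token_records, ioc=IOC_CLIENT_ID):
    """Return the sorted set of users who granted the compromised client.

    Every user returned should be treated as compromised: revoke the
    grant, invalidate sessions, and review their Workspace audit logs.
    """
    return sorted({rec["userKey"] for rec in token_records
                   if rec.get("clientId") == ioc})
```

An empty result from one export is not an all-clear on its own; grants revoked by users before the export was taken will not appear, so pair this with the admin audit log.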
Audit OAuth grant policies. If individual users can grant "Allow All" permissions to consumer apps without admin approval, that's the configuration gap that made this incident possible at Vercel. Move to admin-approval-required for anything above minimal scopes.
Review your shadow AI exposure. Expense reports, browser history on managed endpoints, SSO logs, and DNS logs will tell you which AI tools your employees are actually using. The answer is usually more than the list your CISO thinks it is.
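A single pass over resolver logs surfaces most of that shadow usage. A sketch, assuming DNS logs reduced to one queried name per line; the watchlist below is illustrative, not exhaustive:

```python
from collections import Counter

# Illustrative watchlist of consumer AI tool domains -- extend it with
# whatever your own telemetry and expense reports turn up.
AI_DOMAINS = ("chatgpt.com", "claude.ai", "gemini.google.com",
              "perplexity.ai", "character.ai")

def shadow_ai_hits(dns_log_lines, watchlist=AI_DOMAINS):
    """Count DNS lookups whose queried name falls under a watched domain."""
    hits = Counter()
    for line in dns_log_lines:
        qname = line.strip().lower().rstrip(".")
        for domain in watchlist:
            # Match the domain itself and any subdomain of it.
            if qname == domain or qname.endswith("." + domain):
                hits[domain] += 1
    return hits
```

The point is not to block these domains reflexively; it is to know the real size of the gap between sanctioned tooling and actual behaviour before an incident forces the question.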
If you're a Vercel customer, rotate every environment variable that was not marked "sensitive." Check deployment and audit logs for anomalous activity between the Context compromise window (mid-March 2026) and now.
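The triage step for that rotation can be scripted against an export of a project's environment variables. A sketch, assuming records with `key` and `type` fields where `"sensitive"` marks the encrypted-at-rest class described in Vercel's bulletin; treat the record shape as an assumption about your export, not a documented schema:

```python
def rotation_queue(env_records):
    """Split env vars into rotate-now vs lower-priority buckets.

    env_records: list of dicts with assumed fields key and type.
    Anything not marked "sensitive" was enumerable in this incident,
    so it goes to the front of the rotation queue; "sensitive" vars
    stayed encrypted and can follow on a normal schedule.
    """
    rotate_now = [r["key"] for r in env_records
                  if r.get("type") != "sensitive"]
    lower_priority = [r["key"] for r in env_records
                      if r.get("type") == "sensitive"]
    return rotate_now, lower_priority
```

Rotating the "sensitive" tier anyway is cheap insurance if the keys are low-friction to reissue; the split only orders the work.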
If you're a regulated firm under DORA or NIS2, document all of the above. The audit trail for how you responded to this incident is itself a compliance artefact.
Written in the week of the disclosure, while details are still emerging. Timeline, scope, and attribution may evolve as Mandiant's investigation continues. Corrections welcome.