Vercel 2026 Breach: One Employee, One Extension, an Entire Company's Infrastructure

April 2026. A single Vercel employee was using an AI productivity tool — Context.ai — to optimize their workflow. Nothing unusual. Yet that tool, compromised a month earlier, became the entry point for an attack that exposed environment variables from Vercel projects, drew the attention of the ShinyHunters group, and resulted in a $2 million ransom demand. An invisible chain of dependencies, overly generous OAuth scopes, and a few carelessly stored tokens — that's all it took.
Attack Timeline: From Context.ai to ShinyHunters
- March 2026 — Attackers compromise Context.ai's infrastructure, a third-party AI tool used by dozens of development teams. The breach initially goes unnoticed.
- Week 1-2 of April — From Context.ai's database, attackers extract OAuth tokens stored in the platform's AWS infrastructure. Among them, a Google Workspace token belonging to a Vercel employee with elevated permissions.
- Week 2 of April — Using the Google Workspace token, attackers access the employee's account and pivot toward connected integrations, including Vercel access via SSO.
- Mid-April — Once inside the Vercel account, attackers iterate through accessible projects and read environment variables not explicitly marked as 'sensitive' — these lack the same level of encryption protection in the UI.
- Late April — ShinyHunters publicly claims the attack on underground forums, publishes a data sample, and demands $2 million for the deletion of exfiltrated data.
Technical Anatomy of the Breach: What the Attack Chain Looks Like
The breach wasn't a sophisticated zero-day attack. It was a chain of poor architectural decisions, methodically exploited. Context.ai, like any AI productivity tool, requested a set of Google OAuth scopes during authentication — calendar access, Gmail, Drive, sometimes Google Workspace Admin depending on the chosen plan. These tokens, after authentication, were stored server-side in the platform's AWS database to enable asynchronous features like automated summaries and scheduled actions. The problem: OAuth refresh tokens, if not individually encrypted or frequently rotated, remain valid for long periods. When Context.ai's AWS was compromised — either through a misconfigured S3 bucket or an application vulnerability — attackers obtained a dump of active tokens. Each token was, in essence, a digital key to a real user's account — no password, no 2FA required.
The pivot to Vercel was possible because the employee used Google SSO to authenticate into Vercel. Valid Google token = valid Vercel session. From that point, the attacker navigated the Vercel dashboard as a legitimate user. Vercel stores environment variables in two categories: those marked as 'sensitive' (end-to-end encrypted, invisible after saving) and standard ones (visible in the UI for members with access). Standard variables — API keys, connection strings, feature flags — were directly accessible. A simple script iterating through projects and calling the Vercel API with the session token was enough to exfiltrate hundreds of variables in minutes.
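The loop described above needs very little code. A minimal sketch of the kind of enumeration involved — endpoint paths are modeled on Vercel's public REST API but should be treated as assumptions here, and the fetch function is injected so the logic can be shown without a live network:

```javascript
// Enumerate projects and collect env vars not marked as sensitive.
// `fetchJson(url, headers)` is a stand-in for an HTTP client; the /v9
// endpoint paths and the 'sensitive' type value are assumptions based on
// Vercel's public API, not confirmed details of the incident.
async function dumpEnvVars(token, fetchJson) {
  const headers = { Authorization: `Bearer ${token}` };
  const { projects } = await fetchJson('https://api.vercel.com/v9/projects', headers);
  const result = {};
  for (const project of projects) {
    const { envs } = await fetchJson(
      `https://api.vercel.com/v9/projects/${project.id}/env`,
      headers
    );
    // Sensitive-marked variables come back without readable values;
    // everything else is directly usable.
    result[project.name] = envs.filter((e) => e.type !== 'sensitive');
  }
  return result;
}
```

A dozen lines, one session token, and every standard variable in every accessible project is collected.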
Why OAuth Isn't the Culprit — And Why the Distinction Matters
The first reaction of many developers to this type of incident is to blame OAuth. Wrong. The OAuth 2.0 protocol worked exactly as designed: it issued a token with the requested scopes, and the token was used to access authorized resources. No protocol vulnerability. The real problem has three distinct components: scope management by the third-party application (Context.ai requested more than necessary — the principle of least privilege violated), token storage by the vendor (refresh tokens should be individually encrypted with per-user derived keys, not stored in plaintext or with a single master key), and the absence of automatic revocation (a token unused for weeks should be automatically invalidated). As a developer integrating OAuth into your own applications, the lesson isn't to avoid OAuth — it's to understand that protocol security doesn't replace implementation security.
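The automatic-revocation point is easy to make concrete. A minimal sketch of an idle-token sweep, assuming you record a last-used timestamp per refresh token (the 30-day window and the token shape are illustrative policy choices, not details from the incident):

```javascript
// Idle-token sweep: find refresh tokens that have not been used recently
// and should be revoked. The 30-day window is an illustrative default.
const MAX_IDLE_MS = 30 * 24 * 60 * 60 * 1000;

function findStaleTokens(tokens, now = Date.now()) {
  return tokens
    .filter((t) => now - t.lastUsedAt > MAX_IDLE_MS)
    .map((t) => t.id);
}

const day = 24 * 60 * 60 * 1000;
const now = Date.now();
console.log(findStaleTokens([
  { id: 'tok_a', lastUsedAt: now - 2 * day },  // active, keep
  { id: 'tok_b', lastUsedAt: now - 45 * day }, // idle 45 days, revoke
], now)); // [ 'tok_b' ]
```

Run on a schedule and fed into the provider's revocation endpoint, a sweep like this would have invalidated most of the tokens sitting in Context.ai's database before they could be used.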
8 Vectors Through Which You Can Lose Your OAuth Token
- XSS (Cross-Site Scripting)
If you store the token in localStorage and your application has an XSS vulnerability, an attacker can exfiltrate the token with a single line of JavaScript. Vulnerable code: localStorage.setItem('token', accessToken) followed by any unescaped input rendered in the DOM. Safe code: store the token in an HttpOnly, SameSite=Strict, Secure cookie — inaccessible from JavaScript. Alternatively, use a BFF (Backend for Frontend) that maintains the token server-side and issues its own session cookies.
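The BFF variant can be sketched in a few lines: the browser never sees the OAuth token, only an opaque session id in a cookie JavaScript cannot read. The cookie name and helper are illustrative:

```javascript
// BFF pattern sketch: the server keeps the OAuth token server-side and
// issues the browser only an opaque session id in an HttpOnly cookie.
// The 'sid' name and one-hour lifetime are illustrative choices.
function buildSessionCookie(sessionId) {
  return [
    `sid=${encodeURIComponent(sessionId)}`,
    'HttpOnly',        // invisible to document.cookie, so XSS cannot read it
    'Secure',          // sent over HTTPS only
    'SameSite=Strict', // not attached to cross-site requests
    'Path=/',
    `Max-Age=${60 * 60}`,
  ].join('; ');
}

console.log(buildSessionCookie('abc123'));
// sid=abc123; HttpOnly; Secure; SameSite=Strict; Path=/; Max-Age=3600
```

Even a successful XSS payload can then at worst ride the existing session — it can never exfiltrate the token itself.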
- Server Compromise
Exactly the Context.ai scenario. Tokens stored in the database are exposed if the server is compromised. Vulnerable code: db.tokens.insert({ userId, refreshToken }) — token in plaintext. Safe code: individually encrypt each token before storage using a key derived from the user's secret — const encrypted = encrypt(refreshToken, deriveKey(userSecret, userId)). Even if the database is exfiltrated, individual tokens remain unusable without the per-user keys.
- MITM (Man-in-the-Middle)
A MITM attack on an unprotected connection can intercept the token from the Authorization header. Vulnerable code: any request over plain HTTP that includes an Authorization: Bearer header. Safe code: enforced HTTPS everywhere, HSTS with preload, Certificate Pinning in mobile apps. At the server level: Strict-Transport-Security: max-age=63072000; includeSubDomains; preload. Remember: HTTP→HTTPS redirects aren't sufficient — the first request can be intercepted.
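As Express-style middleware, the server-level part is one header. A minimal sketch, using the max-age value quoted above (the middleware shape is illustrative):

```javascript
// HSTS sketch: attach Strict-Transport-Security to every response so
// browsers refuse plain HTTP for this origin for two years.
const HSTS_VALUE = 'max-age=63072000; includeSubDomains; preload';

function hsts(req, res, next) {
  res.setHeader('Strict-Transport-Security', HSTS_VALUE);
  next();
}

// Minimal stand-in response object to show the effect without a server:
const headers = {};
const res = { setHeader: (k, v) => { headers[k] = v; } };
hsts({}, res, () => {});
console.log(headers['Strict-Transport-Security']);
// max-age=63072000; includeSubDomains; preload
```

The preload directive only takes full effect once the domain is submitted to the browser preload list; until then, the very first visit is still the weak point.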
- CSRF (Cross-Site Request Forgery)
Most relevant in the OAuth authorization flow: an attacker can inject a malicious authorization code if the state parameter isn't validated. Vulnerable code: ignoring the state parameter in the callback or generating it predictably (state=userId). Safe code: generate a cryptographically random state (crypto.randomBytes(32).toString('hex')), store it server-side in the session, and strictly validate it at callback. No exact match → reject the request, regardless of other parameters.
- Malware and Browser Extensions
Browser extensions with broad permissions can read cookies or localStorage from any tab. Local malware can dump tokens from browser memory. Mitigation: least-privilege principle for installed extensions, periodic audit of extensions with 'all sites' access, using separate work browsers from personal ones, and EDR (Endpoint Detection and Response) on machines accessing critical systems. At the application level: token binding and DPoP (Demonstrating Proof of Possession) tie the token to the specific client — even if stolen, it can't be used from a different context.
- Device Compromise
A compromised device means all tokens stored on it — in the keychain, browser profile, or config files — are exposed. Vulnerable code: storing tokens in plaintext files (~/.config/app/token) or in environment variables persisted in shell history. Safe code: use the system keychain (Keychain on macOS, Credential Manager on Windows, libsecret on Linux) for local secret storage. At the organizational level: MDM (Mobile Device Management), enforced disk encryption, and remote wipe capabilities for compromised devices.
- OAuth Phishing (Consent Phishing)
The attacker creates a malicious OAuth application with a convincing name ('Vercel Analytics Pro', 'GitHub Backup Tool') and persuades the user to grant consent. The app receives real tokens with the requested scopes. Detection: periodically audit authorized applications in your organization's Google/GitHub/Azure AD account. At the policy level: in Google Workspace Admin, you can restrict which third-party apps can request OAuth access (Admin Console → Security → API Controls → App Access Control). Explicitly whitelist approved apps, block the rest.
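The audit step can be partly automated. A minimal triage sketch that flags authorized apps holding scopes outside an approved allowlist — the app-list shape is illustrative, and the scope strings are modeled on Google's OAuth scope URLs:

```javascript
// Consent-phishing triage sketch: flag third-party apps whose granted
// scopes fall outside an approved allowlist. Extend APPROVED_SCOPES to
// match what your organization has actually reviewed.
const APPROVED_SCOPES = new Set([
  'https://www.googleapis.com/auth/calendar.readonly',
  'https://www.googleapis.com/auth/userinfo.email',
]);

function flagSuspiciousApps(apps) {
  return apps
    .filter((app) => app.scopes.some((scope) => !APPROVED_SCOPES.has(scope)))
    .map((app) => app.name);
}

console.log(flagSuspiciousApps([
  { name: 'Team Calendar Sync', scopes: ['https://www.googleapis.com/auth/calendar.readonly'] },
  { name: 'Vercel Analytics Pro', scopes: ['https://mail.google.com/'] }, // full Gmail access
]));
// [ 'Vercel Analytics Pro' ]
```

Fed from the admin API's list of authorized apps and run on a schedule, a check like this surfaces a convincing-looking fake the day it appears instead of months later.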
- Leaks via Logs
Tokens accidentally end up in logs when included in URLs (query parameters) or fully logged request bodies. Vulnerable code: console.log('Request:', req.url) when the URL contains ?access_token=xyz, or logging middleware that logs headers including Authorization. Safe code: never transmit tokens as query parameters (use the Authorization header), and explicitly filter sensitive fields in logging middleware — const sanitized = omit(headers, ['authorization', 'cookie', 'x-api-key']). Also verify that Sentry, Datadog, or other observability tools don't capture request bodies with sensitive data.
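Both filters can live in one small sanitization layer in front of the logger. A minimal sketch — the field lists are illustrative and should be extended to match your stack:

```javascript
// Log-sanitization sketch: drop credential-bearing headers and redact
// token-carrying query parameters before anything reaches the logger.
const SENSITIVE_HEADERS = new Set(['authorization', 'cookie', 'x-api-key']);
const SENSITIVE_PARAMS = new Set(['access_token', 'refresh_token', 'code']);

function sanitizeHeaders(headers) {
  return Object.fromEntries(
    Object.entries(headers).filter(([k]) => !SENSITIVE_HEADERS.has(k.toLowerCase()))
  );
}

function sanitizeUrl(url) {
  const u = new URL(url, 'http://placeholder.local'); // base only to parse relative URLs
  for (const param of SENSITIVE_PARAMS) {
    if (u.searchParams.has(param)) u.searchParams.set(param, '[REDACTED]');
  }
  return u.pathname + u.search;
}

console.log(sanitizeHeaders({ Authorization: 'Bearer xyz', Accept: 'application/json' }));
// { Accept: 'application/json' }
console.log(sanitizeUrl('/callback?code=abc&state=s1'));
// /callback?code=%5BREDACTED%5D&state=s1
```

The same sanitizers should sit in front of any beforeSend hook for Sentry, Datadog, or similar tools, so a token that slips into an error report is scrubbed before it leaves your infrastructure.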
Conclusion: Security Isn't About Not Getting Breached
The Vercel 2026 breach isn't a story about a broken protocol or a genius hacker. It's a story about ignored attack surfaces and the illusion that a third-party vendor's security is their responsibility, not yours. As a developer, every tool you grant OAuth access to becomes an extension of your attack surface. Every token you store carelessly is a door you leave open.
The goal of security is not to be impossible to breach — it's that when you are breached, the damage is minimal, quickly detectable, and reversible. Least privilege, token rotation, marking secrets as sensitive, and audit logs are not optional features. They are the difference between a security incident and a business disaster.