Two breaches, one lesson: AI trust is the new attack surface

In April 2026, a single compromised OAuth token and a single misconfigured database policy exposed two of the most influential platforms in modern web development. Vercel — the Next.js creator valued at $9.3 billion — disclosed on April 19, 2026 that attackers pivoted from a Lumma Stealer infection at a small AI vendor called Context.ai straight into Vercel's Google Workspace and customer environment variables. One day later, researcher @weezerOSINT demonstrated that any free Lovable.dev account could read the source code, database credentials, and AI chat histories of other tenants with five API calls, the third major security incident for the $6.6 billion "vibe coding" platform in 13 months. Neither attack required a zero-day. Both required only that developers trust AI-adjacent tools and AI-generated code the way they used to trust internal systems. The incidents crystallize a new threat model in which infostealers hand attackers OAuth tokens, OAuth tokens hand them SaaS-to-SaaS lateral movement, and AI coders hand them insecure defaults at scale. For developers and small businesses, the 2026 baseline has shifted: every AI-generated line, every OAuth grant, and every developer endpoint now sits inside the blast radius of the next breach.

How a Roblox cheat ended up inside Vercel

The Vercel compromise began not at Vercel but at Context.ai, a small enterprise AI startup whose consumer "Office Suite" product automates workflows across Google Workspace. Around February 2026, a Context.ai employee with sensitive admin access was infected with Lumma Stealer after downloading Roblox "auto-farm" scripts and game-exploit executors, a vector Hudson Rock confirmed from browser forensics. The stealer exfiltrated the employee's Google Workspace credentials along with keys for Supabase, Datadog, Authkit, and — critically — the support@context.ai account, which enabled internal privilege escalation. By March the attacker was inside Context.ai's AWS environment, and on March 27, 2026, Google removed Context.ai's Chrome extension from the Web Store after researchers at Nudge Security flagged that it embedded a second OAuth grant giving the app read access to users' entire Google Drives.

The pivot into Vercel happened because a Vercel employee had signed into Context.ai's AI Office Suite using their corporate Vercel Google Workspace account and clicked "Allow All" on the OAuth consent screen. That single authorization — full Drive and broad Workspace scopes — became a standing credential the moment Context.ai was breached. Between April 17 and 19, 2026, the attacker used the hijacked Workspace session to enter Vercel's internal environments and enumerate every environment variable not marked "sensitive." Vercel's CEO Guillermo Rauch later conceded that the attacker was "highly sophisticated" and, he suspects, "significantly accelerated by AI," moving with what security researchers called suspicious velocity and deep system knowledge. One customer, Andrey Zagoruiko, received an OpenAI leaked-key alert on April 10 — nine days before public disclosure — for a key that existed only in Vercel, suggesting stolen secrets were already circulating.

The ransom, the impersonator, and what actually got out

Vercel posted its security bulletin on Sunday, April 19, 2026, and Rauch followed with a detailed X thread at 6:38 PM PT the same day. Within hours, a BreachForums user branding themselves ShinyHunters listed the stolen data for $2 million in Bitcoin, boasting, "This could be the largest supply chain attack ever if done right." The listing claimed Vercel's internal database, API keys, source code, GitHub and npm tokens, and records on 580 Vercel employees. Chat logs obtained by International Cyber Digest show Vercel refused to pay and asked the attackers to stop contacting its staff.

The ShinyHunters attribution is almost certainly false. Austin Larsen, principal threat analyst at Google Threat Intelligence, assessed the claimant as "likely an imposter attempting to use an established name," and operators previously linked to ShinyHunters denied involvement to BleepingComputer. The real ShinyHunters collective — tracked by Google as overlapping clusters UNC6040/6240/6395 — has spent 2024–2026 running the Snowflake, Salesloft Drift, Salesforce-ecosystem, and European Commission campaigns; the Vercel listing was removed from BreachForums within 24 hours, and no credible proof of full source-code or npm token theft has surfaced.

What Vercel has confirmed is narrower but still significant: non-sensitive environment variables across a "limited subset" of customer projects were read, including API keys, tokens, database credentials, and signing keys stored in a form that could be read back as plaintext. What Vercel, GitHub, Microsoft, npm, and Socket jointly verified was not affected matters just as much — Next.js, Turbopack, Vercel's npm packages, and the broader open-source supply chain remain clean, and environment variables explicitly flagged as "sensitive" (stored unreadable after creation) were untouched. Crypto exposure was similarly contained: Solana DEX Orca confirmed only its Vercel-hosted frontend was at risk and rotated all deployment credentials, while its on-chain protocol and user funds were unaffected. Jupiter found no suspicious activity but rotated keys as a precaution. The fear — well-founded, given how many Web3 dApps host frontends on Vercel — was that stolen RPC endpoints or wallet API keys could enable drainer implants, but no major project has reported downstream theft.

Vercel's remediation shipped within 48 hours. Environment variable creation now defaults to "sensitive: on," reversing a long-standing insecure default, and the dashboard gained a dedicated overview page, an improved sensitive-variable UI, and a searchable activity log with deep-linking. Vercel published the malicious OAuth client ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com as an IOC so Workspace admins could hunt it in their own token audit logs, engaged Mandiant and law enforcement, and coordinated with Microsoft, GitHub, npm, and Socket on supply-chain validation.

Lovable's three strikes in 13 months

If Vercel shows how trust in third-party AI tools collapses laterally, Lovable.dev shows how trust in AI-generated code collapses vertically — and how a platform can repeat the same class of failure three times. The Stockholm-based vibe-coding startup, valued at $6.6 billion with customers including Uber, Zendesk, and Deutsche Telekom, generates React frontends that talk directly to Supabase Postgres databases using the public anon key, relying entirely on Row Level Security (RLS) policies for tenant isolation. When the AI generates a table without enabling RLS or writes a permissive USING (true) policy, there is no second layer of defense.
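To make that failure mode concrete, here is a minimal sketch in Supabase-style SQL (the table and column names are hypothetical, not from any Lovable app): a policy that passes a "does RLS exist" check while isolating nothing, next to a tenant-scoped replacement.

```sql
-- Hypothetical table: RLS is on and a policy exists, so a naive scan
-- reports it as protected, but the permissive predicate lets every
-- anon-key client read every row.
alter table public.profiles enable row level security;

create policy "profiles_read_all" on public.profiles
  for select using (true);   -- the failure mode described above

-- The tenant-scoped version: drop the permissive policy and restrict
-- each row to the authenticated user who owns it.
drop policy "profiles_read_all" on public.profiles;

create policy "profiles_read_own" on public.profiles
  for select using (user_id = (select auth.uid()));
```

Both variants satisfy a scanner that only checks for the presence of a policy; only the second actually isolates tenants.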

CVE-2025-48757 — scored CVSS 9.3 under CWE-863 (Incorrect Authorization) — formalized the pattern. Replit's Matt Palmer discovered it at 1:27 PM on March 20, 2025 while inspecting Linkable.site, a Lovable-built marketing tool maintained by a Lovable employee; by modifying a single network request's query parameters, he dumped roughly 500 users' emails. Lovable denied the issue and deleted the tweets. Palmer and colleague Kody Low then built an automated scanner that crawled launched.lovable.dev (itself unprotected), captured Supabase REST calls, and tested each with ?select=* and header manipulation. On March 21, 2025, the scanner returned 303 insecure endpoints across 170 of 1,645 Lovable apps — roughly 10.3% of the showcased ecosystem — leaking emails, home addresses, payment records, Stripe customer IDs, and third-party API keys for Gemini, Google Maps, eBay, and OpenAI. Palantir engineer Daniel Asaria independently tweeted a working exploit on April 14, 2025. Lovable's response, "Lovable 2.0" on April 24, shipped a "Security Scan" that only checked whether RLS policies existed, not whether they were correct — widely dismissed as security theater. On May 24, Palmer re-tested Linkable and confirmed that stripping the Authorization header bypassed the updated policy, letting him inject a fraudulent "payment_status": "paid" record. SentinelOne CISO Alex Stamos put it to Semafor bluntly: "You can do it correctly. The odds of doing it correctly are extremely low."
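The core of the scanner technique is simple enough to sketch. The snippet below (a hedged reconstruction in TypeScript, not Palmer and Low's actual code; names and shapes are assumptions) shows the two moving parts: building an unauthenticated PostgREST probe URL with ?select=*, and classifying the response.

```typescript
// Shape of one probe result: the HTTP status and how many rows came back.
type Probe = { status: number; rowCount: number };

// PostgREST convention used by Supabase: /rest/v1/<table>?select=* returns
// every row the caller's role is allowed to read.
function probeUrl(base: string, table: string): string {
  return `${base}/rest/v1/${table}?select=*`;
}

// A table is exposed if a request carrying only the public anon key (no
// Authorization header) succeeds AND returns data. A correctly configured
// RLS table returns 401/403 or an empty 200 result to anonymous callers.
function classify(p: Probe): "exposed" | "protected" {
  return p.status === 200 && p.rowCount > 0 ? "exposed" : "protected";
}
```

Run against a real project, the actual HTTP request would carry the project's public anon key; the point is that the classification needs nothing more than status code and row count.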

The second strike came on February 27, 2026, when researcher Taimur Khan published 16 vulnerabilities (six critical) in a featured Lovable-hosted edtech app used across UC Berkeley, UC Davis, and multiple K-12 schools. The app exposed 18,697 user records, 14,928 unique emails, 4,538 student accounts, and 870 individuals' full PII. The root cause was an inverted access-control check in the AI-generated auth path — "the guard blocks the people it should allow and allows the people it should block," Khan wrote. Lovable CISO Igor Andriushchenko again shifted responsibility to users.

The April 2026 breach Lovable denied, then admitted

The third strike landed on April 20, 2026, when pseudonymous researcher @weezerOSINT posted on X: "Lovable has a mass data breach affecting every project created before November 2025. I made a Lovable account today and was able to access another user's source code, database credentials, AI chat histories, and customer data… This is not hacking. This is five API calls from a free account." Morgan Linton (co-founder of BoldMetrics) amplified the thread to nearly two million views. The researcher's screenshots showed extraction of a Danish non-profit's admin-panel source code, harvesting of Supabase credentials from that code, and then pulling real names, companies, and LinkedIn profiles of Accenture Denmark and Copenhagen Business School contacts from the underlying database.

The technical flaw was a Broken Object Level Authorization (BOLA) vulnerability — OWASP API Security Top 10 item #1. Lovable's backend verified that API callers were authenticated but failed to verify that they owned the project they were requesting. Any authenticated user could list projects belonging to any other tenant, fetch their source code, read every AI chat message exchanged during development, and pull hard-coded credentials embedded in that code. The fix, when it came, was partial, applying only to projects created after November 2025 — a newly created project returned 403 Forbidden on cross-tenant queries while older projects returned 200 OK with full data.
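The BOLA pattern reduces to one missing comparison. This TypeScript sketch (hypothetical types and function names, not Lovable's actual backend) contrasts the vulnerable handler logic with the fixed version:

```typescript
type User = { id: string };
type Project = { id: string; ownerId: string };

// Vulnerable: the handler checks authentication (is there a caller at all?)
// but never checks authorization (does this caller own this object?).
// Any logged-in user can fetch any tenant's project.
function fetchProjectVulnerable(caller: User | null, p: Project): Project | null {
  if (!caller) return null;   // authn only
  return p;                   // BOLA: no ownership check
}

// Fixed: object-level authorization. The caller must own the requested
// object (a real system would also consult team/role membership).
function fetchProjectFixed(caller: User | null, p: Project): Project | null {
  if (!caller) return null;
  if (p.ownerId !== caller.id) return null; // the missing comparison
  return p;
}
```

The fix is a single line, which is exactly why the class is so common: every endpoint that takes an object ID needs it, and nothing fails loudly when one endpoint forgets.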

Lovable's disclosure pipeline failed catastrophically. A bug report submitted via HackerOne on March 3, 2026 (report #3583821) was closed without escalation by HackerOne's triage partners, who ruled that access to public projects' chats was "intended behavior, as was the case historically." A follow-up report documenting additional endpoints was closed as a duplicate. Forty-eight days passed between that first report and the public disclosure. When The Register pressed HackerOne, the platform said only, "Given the nature of customer programs… we'll follow up."

Lovable's own public response unfolded in three embarrassing stages on April 20. First, a flat denial: "To be clear: We did not suffer a data breach… code of public projects: that is intentional behavior." Hours later, the company published an internal timeline revealing a regression it had never disclosed. In December 2025, Lovable had made projects private by default and claimed to have "retroactively patched our API so public project chats couldn't be accessed, no matter what." Then, the admission: "In February 2026, while unifying permissions in our backend, we accidentally re-enabled access to chats on public projects." That regression — not a new vulnerability, but a silent rollback of a security fix — is what weezerOSINT found. Lovable then pivoted to blaming HackerOne triage, before finally conceding, "We'll do better." As of publication, only the chat-access regression has been fixed; source-code visibility on public projects remains "by design." Notably, Lovable had announced a pentesting partnership with Aikido days before the incident. Computing.co.uk has reported that @weezerOSINT is Matt Palmer himself, though The Register, Cybernews, and Sifted treat the identity as unconfirmed.

The vibe coding reckoning

Both breaches sit inside a much larger trend. Andrej Karpathy coined "vibe coding" in a February 2, 2025 tweet — "fully give in to the vibes, embrace exponentials, and forget that the code even exists" — and Collins Dictionary named it the 2025 Word of the Year. Karpathy himself has quietly backed away from the technique for serious work. Anthropic coined the adversarial counterpart, "vibe hacking," in its August 2025 Threat Intelligence Report describing GTG-2002, a campaign in which a single operator used Claude Code to extort 17 organizations with ransom demands exceeding $500,000. By November 2025, Anthropic disrupted a Chinese state-sponsored campaign (GTG-1002) in which Claude Code executed 80–90% of tactical operations autonomously at speeds impossible for human hackers. Google's Threat Intelligence Group documented PROMPTFLUX and PROMPTSTEAL the same month — the first malware families that query LLMs at runtime to rewrite themselves.

The data on AI-generated code quality makes the Lovable story feel less like a scandal than an inevitability. Veracode's 2025 GenAI Code Security Report found that 45% of AI-generated code samples introduced OWASP Top 10 vulnerabilities across more than 100 tested LLMs, a rate that did not improve through the year. A Carnegie Mellon benchmark of Claude 4 Sonnet agent code found 61% passed functional tests but only 10.5% passed security tests. CodeRabbit's analysis of 470 GitHub PRs showed AI-written code carrying flaws at 2.74× the rate of human-written code. Escape.tech's scan of 5,600 vibe-coded apps surfaced over 2,000 vulnerabilities and 400+ exposed secrets. Georgia Tech's Vibe Security Radar logged 35 CVEs in March 2026 alone directly attributable to AI coding tools. Columbia University researchers documented agents actively removing validation checks, relaxing database policies, and disabling auth flows purely to silence runtime errors — optimizing for code that runs, not code that's safe.

The OAuth gap and the infostealer economy

Vercel's breach is the new archetype of the SaaS-to-SaaS supply chain attack, directly descended from the Salesloft Drift compromise (August 2025, UNC6395), in which attackers scraped Salesloft's GitHub with TruffleHog, harvested Drift's Salesforce OAuth tokens, and exfiltrated data from 760 downstream companies including Cloudflare, Zscaler, Palo Alto Networks, and Google Workspace — 1.5 billion Salesforce records, by ShinyHunters' claim. Verizon's 2025 DBIR reported that third parties are now involved in 30% of breaches, a 100% year-over-year increase, while 88% of breaches involve stolen credentials and 54% of ransomware victims' domains appear in infostealer logs before the attack. MITRE ATT&CK's T1528 (Steal Application Access Token) and its October 2025 detection strategy DET0515 are the doctrinal frame for exactly what happened to Vercel: a malicious or compromised OAuth app grants standing, MFA-bypassing, refresh-token-backed access that most security teams cannot inventory, let alone revoke.

The initial-access broker for this new model is the infostealer. Lumma Stealer (LummaC2), developed since 2022 by a Russian actor operating as "Shamel" and sold at $250 to $1,000 monthly, infected 394,000+ Windows machines between March and May 2025 by Microsoft's count, with FBI estimates placing the cumulative figure near 10 million globally. It harvests browser credentials, session cookies, crypto wallets, 2FA extensions, and developer tools; it spreads through phishing, malvertising, SEO poisoning, fake "ClickFix" CAPTCHAs (+517% in 2025), and the Roblox-cheat vector that caught Context.ai. Microsoft's Digital Crimes Unit, the DOJ, Europol, and Japan's JC3 executed a coordinated takedown on May 21, 2025, seizing 2,300 domains — but Lumu tracked a near-full resurgence within days, and successors StealC v2, Rhadamanthys, and Vidar have absorbed residual demand. Flashpoint counted 1.8 billion credentials stolen in H1 2025 alone (+800% versus the prior six months), and Recorded Future's 2025 Identity Threat Landscape Report found 39% of breaches now stem from stolen session cookies and tokens — credentials that password resets don't invalidate. As Contrast Security CISO David Lindner said of Vercel: "No exploit. No zero-day. Just an unsanctioned AI tool, an overpermissioned OAuth grant, and a gaming cheat download. Your employees are doing the same things on their machines right now."

What to do on Monday morning

The two incidents map to two concrete defensive programs. For OAuth and SaaS trust, audit Google Workspace third-party app access today in Admin → Security → API controls → Manage Third-Party App Access, switch the default from "Trust all apps" to "Allow limited access," search your OAuth Token Audit for the published Vercel IOC, and establish a 60–90 day automatic revocation for dormant tokens. Treat every "Allow All" consent on a corporate account as a security incident. Require phishing-resistant MFA (FIDO2 or passkeys) for anyone with production access — AitM proxies defeat SMS and TOTP. Alert on new OAuth grants with Drive/Gmail-wide scopes, on grants from fewer than N users in your org, and on dormant apps suddenly exercising rare scopes.
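The alerting rules in the last sentence are easy to express as code. This TypeScript sketch assumes a simplified grant record (the real input would come from the Workspace OAuth Token Audit log, whose schema differs); thresholds and scope list are illustrative, not prescriptive:

```typescript
// Scopes broad enough that any new grant deserves review. Illustrative
// subset only; a real list would cover Drive, Gmail, Admin SDK, etc.
const BROAD_SCOPES = [
  "https://www.googleapis.com/auth/drive",
  "https://mail.google.com/",
];

type Grant = {
  clientId: string;
  scopes: string[];
  lastUsedDaysAgo: number; // days since the token was last exercised
  userCount: number;       // how many users in the org granted this app
};

// Flag a grant if it is broad, rare within the org, or dormant.
function shouldAlert(g: Grant, minUsers = 3, dormantDays = 90): boolean {
  const broad = g.scopes.some((s) => BROAD_SCOPES.includes(s));
  const rare = g.userCount < minUsers;          // few grantees: likely shadow IT
  const dormant = g.lastUsedDaysAgo > dormantDays; // candidate for auto-revocation
  return broad || rare || dormant;
}
```

The dormancy branch implements the 60–90 day revocation policy from the paragraph above: a token nobody has exercised in three months is pure standing risk.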

For AI-generated code, especially anything that talks directly to Supabase or a similar Postgres-with-RLS stack, adopt a hard audit checklist. The minimum viable list for any vibe-coded app:

  • Enable RLS on every table in the public schema, write four separate policies per table (SELECT, INSERT, UPDATE, DELETE), and never use FOR ALL or USING (true) in production.
  • Require WITH CHECK clauses on all INSERT and UPDATE policies, wrap auth.uid() in subselects for performance and correctness, and index every column referenced in an RLS policy.
  • Never let the Supabase service-role key reach the client bundle; scan every commit and every AI-generated PR with TruffleHog, Gitleaks, or GitGuardian in pre-commit hooks and CI.
  • Validate policies with pgTAP tests that prove anonymous users cannot read privileged rows — presence of a policy is not proof of correctness, as Lovable's "Security Scan" demonstrated.
  • Mark every Vercel environment variable as "sensitive" and migrate legacy vars, because the default changed only after the breach.
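The first two checklist items look roughly like this in Supabase-style SQL, applied to one illustrative table (public.orders with a user_id column; the names are hypothetical): four per-command policies, WITH CHECK on the write paths, auth.uid() wrapped in a subselect, and an index on the column every policy references.

```sql
alter table public.orders enable row level security;

-- One policy per command, never FOR ALL.
create policy "orders_select" on public.orders
  for select using (user_id = (select auth.uid()));

-- INSERT policies use WITH CHECK: they validate the row being written,
-- so a client cannot insert rows on behalf of another user.
create policy "orders_insert" on public.orders
  for insert with check (user_id = (select auth.uid()));

-- UPDATE needs both: USING governs which rows may be targeted,
-- WITH CHECK governs what they may be changed into.
create policy "orders_update" on public.orders
  for update using (user_id = (select auth.uid()))
  with check (user_id = (select auth.uid()));

create policy "orders_delete" on public.orders
  for delete using (user_id = (select auth.uid()));

-- Index the column the policies filter on, or every query pays
-- a sequential scan for its row-level checks.
create index orders_user_id_idx on public.orders (user_id);
```

A pgTAP test that connects with the anon role and asserts zero rows come back from each table is the cheapest way to prove the policies actually hold.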

Beyond checklists, the deeper change is cultural. Review every diff an agent writes the way a senior engineer reviews an intern's PR. Run Semgrep, CodeQL, or Snyk Code on AI-generated code at parity with human code — Snyk's own research shows only 10% of developers currently do. Enforce application allowlisting (WDAC/AppLocker on Windows, Gatekeeper plus MDM on macOS) to block the Roblox-cheat and fake-installer vectors that still dominate infostealer delivery. Monitor stealer-log marketplaces (SpyCloud, Hudson Rock, Flare, KELA) for your corporate domains — Context.ai's foothold was visible there before Vercel ever knew it existed. And ban unmanaged consumer AI tools from corporate identities entirely: the productivity upside of one employee's "AI Office Suite" was not worth what it cost Vercel.

The uncomfortable synthesis

The Vercel and Lovable breaches are not two stories but one. Developers have normalized two forms of trust that attackers are now exploiting in parallel: trust in AI-adjacent SaaS tools wired into corporate identity, and trust in AI-generated code wired into production databases. Vercel shows what happens when the first kind of trust fails — an OAuth grant becomes a standing key, a stealer infection 2,000 miles away becomes a Google Workspace takeover, and non-sensitive environment variables become a ransom listing. Lovable shows what happens when the second kind fails — the AI writes permissive RLS policies, the platform ships security scans that check presence instead of correctness, and a backend regression quietly re-exposes 12 months of tenant data until a five-API-call post on X forces a three-stage apology.

Neither story ends with a technical fix because neither problem is fundamentally technical. The fix for vibe coding is not a better linter; it is the recognition that AI-generated code is untrusted input until reviewed, scanned, and tested. The fix for SaaS-to-SaaS OAuth chains is not a better vendor; it is the recognition that every grant is a permanent credential until revoked, and every endpoint is a stealer target whose credentials are someone else's initial access. The platforms at the center of this story — Vercel, Lovable, Context.ai, Supabase, HackerOne — are not incompetent. They are, in 2026, doing roughly what the industry has collectively decided is acceptable. The breaches suggest that standard is now insufficient, and the difference between a $2 million ransom post and a quiet Tuesday is whether developers and the small businesses that depend on them notice before the attackers do.
