Summary: Lovable, the $6.6 billion vibe coding platform with eight million users, has faced three documented security incidents exposing source code, database credentials, and thousands of user records; the most recent, a BOLA vulnerability, was left open for 48 days after the company closed a bug bounty report without escalation. The incidents reflect a structural problem across vibe coding: 40-62% of AI-generated code contains vulnerabilities, 91.5% of vibe-coded apps assessed in Q1 2026 had at least one AI-hallucination-related flaw, and the market's incentive structure rewards growth over security at a moment when 60% of all new code is projected to be AI-generated by year end.
Lovable, the vibe coding platform valued at $6.6 billion with eight million users, has spent the past two months dealing with security incidents that collectively exposed source code, database credentials, AI chat histories, and the personal data of thousands of users across projects built on its platform. The most recent disclosure, published on 20 April by a security researcher, revealed a broken object-level authorisation vulnerability in Lovable’s API that allowed anyone with a free account to access another user’s profile, public projects, source code, and database credentials in as few as five API calls. The researcher reported the flaw to Lovable’s bug bounty programme on 3 March. Lovable patched it for new projects but never fixed it for existing ones, marked a follow-up report as a duplicate, and closed it. As of reporting, the vulnerability had been open for 48 days.
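Broken object-level authorisation (BOLA) means an API confirms that a caller is logged in but never checks that the caller owns the object being requested, so any valid account can enumerate other users' resources by ID. A minimal sketch of the bug class and its fix — all names here are illustrative, not Lovable's actual API:

```python
# BOLA sketch: the vulnerable handler returns any project to any
# authenticated caller; the fixed handler verifies ownership first.
# Data, handler names, and fields are hypothetical.

PROJECTS = {
    "p1": {"owner": "alice", "source": "...", "db_credentials": "..."},
    "p2": {"owner": "bob", "source": "...", "db_credentials": "..."},
}

def get_project_vulnerable(caller: str, project_id: str) -> dict:
    # Bug: being authenticated is treated as sufficient, so any
    # logged-in user can fetch any project by guessing its ID.
    return PROJECTS[project_id]

def get_project_fixed(caller: str, project_id: str) -> dict:
    project = PROJECTS[project_id]
    # Object-level check: the caller must own the requested object.
    if project["owner"] != caller:
        raise PermissionError("not the project owner")
    return project
```

The fix is a single ownership comparison per request, which is why researchers treat missing object-level checks as an implementation oversight rather than a hard design problem.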
Lovable’s response followed a pattern that security researchers found more telling than the vulnerability itself. The company first posted on X that it “did not suffer a data breach,” calling the exposed data “intentional behaviour.” It then blamed its own documentation, saying that what “public” implies “was unclear.” It then blamed its bug bounty partner HackerOne, saying reports were “closed without escalation because our HackerOne partners thought that seeing public projects’ chats was the intended behaviour.” Later that day, it issued a partial apology acknowledging that “pointing to documentation issues alone was not enough.” Cybernews headlined its coverage: “Lovable goes on ego trip denying vulnerability, then blames others for said vulnerability.”
What was exposed
The April incident affected projects created before November 2025. The researcher demonstrated that extracting a user’s source code from Lovable’s API also yielded hardcoded Supabase database credentials embedded in that code. One affected project belonged to Connected Women in AI, a Danish nonprofit. Its exposed data contained real user records including names, job titles, LinkedIn profiles, and Stripe customer IDs, with records linked to individuals at Accenture Denmark and Copenhagen Business School. Employees at Nvidia, Microsoft, Uber, and Spotify reportedly have Lovable accounts tied to affected projects.
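Credentials embedded in generated source are detectable mechanically. A rough sketch of the kind of scan that would have flagged the exposed projects, assuming Supabase's public key and URL formats (JWT-style keys beginning "eyJ", project URLs under supabase.co); the patterns are illustrative and would need tuning before real use:

```python
import re

# Rough patterns for secrets commonly left in generated frontend code.
# Illustrative only: real scanners use broader rule sets and entropy checks.
SECRET_PATTERNS = {
    "supabase_url": re.compile(r"https://[a-z0-9]+\.supabase\.co"),
    # Three base64url segments separated by dots, as in a JWT.
    "jwt_like_key": re.compile(
        r"eyJ[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{20,}"
    ),
}

def scan_for_secrets(source: str) -> list[str]:
    """Return the names of secret patterns found in a source string."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(source)]
```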
This was the third documented security incident involving the platform. In February, a tech entrepreneur named Taimur Khan found 16 vulnerabilities, six of them critical, in a single app hosted on Lovable and featured on its own Discover page with more than 100,000 views. The most severe was inverted authentication logic that granted anonymous users full access while blocking authenticated ones. The app, an AI-powered EdTech tool, exposed 18,697 user records including 4,538 student accounts from institutions including UC Berkeley and UC Davis, with minors likely on the platform. Khan reported his findings through Lovable's support channel. His ticket was closed without a response.
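Inverted authentication logic is exactly what it sounds like: the access condition is negated, so the gate admits everyone it should block and blocks everyone it should admit. A hypothetical reconstruction of the bug class, not the EdTech app's actual code:

```python
# Hypothetical reconstruction of an inverted authentication check.

def require_login_inverted(session: dict) -> bool:
    # Bug: the condition is negated, so anonymous visitors (no user_id)
    # pass the gate while authenticated users are turned away.
    return session.get("user_id") is None

def require_login_correct(session: dict) -> bool:
    return session.get("user_id") is not None
```

A single flipped boolean like this passes any test that only checks the happy path, which is one reason generated code needs review by someone who can read it.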
An earlier study in May 2025 found that 170 out of 1,645 sampled Lovable-created applications had misconfigurations that allowed anyone to access personal information. Approximately 70% of Lovable apps had row-level security disabled entirely.
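Row-level security in the Postgres database underlying Supabase is enabled per table; with it disabled, any client holding the project's public anon key can read every row. A sketch of the statements involved, emitted from Python for illustration — `auth.uid()` is Supabase's helper for the current user's ID, while the table, policy, and column names are hypothetical:

```python
def rls_statements(table: str) -> list[str]:
    # Postgres (which Supabase runs on): enable RLS on the table, then
    # add a policy restricting SELECTs to rows the authenticated user
    # owns. Assumes the table has a user_id column; adapt per schema.
    return [
        f"ALTER TABLE {table} ENABLE ROW LEVEL SECURITY;",
        f'CREATE POLICY "owner_select" ON {table} '
        "FOR SELECT USING (auth.uid() = user_id);",
    ]
```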
The structural problem
Lovable is not uniquely insecure. It is representatively insecure. The platform generates full-stack applications using React, Tailwind, and Supabase in response to natural language prompts, a process the industry calls vibe coding after Andrej Karpathy coined the term in February 2025. The approach lets anyone describe an application and have it built by an AI model without writing or reviewing code. Collins English Dictionary named it Word of the Year for 2025. Gartner forecasts that 60% of all new code will be AI-generated by the end of this year.
The security data across the entire category is consistent. Between 40% and 62% of AI-generated code contains security vulnerabilities, depending on the study. AI-written code produces flaws at 2.74 times the rate of human-written code, according to an analysis of 470 GitHub pull requests. A first-quarter 2026 assessment of more than 200 vibe-coded applications found that 91.5% contained at least one vulnerability traceable to AI hallucination. More than 60% exposed API keys or database credentials in public repositories. The vulnerability classes are the same across every major vibe coding platform: disabled row-level security, hardcoded secrets, missing webhook verification, injection flaws, and broken access controls.
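Missing webhook verification, one of the recurring classes above, means a generated handler accepts any POST that looks right. Verification is a constant-time HMAC comparison over the raw request body; a generic sketch (header names and secret handling vary by provider):

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, raw_body: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw body and compare in constant time."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    # hmac.compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, signature_hex)
```

Generated handlers routinely skip this step, so anyone who discovers the endpoint URL can forge events.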
Bolt.new ships with row-level security off by default. Cursor has had multiple CVEs patched, including a case-sensitivity bypass enabling persistent remote code execution. Researchers at Pillar Security demonstrated a “rules file backdoor” attack in which hackers inject hidden malicious instructions into configuration files used by Cursor and GitHub Copilot. A separate “Agent Commander” attack in March showed that prompt injection into AI coding agents could convert autonomous coding tools into remotely controlled malware delivery platforms. In January, the vibe-coded social network Moltbook was breached within three days of launch, exposing 1.5 million API authentication tokens and 35,000 email addresses through a misconfigured Supabase database with no row-level security.
The economic incentive problem
Security firms are raising money specifically to address the gap. Escape raised $18 million to replace manual penetration testing with AI agents that scan vibe-coded applications, citing over 2,000 high-impact vulnerabilities and hundreds of exposed secrets found in live production systems. Lovable itself partnered with Aikido to bring automated pentesting to its platform. But the fundamental incentive structure of the market works against security.
Lovable hit $4 million in annual recurring revenue in its first four weeks and $10 million in two months with a team of 15 people. It raised $200 million at a $1.8 billion valuation in July 2025 and $330 million at $6.6 billion in December, more than tripling its valuation in five months. Enterprise adoption of vibe coding grew 340% year over year. Non-technical user adoption surged 520%. Eighty-seven percent of Fortune 500 companies have adopted at least one vibe coding platform. The market rewards speed and accessibility. Security is a cost centre that slows both.
The result is a category in which the dominant platforms generate code that is insecure by default, the users generating that code lack the expertise to identify the vulnerabilities, and the platforms themselves have financial incentives to prioritise growth over remediation. Lovable’s handling of the March and April incidents illustrates the dynamic precisely: a bug bounty report was closed without escalation, a vulnerability affecting thousands of projects was patched for new users but not existing ones, and the public response cycled through denial, deflection, and a partial apology within a single day.
The regulatory gap
The EU AI Act’s high-risk obligations take effect on 2 August, requiring transparency, human oversight, and data governance for AI systems. California’s S.B. 53 and New York’s RAISE Act require frontier AI developers to publish safety frameworks and report incidents. But none of these regulations specifically address the security of code generated by AI models for end users, and the adoption data suggests the market is moving faster than regulators can respond. Financial services and healthcare, the two most regulated sectors, show the lowest vibe coding adoption rates at 34% and 28% respectively, which indicates that the market itself recognises the compliance gap even if regulations have not yet caught up.
As Trend Micro framed it: “The real risk of vibe coding isn’t AI writing insecure code. It’s humans shipping code they never had a chance to secure.” The 84% surge in App Store submissions driven by vibe coding tools suggests the volume of unreviewed code entering production is accelerating. Thirty-five CVEs were disclosed in March alone from AI-generated code, up from six in January, and Georgia Tech estimates the actual figure is five to ten times higher than what is detected.
Lovable is the fastest-growing software startup in history by several measures. It is also a company that closed a critical vulnerability report without escalation, left thousands of projects exposed for 48 days, and responded to public disclosure by denying a breach, blaming its documentation, blaming its bug bounty partner, and then issuing a partial apology. The pattern is not unique to Lovable. It is the pattern of a category that has built extraordinary tools for creating software and almost nothing for securing it.