Lovable exposes user data after vibe-coding flaw
Lovable, a Swedish vibe-coding startup, left sensitive user data accessible after a platform authorization flaw and unclear defaults. Researchers and a bug hunter showed that free accounts could read other users' project source code, AI chat histories, and customer data because apps generated or deployed via Lovable lacked ownership validation and proper Supabase row-level security. The company denied a breach, blamed unclear documentation, and disabled public visibility for new enterprise projects, but its initial triage and communication drew heavy criticism. The incident highlights systemic security debt in AI-assisted low-code platforms and the need for secure defaults, better platform-level enforcement, and faster vulnerability triage.
What happened
Lovable, the Swedish vibe-coding startup, faced public scrutiny after a researcher demonstrated that a simple free account could access other users' project source code, AI chat histories, and customer data. The bug was reported 48 days before disclosure, and Lovable stated, "To be clear: We did not suffer a data breach," while admitting its documentation about what "public" means was unclear. Independent reports and security researchers traced the issue to apps deployed with Supabase backends, where missing ownership validation and misconfigured authentication allowed widespread data exposure; the flaw was later cataloged as CVE-2025-48757.
Technical details
The root causes are classic platform-level authorization failures combined with AI-generated code that omits security controls. Key technical takeaways:
- Exposed items included source code, AI chat histories containing PII, database credentials, and app user records. Screenshots and researcher reproductions indicate access to emails, names, dates of birth, and enterprise user data.
- The vulnerability class maps to a BOLA-style failure: missing or incorrect ownership checks on API endpoints and a malformed authentication function that mistakenly allowed unauthenticated access.
- Platform dependencies matter: apps using Supabase backends lacked enforced row-level security, shifting responsibility to end users who often deploy AI-generated code without security expertise.
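The BOLA pattern described above can be sketched in a few lines. This is a minimal illustration, not Lovable's actual code: the `Project` record, the in-memory `DB`, and the handler names are all hypothetical, standing in for an API endpoint that fetches a resource by ID.

```python
# Hypothetical sketch of a BOLA-style flaw and its fix.
# `Project`, `DB`, and the handler names are illustrative only.
from dataclasses import dataclass


@dataclass
class Project:
    id: str
    owner_id: str
    source: str


# Stand-in for the platform's data store.
DB = {"p1": Project("p1", "alice", "print('hello')")}


class Forbidden(Exception):
    pass


def get_project_vulnerable(project_id: str, requester_id: str) -> Project:
    # Flaw: the record is fetched by ID alone, so any caller who can
    # guess or enumerate IDs reads any user's project.
    return DB[project_id]


def get_project_fixed(project_id: str, requester_id: str) -> Project:
    project = DB[project_id]
    # Fix: validate ownership before returning the resource.
    if project.owner_id != requester_id:
        raise Forbidden(f"{requester_id} does not own {project_id}")
    return project
```

The fix is a single comparison, which is exactly why it is easy for generated code to omit: the vulnerable version is fully functional and passes any test that only exercises the happy path.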
Platform response and triage
Lovable initially marked the report as a duplicate and pointed to unclear documentation; as of May 25, 2025, it disabled the ability to set new enterprise projects to public. The company also announced a penetration-testing partnership with security firm Aikido. The incident additionally exposed friction in third-party bug-bounty triage via HackerOne, where the submission was reportedly labelled a duplicate and left open.
Context and significance
This incident is a concrete example of the broader risk posed by AI-assisted low-code and vibe-coding platforms. Several trends converge here: AI systems generate functional code but not necessarily secure code; platform defaults and templates can bake in insecure patterns; and novice developers lack the expertise to spot missing authorization checks. The problem scales because these platforms host many apps for enterprise customers; reports name users at major firms, and SC Media identified a hosted app with over 100,000 views and more than 18,000 users potentially affected.
Why it matters for practitioners
Secure defaults and platform-level enforcement are non-negotiable when AI writes deployment-ready code. Relying on developer diligence is insufficient when generated artifacts routinely miss security policies. For security teams, this means scanning not only source repositories but also generated configs, third-party backend setups, and runtime authorization behavior.
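One concrete check of this kind is auditing whether row-level security is actually enabled on a project's Postgres/Supabase tables. A sketch under stated assumptions: in practice you would run a query like `RLS_QUERY` against the database (`pg_class.relrowsecurity` is the real Postgres catalog flag); here the check runs over illustrative rows so the logic is self-contained.

```python
# Hypothetical RLS audit sketch. The query below reflects the real
# pg_class catalog columns; the sample rows are illustrative data.
RLS_QUERY = """
SELECT relname, relrowsecurity
FROM pg_class
WHERE relkind = 'r' AND relnamespace = 'public'::regnamespace;
"""


def tables_missing_rls(rows: list[dict]) -> list[str]:
    """Return names of ordinary tables with row-level security disabled."""
    return [r["relname"] for r in rows if not r["relrowsecurity"]]


# Illustrative result set, shaped like the columns queried above.
sample_rows = [
    {"relname": "projects", "relrowsecurity": True},
    {"relname": "chat_histories", "relrowsecurity": False},
    {"relname": "customers", "relrowsecurity": False},
]
```

Note that enabling RLS is only half the job: a table with RLS on but no policies defined denies all access through the anon role, so each flagged table also needs explicit policies written for it.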
What to watch
Monitor remediation actions: full disclosure of affected projects, patch timelines for CVE-2025-48757, updates to Lovable's publishing pipeline to enforce Supabase RLS and ownership checks, and changes in bug-bounty triage practices. The incident will likely accelerate demand for secure-by-default tooling in vibe-coding platforms and for vendor SLAs that transfer liability for insecure generated defaults.
Scoring Rationale
This is a notable security incident affecting a widely used AI-assisted development platform and enterprise customers, illustrating systemic risks in vibe-coding. It is not a frontier-model or sector-defining event, but it should change how teams treat platform defaults and triage processes.