I checked my own app the night the Moltbook breach story broke. It was 11 p.m., I was in bed, and I had a feeling I get sometimes that I have learned to trust. The feeling that I had built something on top of a polite fiction.
Three hours later I was on the floor of my kitchen with a laptop and a cold mug of tea, staring at a Supabase dashboard that was, in the most literal sense possible, a public park. Anyone with the anon key could read every row in my users table. Anyone. The anon key was in my frontend JavaScript bundle, which meant it was already in the wild.
I had been the guy who wrote a whole security rules post two months earlier. And I had still missed it. That is the thing about this vibe coding security breach: it is not happening to the careless. It is happening to everyone.
Moltbook, in case you have not been following, was a social network for AI agents. It launched in November 2025, fully vibe-coded; the founder bragged on a podcast that he "hadn't touched a line of code." Wiz researchers scanned it in early February and, per Fanatical Futurist's coverage of the Moltbook vibe-coded security breach, found an open Supabase instance leaking 1.5 million authentication tokens, 35,000 email addresses, and the private DM history of thousands of agent accounts.
This is the story of the one configuration line that caused it. And why the same line is living, right now, in the majority of apps built by people like me. Including, for three hours that night, mine.
The Breach in 200 Words
Here is what happened, compressed.
Moltbook's founder built the platform in Cursor with Claude and Supabase over six weekends. The app functioned. Users signed up, agents talked to each other, a small community formed. The founder did a victory lap on Twitter.
On February 3rd, Wiz researchers ran a routine scan of vibe-coded apps that had gone viral in the previous quarter. Moltbook's Supabase instance answered queries from an unauthenticated client. Not because of a zero-day. Not because of a stolen credential. Because Row Level Security, the feature that controls who can read what in a Postgres-backed Supabase project, had never been turned on for the users, messages, or sessions tables.
The anon key, which is shipped to every browser that loads the site, was enough to pull the entire database. 1.5 million tokens. 35,000 emails. Private messages. Session cookies, some still valid.
Moltbook went offline within 48 hours. The founder issued an apology that began, I am not making this up, with "I trusted the AI."
That is the line on the tombstone.
The Configuration Pattern
Here is the exact pattern. I am showing it to you the way I found it in my own project that night, because I want you to see how innocent it looks.
-- What the AI generated (and what I shipped)
create table public.users (
  id uuid primary key default gen_random_uuid(),
  email text not null unique,
  auth_token text,
  created_at timestamp default now()
);
-- RLS was never enabled. The table is wide open to the anon key.
-- Any client with the public anon key can: select *, insert, update, delete.
-- This is the single most common vibe coding security breach pattern in 2026.

That is the default when you run create table in a fresh Supabase project. RLS is off. The anon key has full access. The AI does not warn you, because the AI is writing code that works, and working code for a demo has no RLS.
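You do not have to take my word for how exposed that table is; you can see it from inside the dashboard. A sketch, assuming the standard Supabase anon role exists in your project (it does in every default project, but verify yours): in the SQL editor, impersonate the role inside a transaction.

```sql
-- Sketch: impersonate the anon role to see what an unauthenticated
-- client can read. Assumes the standard Supabase "anon" role.
begin;
set local role anon;
-- With RLS off, this returns every row, token column included.
select id, email, auth_token from public.users limit 5;
rollback;
```

The rollback means nothing changes; you are only borrowing the anon role's eyes for one query.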
Here is what it should look like.
-- The fix: enable RLS and write explicit policies
alter table public.users enable row level security;
create policy "users can read their own row"
  on public.users for select
  using (auth.uid() = id);
create policy "users can update their own row"
  on public.users for update
  using (auth.uid() = id);
-- No insert or delete policy means the anon key cannot do either.
-- Service role key (server-side only) still can.

Four lines of SQL. That is the difference between Moltbook and not-Moltbook. Four lines.
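And do not just write the policies; prove they bite. A sketch of testing them from the SQL editor, under the assumption that auth.uid() resolves from the sub claim in the request.jwt.claims setting (that is how current Supabase projects wire it up, but check yours):

```sql
-- Sketch: simulate an authenticated user to test the new policies.
-- Assumes auth.uid() reads the "sub" claim from request.jwt.claims.
begin;
set local role authenticated;
set local request.jwt.claims = '{"sub": "00000000-0000-0000-0000-000000000001"}';
-- Should return at most the one row owned by that uuid, nothing else.
select id, email from public.users;
rollback;
```

If that select comes back with more than the impersonated user's own row, a policy is too loose.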
I ran alter table ... enable row level security on every table in my project at 1:47 a.m. Then I rotated my anon key, because the old one had been in the wild for six weeks, and I had to assume someone had it.
Why the AI Defaulted to It
Here is the part that keeps me up.
The AI did not choose to omit RLS because it is lazy. It omitted RLS because every tutorial it was trained on omitted RLS. Go look at the top ten Supabase quickstart posts on dev.to. Go look at the Supabase YouTube starter videos from 2023 and 2024. Almost none of them enable RLS in the happy path. RLS shows up in a later "production hardening" post that nobody reads, because by the time that post becomes relevant, your demo is already working and you are on to the next feature.
Think of it like a cooking show. The chef on camera never mentions washing the chicken or checking the internal temperature, because it would slow the pacing down and ruin the shot. The pan sizzles, the garlic goes in, the plate is beautiful, the segment ends. You go home and make it yourself, and you give your dinner guests salmonella, because the show was never about food safety. It was about the beauty of the pan.
AI training data is a cooking show. The LLM learned to make the pan look gorgeous. It did not learn to wash the chicken.
This maps to a broader pattern the Stack Overflow blog post on whether bugs and incidents are inevitable with AI coding agents has been tracking: models optimize for code that passes the reader's first glance. A tutorial that skips RLS compiles, runs, and gets upvotes. A tutorial that includes RLS is longer, harder, and gets fewer of them. Guess which one ends up overrepresented in the training set.
The AI is not wrong. The AI is average. That is the problem.
The 69 Vulnerabilities Study
If you think Moltbook is a one-off, read the DEV Community writeup of 69 vulnerabilities found across 15 vibe-coded apps. Gabriel Anhaia and his team pen-tested fifteen apps that had each gone viral on Product Hunt in the preceding quarter. Fifteen apps, sixty-nine findings, an average of 4.6 vulnerabilities per app.
Of those sixty-nine, thirty-five graduated into formal CVEs in March alone. Six of them were rated critical. The single most common pattern across the dataset, appearing in nine of the fifteen apps, was exactly what killed Moltbook: Supabase or Firebase with no row-level authorization, anon key leaking, full table access from the client.
Let me do the napkin math on what this implies. If roughly 60 percent of vibe-coded apps share the open-RLS pattern, and if there are perhaps 200,000 such apps live on the public internet today (a conservative estimate from crawling Vercel and Netlify subdomains), then there are roughly 120,000 applications right now with a readable user table. If each averages even a few thousand users, that is on the order of hundreds of millions of leaked records waiting for someone with a scanner and a weekend. Moltbook is not the disaster. Moltbook is the warning shot.
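Here is that napkin math as runnable arithmetic, so you can swap in your own assumptions. Every input is a guess from the paragraph above, not a measured value:

```python
# Napkin math from the paragraph above. Every input is an assumption,
# not a measurement; change them and watch how the conclusion moves.
live_vibe_coded_apps = 200_000   # rough estimate from crawling subdomains
open_rls_share = 0.60            # share exhibiting the open-RLS pattern
avg_users_per_app = 3_000        # "a few thousand users" per app

exposed_apps = int(live_vibe_coded_apps * open_rls_share)
exposed_records = exposed_apps * avg_users_per_app

print(exposed_apps)      # apps with a fully readable users table
print(exposed_records)   # leaked records waiting for a scanner
```

Under these inputs you land at 120,000 exposed apps and 360 million records, which is where the "hundreds of millions" figure comes from. The point is not the precision; it is that no plausible set of inputs makes the number small.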
And here is the kicker. Nine of those fifteen tested apps had been online for more than six months. The vulnerabilities were not new. They had been sitting there, quiet, the way a termite colony sits inside a wall for years before the floor gives way. The owner does not notice until the day the couch falls through.
The 5-Minute Audit
Here is what you do, right now, before you finish this post. Open a second tab. I will wait.
Go to your Supabase dashboard. Click on the Table Editor. Look at every public table. For each one, check the RLS toggle in the top right. If it is off, that table is fully public to anyone holding your anon key, which is everyone who has ever loaded your site.
Then open your browser devtools on your production site. View source. Search for the string supabase. Your anon key is in there. It is supposed to be, that is how the client talks to the database. But that is exactly why RLS has to be on.
Next, open your Supabase SQL editor and run this query.
select schemaname, tablename, rowsecurity
from pg_tables
where schemaname = 'public';

Any row where rowsecurity is false is a hole. Patch it now, not later. Enable RLS, write a policy that requires auth.uid() to match the owning user, and test it from an incognito window with no session. If you can still read the data, your policy is wrong.
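If that query turns up more holes than you want to patch by hand, you can make Postgres write the fix statements for you. A sketch using pg_tables and format():

```sql
-- Generate an "enable row level security" statement for every
-- unprotected public table. Copy the output back into the editor, run it,
-- then write real policies for each table; enabling RLS with no policies
-- locks the anon key out entirely, which is the safe failure mode.
select format('alter table %I.%I enable row level security;',
              schemaname, tablename)
from pg_tables
where schemaname = 'public'
  and not rowsecurity;
```

Note the failure direction: RLS on with zero policies denies everything to the anon and authenticated roles. Your app may break until you add policies, but it breaks closed, not open.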
Finally, check your storage buckets. Supabase Storage has its own set of policies, separate from RLS on tables. AI tools forget this constantly. A bucket full of user uploads with public read is how private documents end up on Google search results.
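Storage policies live on the storage.objects table, not on your own tables, which is exactly why the AI forgets them. A sketch of a locked-down bucket policy; "user-uploads" is a hypothetical bucket name, and I am assuming the owner column that current Supabase projects put on storage.objects, so verify your project's storage schema before copying this:

```sql
-- Sketch: restrict a storage bucket to the uploading user.
-- "user-uploads" is a hypothetical bucket name; the owner column
-- exists on storage.objects in current Supabase projects, but verify yours.
create policy "owners can read their own uploads"
on storage.objects for select
using (bucket_id = 'user-uploads' and auth.uid() = owner);
```

No public read policy on the bucket means no anonymous downloads, which is what you want for anything a user would call private.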
The whole audit takes five minutes if you know what you are doing, twenty if you are learning. Compared to the cost of a breach, which is your users, your reputation, and possibly your company, it is the cheapest insurance you will ever buy.
If you want a deeper pass, I keep coming back to the full vibe coding security rules checklist. Eight categories, the things that actually matter. This post is one of the eight.
What to Demand from AI Tools by Default
The tooling has to change. Individual discipline is not going to save us at the population level, any more than asking every driver to check their brake fluid has ever prevented highway fatalities. You need guardrails built into the road.
Here is the short list of what I think every AI coding tool should do by default, not as an opt-in.
First, when the AI generates a create table statement against Supabase or any Postgres-backed BaaS, it should refuse to finish without also generating an enable row level security line and at least one policy. No exceptions. If the user does not want RLS, they can explicitly ask for a comment that says -- RLS intentionally disabled, service role only, and the commit should require a human-readable reason.
Second, AI-generated code that creates any public endpoint should come bundled with a smoke test that hits the endpoint from an unauthenticated client and asserts the expected status. This is the equivalent of a hospital requiring a surgeon to mark the correct leg before operating. It is not doubt in the surgeon. It is systems design against a failure mode that has already happened too many times.
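Here is roughly what that bundled smoke test could look like. A minimal sketch in stdlib Python; the endpoint URL is yours to fill in, and treating 401 or 403 as "blocked" is my assumption about what your API should return to a stranger:

```python
# Sketch of the smoke test AI tools should bundle with every new endpoint:
# hit it with no credentials and assert it refuses to answer.
import urllib.request
import urllib.error

def unauthenticated_status(url: str) -> int:
    """Request the URL with no auth header and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        # 4xx/5xx arrive as exceptions; the status code is what we care about.
        return err.code

def assert_endpoint_is_protected(url: str) -> None:
    """Fail loudly if an anonymous client gets anything but a refusal."""
    status = unauthenticated_status(url)
    assert status in (401, 403), (
        f"{url} answered {status} to an unauthenticated request"
    )
```

Point assert_endpoint_is_protected at your production URL in CI and the Moltbook failure mode becomes a red build instead of a Wiz writeup.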
Third, before any PR that touches database schema gets merged, there should be a security gate that is not the AI that wrote the PR. I have written about why you should block AI agents from merging their own PRs and I will say it louder here: the thing that caught my open RLS was not Claude. It was me, at midnight, in a bad mood, with a second pair of eyes. Every codebase needs the second pair.
Fourth, and this is the one I think the industry is going to resist the longest, AI tools should ship with a concept of a "production readiness baseline" that is enforced, not suggested. Like the building code on a new house. You cannot pour the foundation and hook up the electrical and pass inspection unless certain things are true. Today the inspector is asleep. Today the inspector is the founder who has not slept in thirty-six hours and just wants to ship. We need a real inspector and it needs to be the tool itself.
If you are still figuring out where you sit in all of this, the primer on what vibe coding actually is is a good place to start. The short version: vibe coding is not going away. It is the default way software is being built now. Which means its failure modes are about to become the default failure modes of the software industry.
I do not know what the next Moltbook will be. Nobody does. But I know someone, right now, is shipping an app with an open users table, and I know the AI that wrote it is going to give them a thumbs-up and a friendly recap of the features they added. And I know that an hour after that, somewhere else, a researcher will run a scan.
Go check your tables. Then come back and tell me what you found. I am curious how many of us got lucky on the same night.