Last weekend, I built four apps. Today, none of them are still running.
One was a subscription tracker. Another was a Chrome extension to reformat data from a vendor portal my team dislikes. The third was a habit tracker using a scoring system I had been sketching for months. The last was a landing page with a waitlist for a product idea I had put off since November.
I spent about six hours over the weekend. I only typed twenty lines of code by hand. By Monday morning, none of the apps had survived.
The subscription tracker stopped working when my bank changed its CSV export format. The Chrome extension failed as soon as the vendor updated their website. The habit tracker was fine until I realized I needed offline support and data sync, which the AI hadn't planned for. The landing page got eleven emails before I realized I didn't even know what the product was.
This is what building software with AI looks like in 2026. The code costs nothing, but making real software is still expensive.
The Barrier Collapsed. The Hard Part Didn't Move.
Let me explain what has changed. Two years ago, turning an idea into a working prototype meant you needed strong programming skills or a big budget. You had to know a framework, set up deployment, and be ready to debug confusing errors late at night. Or you had to hire someone who could.
That barrier is gone. Tools like Claude Code, Cursor, Bolt, and Lovable let anyone with a laptop and twenty dollars a month create a working app from a text description. This really is impressive. It's not hype; I use these tools daily, and they're better than ever. But they don't give you the rest: understanding what to build, knowing who it is for, designing systems that survive real-world friction, and getting anyone to care that it exists. The barrier to entry for generating code has collapsed. The barrier to shipping software that matters has not moved an inch.
From SaaS to Scratchpads
There's something interesting happening with what people build using these tools. Most of it isn't meant to last.
A freelancer quickly makes a custom invoicing tool for their own workflow. A product manager builds a dashboard to show a specific dataset for a quarterly review. A founder tries out three landing pages to test messaging before picking one. A teacher creates a quiz app for a single lesson plan.
This is not SaaS. This is disposable software. It's like a digital napkin sketch, something you make to solve a problem, use a couple of times, and then throw away. You do not need the tool to persist, because recreating it costs nearly nothing. This is a return to how spreadsheets were originally used: not as permanent databases, but as scratchpads for reasoning through problems.
There's nothing wrong with this. Personal, throwaway software is real and useful. The problem comes when people confuse it with building a real product.
The Five-Minute App and the Five-Month Product
Here is a pattern I keep seeing. Someone builds an app in a weekend, posts a demo on Twitter, and the replies explode. "This is incredible." "The future of software." "I just built my SaaS in two hours."
Then silence. No follow-up posts, no updates about user growth or revenue. The five-minute app meets the real world, and that's a test no demo video can prepare you for: handling 40 different bank export formats, not just the one you tested with.
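To make that concrete, here is a minimal sketch of what "handling many bank export formats" actually means. The format list and date patterns below are illustrative assumptions, not the formats from my tracker:

```typescript
// Hypothetical sketch: normalizing transaction dates from a few common
// bank CSV export styles. Every format in this list is an assumption.
const DATE_FORMATS: Array<[RegExp, (m: RegExpMatchArray) => string]> = [
  [/^(\d{4})-(\d{2})-(\d{2})$/, m => `${m[1]}-${m[2]}-${m[3]}`],   // ISO: 2026-01-31
  [/^(\d{2})\/(\d{2})\/(\d{4})$/, m => `${m[3]}-${m[1]}-${m[2]}`], // US: 01/31/2026
  [/^(\d{2})\.(\d{2})\.(\d{4})$/, m => `${m[3]}-${m[2]}-${m[1]}`], // EU: 31.01.2026
];

function normalizeDate(raw: string): string | null {
  for (const [pattern, toIso] of DATE_FORMATS) {
    const m = raw.trim().match(pattern);
    if (m) return toIso(m);
  }
  return null; // unknown format: surface it instead of silently guessing
}
```

Even this toy version hides a judgment call no model can make for you: `01/02/2026` is January 2nd to a US bank and February 1st to a UK one, and only knowing your users tells you which.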
The Chrome extension needs to withstand monthly DOM changes on a website that doesn't care about your scraping logic.
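Surviving those DOM changes usually means something like a fallback chain of selectors rather than one brittle query. A rough sketch, with `query` standing in for `document.querySelector` so the logic is testable outside a browser, and the selector names invented for illustration:

```typescript
// Hypothetical sketch: try progressively more fragile selectors so one
// vendor DOM change degrades gracefully instead of breaking everything.
function findPrice(
  query: (selector: string) => string | null,
  selectors: string[] = ['[data-testid="price"]', '.price-value', 'td.amount'],
): string | null {
  for (const sel of selectors) {
    const text = query(sel);
    if (text && text.trim() !== '') return text.trim();
  }
  return null; // all selectors failed: log and alert, don't return a fake value
}
```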
The habit tracker needs offline support, data sync, conflict resolution, and a migration strategy for when you inevitably change the schema. The landing page needs a product behind it.
These are not just code problems; they are software problems. And these are exactly the kinds of issues AI struggles with, because they need context, judgment, and an understanding of real users' messy lives.
Writing code is the easy part; it always has been. The hard part is deciding which bank formats to support first, figuring out how much data to cache when users have poor connections, choosing between data-consistency models for billing features, and knowing when to say no to a feature that would double your workload.
AI can write code for any of these choices. But it can't actually make the decisions.
The Distribution Illusion
With the barrier to entry gone, noise levels have hit an all-time high. My feeds are full of "AI entrepreneurs" claiming five-figure monthly recurring revenue on apps they built in an afternoon. Some of these stories are real. Most of them are not what they seem.
If you see someone with no audience and no clear edge claiming $10,000 in monthly revenue from a weekend project, it is usually a marketing story, not a technical one. They succeed because they know how to reach people, not just because they use AI.
This is what the hype often misses. AI has taken away the edge that engineering skills used to give. When anyone can build a complex feature in minutes, coding isn't a big advantage anymore. Now, what matters is taste, timing, and really knowing your users.
You can build a product in a weekend, but it doesn't matter if it's the wrong thing or if no one is paying attention.
What Engineers Actually Do Now
There is a version of this story that some people say means engineers are obsolete. You've seen it on Hacker News, Reddit, and in excited LinkedIn posts from people who've never shipped real software. They're wrong. What's actually happening is that the engineer's job is shifting away from the "how" of syntax and toward the "what" and "why" of systems. Real engineering has always been about abstractions and architecture: knowing how to structure a system that lasts, understanding why a specific rate-limiting strategy is necessary, managing distributed state, and knowing exactly where things will break under load.
AI can hide complexity, but engineers are still needed to manage it. The tools are different now, but careful engineering matters more than ever, because mistakes in architecture become bigger problems when you can generate code so quickly.
I review AI-generated code just like I would a teammate's pull request. I check the logic, question the assumptions, and look for edge cases the AI couldn't predict because it doesn't know my users, my systems, or my business needs. Just because the code compiles doesn't mean it's right.
Who Actually Wins
Not everyone loses in this era of free code. Some people are actually winning big.
Domain experts with everyday problems are winning. The accountant who builds a custom tool, the operations manager who automates a manual task, the researcher who sets up a data pipeline, these people aren't building SaaS businesses. They're just making their work easier, and AI tools are great for that.
Internal teams making quick, disposable tools are also winning. These are scripts and apps that just need to work right away, not look perfect. Hackathon projects that solve real problems for small teams. SaaS vendors never focused on these because the audience was too small.
Engineers who use AI as a tool, not a replacement, are winning too. They use Claude Code to skip boilerplate, generate tests, and explore unfamiliar codebases faster, not to avoid thinking but to amplify it. They're shipping faster, not because AI writes their software, but because it handles the slow parts.
People who already have an audience or community are also winning. If you can quickly prototype and launch features, that's a real advantage. Code was never your main problem; reaching people was. Now you can move as fast as your ideas, not just as fast as you can type.
The Twenty-Dollar Test
Here is how I tell if something built with AI is real software or just a scratchpad. I call it the twenty-dollar test.
Would a stranger pay twenty dollars a month for this? Not a friend or a follower, but someone who doesn't know you, found your tool online, and is looking for a solution to their problem.
If the answer is yes, you might have real software. If not, it is just a scratchpad. Both are fine, but mixing them up is where people run into trouble.
Most of what is built with AI doesn't pass the twenty-dollar test. These apps solve the builder's problem, not the market's. They work in demos, not in real-world conditions. They launch with excitement and fade away quietly. The apps that do pass are built by people who spent more time understanding the problem than writing code. People who talked to users before opening their terminal. People who chose a boring, well-understood problem over a novel, exciting one. People who treated the AI-generated code as a first draft, not a finished product.
The Era of Personal Software
We are not entering a golden age of SaaS. We are entering the era of personal software.
With twenty dollars, a few spare hours, and some patience, almost anyone can launch a working app. The gap between an idea and a prototype is smaller than ever. That's genuinely exciting, and I don't want to downplay it.
But the gap between a working prototype and a product people actually use, pay for, and return to is still just as wide as before. It might even be wider now, since so many low-effort apps make it harder to stand out.
The tools are different, but the basics haven't changed. You still need to understand your users, design systems that work in the real world, build distribution, and make tough choices with limited information. That's always been the real work.
AI made code free, but it didn't make software free.
The barrier to entry is gone, but judgment, taste, and responsibility still matter.
If you're building something this weekend with Claude Code, Cursor, or any new tool, here's my honest advice: spend your first hour talking to the person who will use it. Don't start by writing code or setting up a project. Start by talking, understanding, and listening.
The code will sort itself out. The hard part never does.