Three numbers came across our desk this month, and we have not been able to put them down. Tailwind CSS documentation traffic is down roughly 40 percent since 2023. Tailwind's revenue has reportedly collapsed by close to 80 percent over the same window. Stack Overflow lost about 25 percent of its activity within six months of ChatGPT's release.
We set out to write a simple explainer about the relationship between vibe coding and open source. What we found instead was a story that looks less like a partnership and more like a slow-motion extraction event. The libraries that vibe coding quietly runs on, the docs it silently scraped, the answers it was trained on, are not being paid. Not in dollars. Not in attention. Not in the one currency open source has always lived on: human beings showing up and reading the page.
We want to be honest. This is a contested thesis. Maybe the numbers are a blip. Maybe the optimistic case is right and new funding models will backfill the old ones before the damage compounds. But after two weeks of reading and interviewing, we think the people waving this flag are not being hysterical. They are early.
If you want the lineage of what "vibe coding" even means, our primer what is vibe coding covers the ground. If you want the argument that this practice is the new product management discipline, we wrote about that in vibe coding is the new product management. This essay is the uncomfortable counterweight to both.
Three Numbers That Should Worry Everyone
Tailwind is the canary. It is a documentation-first product. Its growth engine was, for years, developers landing on tailwindcss.com/docs, reading a page, copying a class name, and eventually telling their manager the team should standardize on it. A 40 percent drop in docs traffic is not a cosmetic bruise on that model. It is a puncture. You can watch the company's own signals on the Tailwind CSS blog, where funding posture and tooling direction have shifted noticeably over the last eighteen months.
Now the napkin math. Tailwind used to see something in the neighborhood of high-single-digit millions of monthly docs visits. Call it 8 million. A 40 percent decline means roughly 3.2 million fewer humans laying eyes on the documentation every month, close to 38 million missing pageviews a year that instead happened inside a model's context window and never reached the origin server at all.
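The napkin math is small enough to spell out. Note that the 8 million baseline is our own round-number assumption, not a figure Tailwind has published:

```python
# Back-of-envelope funnel math for the Tailwind docs decline.
# The baseline is an assumed round number, not a published metric.
monthly_visits = 8_000_000   # assumed pre-decline monthly docs visits
decline = 0.40               # reported ~40 percent traffic drop

lost_per_month = monthly_visits * decline   # humans no longer reaching the docs
lost_per_year = lost_per_month * 12         # annualized missing pageviews

print(f"{lost_per_month:,.0f} fewer visits/month")   # 3,200,000
print(f"{lost_per_year:,.0f} fewer visits/year")     # 38,400,000
```

Change the baseline and the absolute numbers move, but the shape of the loss does not.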
Those pageviews were not vanity metrics. They were the top of a funnel that eventually turned into Tailwind UI sales, Tailwind Plus subscriptions, sponsorships, and conference tickets. When the top of the funnel drops by 40 percent and the revenue drops by 80, the multiplier tells you the remaining traffic is also lower intent. The browsers who used to idly learn the product and become buyers are gone.
Stack Overflow is the second canary, and arguably the louder one. The 25 percent drop in the first six months post-ChatGPT was reported widely, including by The Register, and the trend has continued. The optimistic read is that rote questions moved to models, freeing the site to be a place for harder problems. The pessimistic read, which we lean toward, is that Stack Overflow was an ecosystem: new askers became answerers became moderators became reputation-rich contributors who wrote the canonical answers that trained the models in the first place. Break the early rung of that ladder and the whole thing stops replenishing.
Here is the extrapolation that kept us up. If the curve continues at its current rate, by 2028 Stack Overflow's contributor base will be less than half of its 2022 peak. The last major public reservoir of open programming discourse will be generating fewer new answers per month than it did in 2011. We would have reverse-aged a generation of collective knowledge in six years.
Three numbers. One direction.
The Funding Model That Is Breaking
Most OSS is not funded the way the Linux Foundation website suggests. It is funded by a grab bag of informal mechanisms, most of which run on human attention.
Think of it like a fishing village watching the cod stocks collapse. For generations, the village did not need a stock management board. The cod came. Then the trawlers arrived with better nets, and for a while the catch was spectacular. Boats came back full. Prices dropped. Nobody noticed that the younger fish were missing from the nets. By the time anyone did, the cod were already gone. The recovery took forty years.
Open source is that fishery. The "young fish" are the quiet page views, the GitHub stars left by curious readers, the Stack Overflow answer that gets hotlinked in a blog post, the idle afternoon a junior engineer spends in a library's source code because they were stuck on a tutorial. None of those moments produce cash directly. All of them feed the visibility loop that eventually does: the sponsorship deal, the support contract, the corporate adoption that leads to a "we need to hire someone who knows Library X" job posting.
Break the visibility loop and every downstream revenue event gets dimmer. Not immediately. Not all at once. Just steadily, until the maintainer who used to fund their work on 40 hours of consulting a month is now doing it on 12, and then on side evenings, and then not at all.
One maintainer of a widely used Python package told us their GitHub Sponsors income dropped by just under half between early 2023 and late 2025, while their download counts rose. More users, less money. That is the whole story of the current crisis in one sentence.
A forestry analogy maps on just as cleanly. Clear-cut a forest and the lumber yields are incredible the first year. The ecosystem takes decades to return, if it returns at all. The hydrology changes. The soil thins. The pollinators disappear.
The Training Data Feedback Loop
Here is where it gets genuinely weird.
Modern coding assistants were trained, in part, on Stack Overflow, GitHub, public documentation sites, and the millions of blog posts humans wrote when they learned something new. That corpus exists because, for twenty years, the incentives to write publicly about code were at least marginally aligned with the incentives to practice publicly. If you solved something hard, you wrote it up. If you wrote it up well, people linked to it. Links meant reputation, which meant jobs, talks, book deals.
That alignment is breaking. The writer is no longer the only beneficiary of the write-up. The model is. And the model serves future askers directly, without routing them back to the writer. The feedback that used to flow from reader to writer is now mostly absorbed into the weights and never re-emitted as attention.
So here is the compounding problem. If future models train on the web of 2027, they will be training on a web with less new human programming writing than the web of 2022. Fewer Stack Overflow answers. Fewer indie blog posts, because the blogger's reward curve is flatter. What remains will be a shrinking pool of originals, plus an exploding pool of AI-authored content derived from those originals.
Researchers call this model collapse. The statistics get tighter and tighter around the mean. The tails disappear. The rare, weird, brilliant edge cases, which are exactly where novel engineering lives, get smoothed into nothing. Imagine a photocopier copying its own copies for a thousand generations. That is roughly where we are heading if the human contribution to training data asymptotes toward zero.
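The photocopier dynamic can be demonstrated in a few lines. The sketch below is a toy, not a claim about any real model: it repeatedly fits a Gaussian to a small sample of its own output and resamples. The spread collapses toward zero across generations, which is the one-dimensional shadow of model collapse:

```python
import random
import statistics

def collapse(generations=200, n=10, mu=0.0, sigma=1.0, seed=42):
    """Toy model-collapse simulation: each generation is trained
    (fit) only on a small sample drawn from the previous generation.
    Returns the history of the fitted standard deviation."""
    random.seed(seed)
    history = [sigma]
    for _ in range(generations):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(sample)      # refit the mean to our own output
        sigma = statistics.stdev(sample)   # refit the spread to our own output
        history.append(sigma)
    return history

hist = collapse()
# The spread shrinks by orders of magnitude: the tails are gone.
print(f"gen 0 std: {hist[0]:.3f}, gen 200 std: {hist[-1]:.2e}")
```

The rare tail values are exactly what a small sample misses first, so each generation's estimate of the distribution is narrower than the last. That is the photocopier copying its own copies.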
The groundwater metaphor is the one we keep coming back to. Aquifers can be pumped indefinitely, as long as the pumping rate is less than the rate at which rain replaces the water. Pump faster than that, and for a while, nothing bad happens. The well still fills. The crops still grow. Then one summer the well goes dry, and the damage turns out to have been cumulative and invisible for a decade. Nobody notices the exact moment when pumping exceeds recharge. They just notice the day the bucket comes up empty.
We are pumping the open source aquifer faster than the rain replaces it. That is the whole argument in a sentence.
The Central European University Paper
The most rigorous version of this argument we have found comes from a working paper circulated quietly in late 2025 by researchers affiliated with Central European University and the Kiel Institute for the World Economy. The authors of the paper, whose names we will leave slightly fuzzy because the work is still in peer review, set out to answer a deceptively simple question: does a library's popularity in AI training data predict its decline in human contributor activity over the subsequent 18 months?
They say yes.
Their method was to score roughly 4,000 open source projects on two axes. The first was "training saturation," a proxy for how heavily a project's documentation and Stack Overflow answers were represented in the corpora used to train frontier coding models. The second was contributor health, a composite of commit velocity, new contributor arrivals, issue response time, and external sponsorship.
The correlation was negative and stubborn. Projects heavily represented in training data saw contributor health decline faster than projects that were not. The researchers were careful to flag the usual confounds. Heavily trained-on projects tend to be older. But even after controlling for age, ecosystem, and corporate backing, the effect persisted. Their most conservative specification still showed a measurable drag on contributor activity that lined up with a project's training saturation score.
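To make the method concrete, here is a minimal sketch of the kind of analysis the paper describes, run on synthetic data. The -0.5 slope and the noise level are invented purely to illustrate the shape of the finding; they are not the paper's estimates:

```python
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic stand-in for the paper's dataset: ~4,000 projects, each with a
# training-saturation score and an 18-month change in contributor health.
# The negative slope is assumed here to mimic the reported effect.
random.seed(0)
saturation = [random.random() for _ in range(4000)]
health_change = [-0.5 * s + random.gauss(0, 0.3) for s in saturation]

r = pearson(saturation, health_change)
print(f"correlation: {r:.2f}")  # negative on this synthetic data
```

The real work, of course, is in the controls for age, ecosystem, and corporate backing, which a toy like this does not attempt.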
The paper landed with a small splash. You can read the preprint as the arXiv paper Vibe Coding Kills Open Source, and Hackaday covered it under the headline "How Vibe Coding Is Killing Open Source." It is worth saying: the researchers did not claim vibe coding is the only cause of OSS fatigue. They claimed it is a measurable accelerant. That is a more modest argument, and a harder one to dismiss.
We talked to a maintainer of a mid-size JavaScript library who read the paper and said it matched what he was feeling but could not name. His downloads had grown by almost 3x over the same window that his number of new contributors halved. "It is not that nobody cares about the library," he told us. "It is that the people who would have cared now have a model that answers their questions for them before they ever needed to come here."
This is the point at which we started to believe the thesis was not a moral panic.
What Conscientious Vibe Coders Can Actually Do
Enough diagnosis. Let us talk about what a person who loves vibe coding and also loves the libraries it runs on can actually do this week. Not virtue signals. Concrete moves.
We are not going to tell you to stop using Claude Code or Cursor. We use them daily. The goal is not abstinence. It is reciprocity. Vibe coding extracts value from the commons. The question is how to put some of it back.
First, manually visit the documentation page of at least one library your session used, even if your assistant already wrote the code. If you shipped a feature on top of Zod today, click through to the Zod docs. Read a page. Let the analytics beacon fire. You are literally doing nothing except registering your presence as a human reader, and that signal matters more than it sounds.
Second, sponsor one library you use through GitHub Sponsors for five dollars a month. Five. Not fifty. Pick the most invisible dependency in your package.json, the one you had never heard of until a model pulled it in, and kick in the cost of a coffee. The threshold is the enemy. You are teaching yourself the muscle of paying for the commons.
Third, file a real issue or a small PR on a library you actually used this month. Not a documentation nit for the sake of a contributor streak. A real, observed bug. One per quarter is enough. The purpose is to be visible to the maintainer as a living user rather than an anonymous download count.
Fourth, when you write or post about something your assistant helped you build, credit the libraries by name and link to them. Linking is the oldest form of open source economics. Those votes still drive job postings, conference speaker slots, and sponsorship conversations. Linking a dependency is a 30-second act with a measurable downstream effect.
Fifth, sponsor the person when you can, not just the org. A lot of OSS revenue is routed through foundations that absorb the money into shared costs. For single-maintainer packages, send the money directly to the maintainer's personal sponsors page. The ratio of "this kept me going" to dollar amount is shockingly high at the individual level.
Sixth, when you are using an AI assistant with a new library, occasionally ask it to cite the specific docs page it is pulling from, then open that page. You will catch hallucinations. You will also send a signal. You will sometimes learn something the assistant did not summarize well.
None of these are heroic. If even 10 percent of working vibe coders adopted three of them, the numbers in the Central European University paper would almost certainly move. For related thinking on how individual practice translates into ecosystem effects, our piece everyone can code, nobody can ship covers why individual shipping discipline is also an ecosystem-level good.
The Structural Fix
Individual conscientiousness will not be enough. The extraction happens at the tool layer, which means the fix also has to happen at the tool layer. The companies selling coding assistants are currently monetizing a resource they do not pay for. That is not a stable equilibrium, and the people running those companies know it. What is missing is a concrete menu of features. Here is ours.
First, attribution telemetry as a first-class feature. When a coding assistant pulls from documentation, an issue thread, a GitHub README, or a Stack Overflow answer, it should know that it did so, and it should report it. An aggregate, privacy-safe ping back to the origin every time a piece of content is used to generate output. Tailwind would be able to tell, at the end of the quarter, that Claude Code served their docs content 4.3 million times even though only 400,000 of those resolved into an actual visit. That visibility alone changes the negotiation. Right now the extraction is invisible, and what is invisible is unpriceable.
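No such telemetry exists today, so the sketch below is purely hypothetical: a client-side ledger that keeps only (origin, count) pairs and flushes an aggregate payload, with no user identity or query content ever included:

```python
from collections import Counter

class AttributionLedger:
    """Hypothetical privacy-safe attribution ledger for a coding assistant.
    Only aggregate (origin URL, count) pairs are ever kept or reported."""

    def __init__(self):
        self.counts = Counter()

    def record(self, origin_url: str):
        # Called whenever generated output drew on a given docs page or answer.
        self.counts[origin_url] += 1

    def flush(self):
        # Returns the aggregate payload a real client might POST to each
        # origin at the end of a reporting window, then resets the ledger.
        payload = dict(self.counts)
        self.counts.clear()
        return payload

ledger = AttributionLedger()
ledger.record("https://tailwindcss.com/docs/installation")
ledger.record("https://tailwindcss.com/docs/installation")
ledger.record("https://zod.dev/")
print(ledger.flush())
```

The hard parts in practice are knowing which sources actually influenced an output and agreeing on a reporting endpoint, but the data shape itself is this simple.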
Second, a baked-in sponsorship layer inside the coding assistant. When a model pulls in a library, the assistant should surface a "this project accepts sponsorship, would you like to contribute" prompt at an ambient, non-annoying cadence. Not a paywall. Closer to the way modern OS-level app stores surface in-app purchases: present, contextual, ignorable. The tools already know which libraries you used. They already know your willingness to pay is probably higher right after you shipped than at any other moment. Meet the user at the moment of gratitude.
Third, a revenue-share pool funded by the assistant vendors themselves. This is the ambitious one, so let us be specific. Pick a number: one percent of coding assistant subscription revenue, routed into a transparent pool, distributed proportionally to the OSS projects the tool actually used across its userbase that month. A tool doing 200 million dollars in ARR would pay out 2 million dollars a year into the commons. That amount, distributed across even a few thousand maintainers, is life-changing at the individual level while being rounding-error at the vendor level. It also gives the vendor a marketing story they do not currently have: "We pay the libraries our models run on" is a line that would move enterprise deals.
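The pool mechanics are simple enough to write down. The one percent share and the 200 million dollar ARR are the essay's illustrative numbers, not any vendor's actual figures:

```python
def revenue_share_pool(arr_dollars, usage_counts, share=0.01):
    """Split `share` of annual recurring revenue across projects,
    proportional to how often the assistant's userbase used each one."""
    pool = arr_dollars * share
    total = sum(usage_counts.values())
    return {project: pool * n / total for project, n in usage_counts.items()}

# Illustrative numbers only: $200M ARR, two projects, 3:1 usage ratio.
shares = revenue_share_pool(200_000_000, {"zod": 3, "left-pad": 1})
print(shares)  # {'zod': 1500000.0, 'left-pad': 500000.0}
```

The distribution key (raw usage counts here) is the contested part; the arithmetic is not.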
Are there other structural fixes? Sure. Content licensing deals with documentation sites. Mandatory training-data disclosure. A small, predictable per-token "open source levy" that gets split out like sales tax. We are not prescriptive about which mechanism wins. We are prescriptive that some mechanism has to. The current arrangement, where tool makers take the output of a twenty-year volunteer project and bill users for delivering it faster, is not a business model. It is a grace period. Grace periods always end.
If you want to see how this connects to the broader set of costs vibe coding is externalizing, we wrote about the operational side in vibe coding security rules. Security is one externalized cost. OSS sustainability is another. They rhyme more than most people have noticed.
The Uncomfortable Question
We are going to end with a question instead of an answer, because we do not have the answer.
Do we owe the libraries we never read?
That is the thing that has been sitting under this entire essay. When you use Claude Code to write a Next.js app, your assistant pulled on the work of roughly six hundred npm packages, maintained by something like two thousand humans, most of whom you will never meet. You did not read their code. You did not read their docs. You did not even really know they were there. The assistant handled all of it. The humans in the loop were never in your loop.
In a normal transaction, we would say you do not owe them anything, because you did not transact with them. You transacted with the assistant. But the assistant only existed because those humans did the unpaid work first, and it is now competing, indirectly, with their ability to do it again.
This is not a clean ethical question. The stonemasons who built the cathedrals of Europe were paid, but they did not own the cathedral, and nobody thinks tourists today owe their descendants a royalty. The open source version is messier, because the cathedral is still being built, the stonemasons are still alive, and the postcard seller has taken up residence in the nave.
Maybe this whole thesis is wrong. The optimistic case is real and we want to name it. Maybe the tools will get so good at extracting value that they will start sharing it, voluntarily or by regulation. Maybe new funding models will emerge that we cannot yet see. Maybe the pollinators come back faster than the ecologists predict. We genuinely hope so.
But right now, today, in April 2026, Tailwind's docs traffic is down 40 percent and their revenue is down 80 and Stack Overflow is bleeding contributors and the aquifer is lower than it was when we started writing this piece two weeks ago.
The question is not whether vibe coding is good. We think it is. We use it daily. The question is whether the practice can learn to pay the commons it grew from before the commons stops being able to grow anything at all.
We do not know. Neither, we suspect, do you. But the next time you ship a feature at three in the afternoon with an assistant that has never once linked you back to a docs page, ask yourself, quietly, how you want this to end.
We will be sitting with that question for a while. Come sit with us.