Vibe Coding, Startups, and the Illusion of Failure
I haven't written articles in a while, perhaps because my time has gone into experiments rather than writing. But recently I heard a thought that stuck with me, and I wanted to unpack it out loud.
On YouTube, I stumbled upon someone I hadn't listened to before. I subscribed almost immediately. He was talking about how there are too many so-called vibe coders these days. People who endlessly "launch startups," burn tokens, money, and time, and end up with hundreds of projects that nobody needs and that never took off. According to him, there used to be some natural filter: companies simply couldn't physically produce that much software, so they supposedly only released the best.
And this is where I felt internal resistance.
Yes, producing software used to be expensive. Yes, experiments cost a lot. But does that mean mistakes simply "didn't exist"? Or that the selection was quality-based?
More likely the opposite: there weren't fewer mistakes — we just didn't see them. They didn't make it to production, didn't enter the public space, didn't become GitHub repositories with pretty READMEs. It wasn't a quality filter; it was an access filter.
Modern science and engineering have long said the opposite: the more experiments, the higher the chance of a quality result. Now the number of experiments is growing exponentially, and that's normal. It's not a bug in the system; it's its evolution.
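A toy model makes this concrete (my illustration, with made-up numbers): assume each experiment independently succeeds with the same small probability p. Then the chance that at least one of n experiments succeeds is 1 - (1 - p)^n. At p = 5%, ten experiments give you roughly a 40% chance of a hit; fifty give you roughly 92%. Every individual attempt still almost certainly fails, but the portfolio doesn't.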
Yes, most of these experiments don't "take off." But the question is — should they?
What I agree with is that most people today build "startups" not from the market, but from themselves. They solve their own pains, their own interests, their own fantasies. I'm like that too.
I also sometimes catch myself telling friends:
"Oh, I launched another startup over the weekend."
But if I'm honest — that's not quite accurate.
I have one main project I've been working on for a long time: VideaMind. It didn't appear over a weekend, and it wasn't made just to check a box. I've applied to accelerators with it several times, rewritten it, rethought it, adjusted it. And now I build many pet projects around it, not as "yet more startups," but as infrastructure experiments.
And here, it seems to me, there's a substitution of concepts.
A pet project is not a startup.
A pet project is a form of thinking.
I build them the same way I build MVPs at my main job. There's no goal to "make money" there either. The market already exists. We improve processes: foreign trade, government workflow automation, document assistants, classification, code recognition, answering questions. These aren't startups. These are experiments within a clear boundary.
I treat my pet projects exactly the same way.
I don't need them for pitch decks. I need them for forming mental models.
In the last six months, I've learned more about AI than in the previous several years. And yes — largely thanks to that very vibe coding that irritates some people.
What many solve with something like OpenAI "out of the box," I've learned to solve with my own solutions. Not because that's more correct. But because it's important for me to understand the system from the inside.
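To make the contrast concrete, here's a minimal sketch (my illustration, not code from any of my projects): the same question-answering step behind one interface, once delegated to a hosted API and once built "from the inside." The class names and the toy keyword index are hypothetical; the hosted variant uses the standard OpenAI chat-completions call.

```python
# A minimal sketch of the distinction, not code from the article.
# `HostedAnswerer` uses the real OpenAI chat-completions API shape;
# `OwnAnswerer`, its toy keyword index, and the sample documents are
# all hypothetical, just to show the same task solved "from the inside".

from typing import Protocol


class Answerer(Protocol):
    def answer(self, question: str) -> str: ...


class HostedAnswerer:
    """The 'out of the box' route: hand the whole problem to a hosted model."""

    def __init__(self, client, model: str = "gpt-4o-mini"):
        self.client = client  # an openai.OpenAI() client, injected
        self.model = model

    def answer(self, question: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content


class OwnAnswerer:
    """The 'from the inside' route: you own retrieval, ranking, the final step."""

    def __init__(self, documents: list[str]):
        self.documents = documents

    def answer(self, question: str) -> str:
        # Naive keyword overlap instead of a vector index; the point is
        # that every stage is yours to inspect, measure, and replace.
        words = set(question.lower().split())
        best = max(self.documents, key=lambda d: len(words & set(d.lower().split())))
        return best


if __name__ == "__main__":
    docs = [
        "VideaMind indexes video and answers questions about its content.",
        "Pet projects are infrastructure experiments, not startups.",
    ]
    print(OwnAnswerer(docs).answer("What are pet projects?"))
```

The naive version will lose to the hosted one on quality, of course. But every stage of it is yours to measure and swap out, and that's exactly where the mental models come from.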
I think the main mistake is measuring everything solely by the metric of startup success.
If an experiment gave you new knowledge, a sharper mental model, or an understanding of the system from the inside, then it didn't fail.
It worked.
And perhaps, through dozens of such "unnecessary" pet projects, something will eventually emerge that the market actually needs. Not because you guessed right. But because you prepared yourself for that moment.
This article was created in hybrid human + AI format. I set the direction and theses, AI helped with the text, I edited and verified. Responsibility for the content is mine.