Are we in a software bubble?
There is something that has been bugging me for the past few months. Some kind of feeling that what I am hearing and reading is not what is truly happening. Yes, this is yet another blog post about (generative) AI. Hopefully one that contains something novel.
As I said: I feel like there is something brewing - something we are not talking about. This is not about founders being fraudulent or VC firms gaslighting the public, although I suspect there is some amount of that going on too. There has been a lot of talk about the “AI bubble” of late. Anecdotally, it feels like a majority of people in tech agree, or at least suspect, that there is one. Even a substantial number of folks who love LLMs and have all but stopped writing code by hand will admit that “something might pop” in the near future.
What I am wondering, however, is whether that is missing the mark. What if we’re not in an AI bubble - but a software bubble?
The great promises of AI
LLMs and the tools built on top of them are an obvious breakthrough in digital technology. Their ability to “understand” text is unparalleled, and using interactive dialogue with the end-user to research and understand topics is a great complement to reading the internet through Google. Especially since the years leading up to the release of ChatGPT saw an absolutely horrendous enshittification of what used to be a remarkable search engine.
The market hype responded accordingly. Nvidia, a company that essentially makes a single kind of hardware, rose to become the most valuable company in the world - and AI-native companies like Lovable are valued at ridiculous sums after only a single year of existence.
The writing on the wall is clear: at some point in the near future, LLMs/AI will create more value for society than we can possibly imagine. Or, that’s what the investments tell us anyway.
The AI sceptics
The sceptics will tell you that all of this is a bubble:
- LLMs are not as good as they appear. They just feel like it because they are generating text, which creates an “illusion of intelligence”.
- LLMs, while they can surely generate code, cannot generate good enough code at a large scale to be truly valuable.
- LLMs are a crutch that let people avoid learning.
You have heard these takes before. Public figures bearish on AI think that the LLMs will fail to live up to the hype, and that the markets will crash. This is nothing new.
But that is not what is bugging me. What I am starting to wonder is this: what if there is nothing wrong with the capabilities of the LLMs? What if we are simply unable to build anything useful with them?
Let me explain what I mean.
The trajectory
Think of the significant software you use today. Then think about all the software you used 10 years ago. How much has changed? For me, not a lot. For work, I use Slack, Google Drive/Docs, JetBrains IDEs for programming and Google Chrome for web browsing. I use Spotify for listening to music, and Netflix for streaming movies and TV shows. On my smartphone, I use Facebook Messenger, WhatsApp, Twitter and YouTube.
In 2016, literally all of that was the same. The only significant new pieces of software that have emerged in the past decade are the LLM tools.
Now compare that to the 10 years prior, between 2006 and 2016. In 2006, most of the apps listed either did not exist or were in their infancy. Smartphones had not yet hit the mainstream, and there were no app stores. Google had not yet released a web browser.
TikTok is an obvious omission from my list (since I don’t use it) that actually was released in the past 10 years. But with the evidence connecting social media like TikTok to severe mental health problems in kids, and with the US Congress going so far as to consider it a serious threat to national security, can we really call it useful?
If we look at the last 20 years, there seems to be a clearly visible plateau in the production of useful software. It appears that all our ideas for creating a better world using computers have stagnated. Famous software companies have either gone down the path of Microsoft and its laughably bad start with Windows 11, made extremely questionable choices like Apple with Liquid Glass, or, like Spotify, Netflix and Slack, arguably not changed a thing in 10 years.¹
Now, you might think that I am being unfair. During the first of those two decades, an entirely new computer format (the smartphone) hit the market, and massive online ecosystems such as the social media platforms were brought to life. It is only natural that significantly more software was built between 2006 and 2016 than in the decade after.
Also - LLMs and AI, you could say, are one of those new platforms. So just you wait: in 2036, you will have replaced all your software again.
But… does it look like we are headed that way?
AI for building new user experiences
Let us compare AI to the expansion of the internet or the release of the smartphone. The argument, then, would be that AI as a technology will enable entirely new ways to interact with computers, which will open the door to powerful, new ways to create value and improve the lives of people.
ChatGPT has been out for over three years now - and several of its open and closed competitors were not far behind. Have we seen that sweeping revolution in computer interactions spring to life?
In comparison, the iPhone was released in 2007 - and the App Store (including the tools to build your own apps) one year after that. In 2008, you had the very first opportunity to install custom apps on your phone - with apps like Spotify following in 2009. By 2012, it felt like everyday computer interactions had radically shifted to mobile. Social media had become something you did on your smartphone, you could send money to your friends and family on the go, and various chat apps had completely replaced SMS.
With the exception of the actual big chatbot companies (OpenAI, Google etc.) - where is all the amazing LLM-powered functionality? Virtually every company I see has tried - and for the most part, users despise their attempts. Microsoft is being dragged in various online spaces for its attempts to squeeze Copilot into everything. Even famous AI influencers (in the programming space) bemoan the use of AI in various products:
Every time I see an AI button in a UI somewhere I cringe and ignore it. Simultaneously I’m going nuts with my agents. Why is consumer AI so shit? https://t.co/6tVxCFCabR
— Armin Ronacher ⇌ (@mitsuhiko) January 11, 2026
I concede that the AI platforms themselves have successfully built useful products on top of these technologies. But so far, it does not appear to scale to the entire industry. Not in the way the smartphone or the internet did - unless you are a “prompt your way to a website” company like Lovable or Replit.
AI as a productivity tool
Another way people expect AI to revolutionize work is as a productivity tool. Especially as a means of generating code.
Sam Altman began his stardom as the face of AI by declaring his intent for AI to cure cancer and make novel discoveries in the field of physics. Despite these ambitious ideas, it looks like most frontier model companies spend the majority of their time shipping products dedicated to “agentic programming”. Tools like Claude Code have become so popular that even billionaire CEOs with business degrees seem to play around with them:
Watching Claude Code mass-murder backend tasks in seconds then spend 45 minutes misaligning a button...
— Sebastian Siemiatkowski (@klarnaseb) January 24, 2026
... has given me more insight into the frontend vs backend engineer wars than a decade in tech. I used to think frontend engineers were being dramatic...
😂😂😂
Last year was a bit of a watershed moment for LLM-assisted programming. With the release of Opus 4.5 and ChatGPT 5.2, a serious number of independent-minded software influencers changed their minds from “AI cannot write good production code” to “actually, AI writes most of my code now”.
To me it looks like the tide is turning. Maybe this is not just an astroturfed fad that will pass in a year or two. Maybe most people will truly write code using “Agents” (or whatever comes next) in the near future. Heck, what if the models get good enough to simply help all of us write code just as good as what we would write by hand - but at a significantly higher pace? If that happens, then the AI hypesters will have won, right?
But here is where I go back to where I started this article. Look at the direction we have been headed in the past decade. Will accelerating our productivity in said direction… actually lead to a lot of valuable software?
Even the most crazed, bullish AI influencers online are all about how “AI can write code as good as senior developers, but faster”. I rarely hear them claim, or expect, that LLMs will transcend human talent and surpass us when it comes to software quality. Given that LLMs are trained on all of our existing code, making their output significantly better than that code seems like a tricky problem to solve - if it is even solvable with the existing model architectures.
So if the wildest dreams of Dario Amodei come true, and we boost our code-writing productivity tenfold, are we certain we won’t just spew out even more irrelevant SaaS garbage? Even more blur animations in our Finder windows that cause even harder frame rate drops? Even more addictive, attention-hacking software that ruins the brains of our kids and makes them suicidal?
If AI is the productivity powerhouse that its proponents claim - do we actually know what we would do with it? Or has software development seriously stagnated in the past decade? Maybe Casey Muratori is right when he says it is ironic that we managed to build an AI that replicates our programming at a time when software development standards are at rock bottom.
Learnings from the internet and social media
While it is only tangential to my main point, I think it is fair to also compare the explosive rise of the AI platforms to that of social media. There is something to be said about how the past two decades have been shaped by social media platforms such as Instagram, Twitter, TikTok, Snapchat and YouTube.
The companies behind all these apps have made an insane amount of money, all carried by the underlying micro-targeted advertising business model. The market value they have created is enormous, as their software enables thousands of companies to use Google, YouTube, Facebook/Instagram and the rest to reach the perfect target audience for their products.
But there is an invisible tax that has been paid by the public, too. In the past decade, countless pieces of evidence have surfaced of the drawbacks of algorithmic social media feeds - from whistleblowers inside the companies to research linking social media use to lower life satisfaction and an erosion of trust in society.
As we speed-run our way to the AI dream world as imagined by the likes of Marc Andreessen, do we believe that the software we build today will help us reverse these trends? Or are we falling ever faster into a lonelier landscape filled with anime AI companions, robot-written blog posts and an ocean of auto-generated music on Spotify?
As I was typing this blog post, OpenAI literally announced their plan to insert ads into ChatGPT. Time seems to be a flat circle, and the internet still seems to have only a single kind of business model.

So why a bubble?
Let’s tie it all back to the title and what I mean by a software bubble. People talk about an “AI bubble” - telling us that the models cannot possibly become as good as the hypesters promise. But to me, that is missing the point. The investments and valuations are not based on how high the next generation of models will score on various benchmarks - but rather on what we would be able to build using them. And from what I can tell, most of the proposed usage comes down to software: either embedding it into our products or letting it build our products for us. Both of those rest on the assumption that software product development is in a good place, and that creating more along the same trajectory will generate some kind of human prosperity. I… am not sure I feel like that is the case.
Maybe there is something else here - some elephant in the room not talked about on Twitter. Maybe there are some obvious ways in which generative AI can be incorporated into weapons manufacturing, and all the investments will be justified as soon as someone lands a contract worth 5% of the yearly Pentagon budget.
If not - where will it all lead? What is the end game?
Conclusion
This was a mixed bag of what obviously looks like AI pessimism. I am not a doomer in the sense that I think AI will become super-intelligent and erase us from the planet. It feels more likely that it will become the next iteration of social media: carrying the likes of Lovable to fantasy-sized IPOs while risking making our children less educated and more miserable.
I am also filled with hope, however. Hope that this time, maybe it is more obvious to us what is happening. Maybe we will get tired of Suno-produced music and Clippy 2.0 early enough to reject them. Judging by the way people react to all the new AI-based features, maybe there is hope that we will start chasing truly valuable technology instead.
Peter Thiel once said that “We wanted flying cars, instead we got 140 characters”. This was a condensed version of his criticism that in the past half century, we have made laughably little progress in the “world of atoms”. Instead, everything has happened in the “world of bits”. I honestly could not agree more.
Maybe, if we are in a software bubble, and it bursts - this could actually be great for software development as a field. Come to think of it, in the aftermath of the dot-com bubble, the broken remains of the software industry managed to create a tremendous number of world-altering products: Wikipedia, Facebook, YouTube, Gmail, Git, Skype, Google Maps and World of Warcraft…
Could we become that good at building software yet again?
-
¹ Honestly - pretty based on their part.