Aiaiaiaiaiaiaiaiai

No, a cat didn’t just step on my keyboard.
There is a lot about the hottest topic in (every) town that is just… confusing.
Every single product out there has to have AI in it. Even a toothbrush. This must mean that everyone wants it and customers are super eager to pay extra for this feature. Right? Right…? Well, the research shows otherwise. When a product is marketed using AI as a selling point, the customers aren’t excited to buy it. Apparently mentioning “artificial intelligence” makes people suspicious. So why do the executives insist on this line of advertising? Are they really so out of touch?
From my own experience (anecdotal, I know), for every person enthusiastic about the topic, there seems to be a person who claims AI is useless. But the majority of people I talk to don’t have strong opinions at all. When forced to talk about it, they just repeat what we all read in the news. They dabble. They fool around with chatbots. How is that for a revolution that’s supposed to be changing everyone’s lives?
In the area closest to my heart - software - the research shows a contradiction. Senior software engineers believe LLMs are making them faster, but in fact the opposite is true. The DORA report from last year corroborates this observation. It appears that individuals get a lot of value from using LLMs, but this doesn’t translate to the whole team, let alone the whole business. Now, there is something to be said about automating a part of the process that is not a bottleneck - it makes the actual bottlenecks worse. I believe using AI for writing code firmly falls into that category. But why don’t teams use this clearly powerful technology to help with typical process bottlenecks, like testing, collecting requirements, or gathering feedback from customers? And if they do, why don’t they scream about it from the rooftops?
I’ve observed another puzzling thing. I’m in this interesting position of frequently talking to experts in domains that don’t often mix. When I talk to marketers about AI, many are somewhat dismissive. I hear that LLMs can help you with simple things, like maybe writing, but if you want to do marketing well, you cannot rely on AI. I know what they’re talking about, I’ve seen the attempts.
When I talk to professional writers, they are somewhat dismissive as well. I hear that LLMs can help you with something that requires no nuance, like maybe programming, but if you are serious about writing, you cannot rely on AI. I also understand this point of view, I’ve witnessed the slop myself.
When I talk to experienced software programmers, they often get rather snarky about AI. I hear that LLMs can help you with basic things, or maybe some of that creative stuff like painting, but if you want to write good software, you cannot rely on AI. Right, I understand that very well, I’ve seen the problematic results of vibe coding too many times.
When I talk to artists, they are obviously outraged about the plagiarism, but dig a little deeper and they are dismissive too. I hear that LLMs can help you with more mundane endeavors appealing to the masses, like marketing, but… wait a second… we just made a full circle.
When it comes to relying on AI to do their jobs, it’s like every expert is saying “not me” and pointing to those other experts, Spider-Man-pointing-at-Spider-Man meme style.
At the same time, stay-at-home moms love this technology; it helps them with dreadful meal planning. People who like their vacations busy love it too; it creates efficient itineraries, saving time and money. Students love their LLMs; they write all those boring essays for them. People at work are ecstatic: AI converts a bunch of bullet points into full-prose emails and documents for their colleagues (who then use said AI to summarize and distill it back into bullet points, hopefully similar enough to the ones we started with).
If you’re paying attention, a pattern starts to emerge.
People almost universally report great experiences with LLMs for things where the outcome doesn’t matter too much and they’re willing to settle for an average, or even slightly below average, result, as long as they get it fast.
These aren’t the groundbreaking innovations. They definitely aren’t the “next level”.
Floor or ceiling?
If you think about it, it makes sense. The currently hyped AI technology works by finding commonalities, clusters of things that are similar in context. It smears and smudges and mixes, based on a lot of very varied text. That’s why the default output of any LLM chatbot sounds so generic and bland. It’s an averaging machine. Consequently, the results it returns are, at best, average. If we add some hallucinations and imperfect prompts, the day to day results are slightly below average. If you’re an expert or working on something that requires a lot of nuance, slightly below average output is just not good enough. It may even seem outrageously bad. But if you’re inexperienced with very little knowledge on the topic, slightly below average is so much more than what you have. In fact, getting from zero to slightly below average in mere seconds is an amazing result!
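To make the “averaging machine” intuition concrete, here is a deliberately crude toy sketch. It is nothing like a real transformer - no neural network, no attention, and the tiny corpus is made up - it just counts which word most often follows which and always picks the most common continuation, i.e. the most “average” path through the training text.

```python
from collections import Counter, defaultdict

# Toy "model": count which word follows which in a tiny made-up corpus,
# then always emit the most common continuation. Not how LLMs actually
# work - just an illustration of why averaging over varied text is bland.
corpus = (
    "the product is fine . the product is great . "
    "the product is fine . the product is okay . the launch was fine ."
).split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        if word not in follow:
            break
        word = follow[word].most_common(1)[0][0]  # always the most "average" choice
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the product is fine . the product" - the most generic path
```

Real models are incomparably more capable, but the directional point stands: when you optimize for the most likely continuation over a huge, varied pile of text, generic is the default.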
If you’re an expert in something, it’s really easy to be dismissive of all these “simple” tasks being accomplished with a slightly worse than average result. But being dismissive makes us overlook something very important and, frankly, rather awesome. Aside from giving us an unprecedented ability to automate waste at scale, the current iteration of AI is a tool for raising the floor, the minimum bar. It enables everyone to become somewhat passably proficient in a wide array of topics. We no longer need to spend months or years on learning the basics, we can get there instantly.
This was observed in a study published earlier this year, where the researchers noted that “areas with lower educational attainment showed somewhat higher LLM adoption rates” and “younger, less experienced workers may be more likely to use ChatGPT”.
So, looking from the perspective of large groups, we’re talking about raising the floor rather than the ceiling of our collective abilities. That’s really amazing. And that’s where I see a problem.
The current iteration of AI - the one using large language models, LLMs (I'm not talking about the much older and more developed discipline of machine learning) - is being sold to us as a huge breakthrough, a marvel of technology that will usher in a new era of humanity, with the subtext of prosperity, health, and world peace. It is implied that AI is raising the ceiling, making us able to collectively achieve more than we could have without it. This is more than just hype. It’s completely distorting the narrative around what the technology is capable of.
This is where, I believe, a lot of the discrepancies around the topic originate. We have overwhelming evidence that AI is making individuals better in areas where they’re not experts, or which don’t require expertise to begin with. At the same time we are bombarded with suggestions that AI enables things that weren’t possible before - which may still hold on an individual level, but the implication is that it applies to all of us taken collectively, and that simply isn’t true. We have no evidence whatsoever that AI is raising the ceiling, the upper limit of what groups of people can achieve. I’m not greedy, so I’m rather content with AI as it really is. I think raising the floor is great and will benefit humanity in the long term. But you can’t really get rich quick from things of this nature. That’s why the messaging around it is what it is.
Unicorns gonna unicorn
What we’re looking at is a typical Silicon Valley startup story arc playing out. It can all be traced to a single company - OpenAI. While they started as a non-profit intending to innovate, right now they are in full unicorn-peacock mode. They took a lot of investors’ money, and the expectation is that this will bring a 10x return (ok, I’m simplifying a bit here; the expected return varies by stage, from around 100x down to 5x). That is very hard to pull off if you’re selling something that produces slightly below average outputs very fast, for things that don’t matter much. The investors are truly expecting a unicorn. This can normally be achieved in a few ways:
1. The company becomes very profitable in a short amount of time
2. The company gets sold at a price that’s 10x what the investors put in
3. The next round of investors (this means the general public if we’re talking about the IPO) believes the company is now worth 10x more than what it was at the last round of investment.
The first way requires a lot of hard work and a lot of luck. It almost never happens. The most common way to achieve that 10x investor return is through (2) and (3). In practice, both of these mean that the company has to convince future investors that they’re onto something big, with potential so immense that it will revolutionize everything. See how that fits the current messaging around AI?
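For a sense of scale, here is a back-of-the-envelope sketch of that math. Only the 10x multiple comes from the argument above; the dollar figures are invented purely for illustration.

```python
# Hypothetical venture math: the 10x multiple is from the text above,
# the dollar amounts are made up for illustration only.
invested = 10_000_000_000           # say, $10B raised across all rounds
target_multiple = 10                # the return investors are counting on
required_exit = invested * target_multiple

print(f"Required exit (IPO or acquisition): ~${required_exit / 1e9:.0f}B")
# -> Required exit (IPO or acquisition): ~$100B
# "Slightly below average, but fast" doesn't command that price tag;
# "this will revolutionize everything" just might.
```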
All those executives cramming AI into their products even against their customers’ wishes aren’t doing it for the customers. They’re doing it for the shareholders. That’s the only thing that makes sense. Yes, our newest toothbrush is truly revolutionary, it’s a next-level genius, now give us more money! If I had an opportunity to make a good impression on the shareholders by simply riding on a message that some other company keeps pouring millions into, I would.
Companies can get away with claims that AI is raising the ceiling, even though we have zero evidence for it so far, because how the technology works seems very murky to the average person - and to plenty of technical people as well. This is further muddied by the fact that all the experts in data science seem to insist on using new words for old stuff or old words for new stuff. And humans (this includes the executives) have this feature where we fill the unknowns with our hopes and fears. Instead of embracing LLMs for what they are - a great tool for getting from scratch to a bare-minimum outcome really fast - we take that experience and extrapolate it into areas that truly require expertise or higher than average outcomes. That extrapolation is highly encouraged by companies that need to convince their investors about the potential of this new, very expensive technology.
And expensive it is. Looking at the economics of AI tech, there is much to be confused about.
Unit economics
It costs a lot to create a model (the cheapest one is supposedly “only” $6 mil). You need specialized hardware, and you need a huge amount of labeled and de-noised data. Once you have a model, you still need specialized hardware (or a lot of patience) to run it. This doesn’t seem that bad on the surface, until we look closer at the unit economics.
For every $1 of revenue, OpenAI pays $2.25 in costs. So far, revenue grows almost in lockstep with costs. Famously, they're bleeding money every time a user responds to ChatGPT with “Thank you!”. And most people are happy to use ChatGPT only as long as it’s free, or at least extremely cheap. Without some sort of technological breakthrough, this company is never going to be profitable.
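To see why “revenue grows in lockstep with costs” is such a problem, here is a tiny sketch. The $2.25-per-$1 ratio is the one quoted above; the revenue figure is hypothetical.

```python
# Unit-economics sketch: the cost ratio is from the text above,
# the revenue figure is hypothetical.
revenue = 1_000_000_000             # say, $1B of annual revenue
cost_per_revenue_dollar = 2.25      # $2.25 of cost for every $1 of revenue

costs = revenue * cost_per_revenue_dollar
margin = (revenue - costs) / revenue

print(f"Costs:  ${costs / 1e9:.2f}B")   # -> Costs:  $2.25B
print(f"Margin: {margin:.0%}")          # -> Margin: -125%
# If costs scale with revenue, growth alone never closes this gap;
# only a change in the cost structure (a technological breakthrough) can.
```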
But let’s remember, OpenAI is playing the standard Silicon Valley game, where the goal is the IPO or acquisition rather than profitability. Their focus is to burn investors' money to secure a significant market share (well, a monopoly really), then slowly figure out how to take the money out - which can take years after the IPO and almost certainly involves enshittification, because that’s what you do once you have a monopoly. In this situation, the only ingredient necessary for the company’s success is convincing a lot of people that their product is awesome. It doesn’t even need to be awesome, as long as it sort of feels that way and a lot of people sign up.
Ok, so OpenAI has to be burning money to sustain the hype, because that’s the best currently known Silicon Valley startup playbook. Other companies get onto the hype train because that’s what the investors and shareholders are riding. That's why there is a lot of marketing and PR effort, backed by a lot of money to paint a picture of AI as something akin to magic, while all it's good for is giving us a slightly below average output really fast without requiring prior experience.
OpenAI is not the only company in this game though. There are other companies that should be well positioned to take over the market share once the tech matures enough to be profitable. And their behavior seems to be consistent with that plan. If I were leading OpenAI, I would be rather worried that as soon as we figured out how to make our margins work, other companies would find the same optimizations. I'd be worried there's nothing letting us keep that market share. I also wouldn't be very motivated to pursue innovation leading to cost reduction, knowing full well that everyone else is in a better position to make use of it. At the same time, investors need to believe that the 10x return is possible. This is a tough position to be in.
You would think that the way to secure the dominant market position would be to come up with patentable advancements in the technology itself, have your models be so good that no one wants to use anything else. It would be logical to hunker down, pour more money into great teams, start releasing experiments, innovate on the product. That’s how OpenAI started. But they don’t seem to operate that way anymore. Being (or aspiring to be) a unicorn doesn’t permit that. It would require sacrificing some of the hype-feeding budget and admitting that the tech is not yet ready. So who is truly innovating?
Innovator’s dilemma
Microsoft (which could easily just gobble up OpenAI if they chose to), Google, and AWS (Amazon) seem to be playing the “let’s not get left behind” game rather than innovating. That’s understandable: they have huge existing user bases they can't alienate (too much). They already have a market share they stand to lose if they do something too risky. But they cannot stand still either, or they will lose market share to the innovators. At the very least they need to reassure their shareholders that they won’t become irrelevant, so they spend a lot of effort on managing shareholder perceptions.
If you think the innovators here are the startups, the likes of OpenAI, think again. OpenAI used to innovate, but now they’re just a money making machine, that’s what the investors expect at Series F. There are a bunch of other, “smaller” companies (by money invested) that could be innovating. But it’s the same investors funding them. They have the same expectations. And we can see those expectations very clearly in the definition of Artificial General Intelligence (AGI) that was agreed upon by Microsoft and OpenAI - it needs to make a lot of money, and do it quickly.
OpenAI, as well as the majority of well-funded startups playing in the AI space, are not high-tech businesses. They are financial instruments. Their main purpose is securing a big fat return for the investors, not innovation. Innovation is what happens before an idea becomes interesting to the investors. And none of the AI-based startups seem to have anything other than a slight variation on a large language model. We have reached the top of the S-curve here; now it’s time to grab market share, then monetize.
That’s why I’m very skeptical when people say “but it will get better!” People seem to trust that the gap between the reality (LLMs raising the floor, giving us slightly below average outcomes) and the promise (LLMs raising the ceiling, replacing the experts) will somehow magically get closed, as if it’s inevitable. Yet I see no market forces that would produce that outcome. I do see plenty of market forces that could sink the whole thing, though, throwing the baby out with the bathwater.
But it will get better, right? Right…?
If I got 5 rappen every time someone confidently says “but AI will only get better!”, I would… actually I don't know what I would do with that extra money, I don't want a yacht. The belief that something about the technology will suddenly transform it from a floor-raising one (letting non-experts achieve passable outcomes that would normally require some expertise) into a ceiling-raising one (so we can replace the experts altogether) is a product of the sustained marketing campaign by AI companies. The fundamental nature of this tech is not going to change without some sort of major breakthrough. And no one is currently investing in such breakthroughs; instead, we're pouring money into managing perceptions.
I think it would be smart to embrace the current iteration of AI for what it is - a little helper that can give us a head start in places where we don’t need the expertise. Make that prototype faster, then throw it away without regrets. Create a first revision of a document, or better still - an outline. Auto-generate captions, or alt-text, then do a quick review (no more excuses that accessibility features are expensive). There are plenty of possibilities that may not be your next paradigm shift, but are little things that make everyone’s lives just a little easier, bit by bit.
Instead, drunk on the unicorn juice (ugh, this came out too naughty… Americans would call it “kool-aid”, I guess?), people believe that LLMs are miracle workers able to replicate experts’ work. This is regrettable. Because the technology is so good at giving a passable outcome (if you’re not an expert), and the people who benefit from it the most are not experts, they can’t tell the difference. In low-value areas it doesn’t matter; slightly below average may be good enough. But we see it being applied in cases that need real expertise.
Executives who don’t understand the nuances of the work being done are forcing their employees to use AI, or else… (if AI were useful at work, people would clamor to adopt it, wouldn’t they?) We see some wonderful “innovations”, like fake case-law citations put before the courts, non-existent medical studies used to decide on FDA drug approval, or a production database wiped and replaced with fake data by an AI agent. We’re at the point where people are paid a lot of money to fix mistakes caused by AI. I have myself witnessed junior engineers, about to do something really stupid, confidently arguing with their seasoned team lead that this is the way, because “ChatGPT says so!”
This really isn’t good… And things like this won’t stop as long as people are convinced that AI is ready, or will soon™ be, to do an expert’s work.
This certainly looks like a bubble is forming. And bubbles have one thing in common - they burst. What will be the shape of the fallout once this one pops? I have some ideas and they don’t all fill me with optimism. But this rant is already long enough, so I’m going to leave speculating about the future …for the future.