The world becomes a better place when I see headlines about the collapse of the "AI" bubble.
@the_roamer In some cases I've seen folks use #AIHype
「 When one user asked it to produce a map of the U.S. with all the states labeled, GPT-5 extruded a fantasyland, including states such as Tonnessee, Mississipo and West Wigina 」
「 A couple of days before Huang visited the White House, Palantir released a positive earnings report. By the end of the week, according to the Yahoo Finance database, the market was valuing the company at more than six hundred times its earnings from the past twelve months, and at about a hundred and thirty times its sales in that same time span. Even during the late nineties, figures like these would have raised eyebrows 」
https://www.newyorker.com/news/the-financial-page/is-the-ai-boom-turning-into-an-ai-bubble
MIT report: 95% of generative AI pilots at companies are failing | Fortune
「 While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows」
https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
1/ Back in the ‘90s, every company wanted a “.com” in their name. Today, it’s “AI-powered.” Hype drives valuations—$100B+ poured into AI in 2024, like the dot-com frenzy. But is it all hot air? #DotComBubble #AIHype
"What if generative AI isn’t God in the machine or vaporware? What if it’s just good enough, useful to many without being revolutionary? Right now, the models don’t think—they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap. What if they never stop hallucinating and never develop the kind of creative ingenuity that powers actual human intelligence?
The models being good enough doesn’t mean that the industry collapses overnight or that the technology is useless (though it could). The technology may still do an excellent job of making our educational system irrelevant, leaving a generation reliant on getting answers from a chatbot instead of thinking for themselves, without the promised advantage of a sentient bot that invents cancer cures.
Good enough has been keeping me up at night. Because good enough would likely mean that not enough people recognize what’s really being built—and what’s being sacrificed—until it’s too late. What if the real doomer scenario is that we pollute the internet and the planet, reorient our economy and leverage ourselves, outsource big chunks of our minds, realign our geopolitics and culture, and fight endlessly over a technology that never comes close to delivering on its grandest promises? What if we spend so much time waiting and arguing that we fail to marshal our energy toward addressing the problems that exist here and now? That would be a tragedy—the product of a mass delusion. What scares me the most about this scenario is that it’s the only one that doesn’t sound all that insane."
https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/
"Companies are betting on AI—yet nearly all enterprise pilots are stuck at the starting line.
The GenAI Divide: State of AI in Business 2025, a new report published by MIT’s NANDA initiative, reveals that while generative AI holds promise for enterprises, most initiatives to drive rapid revenue growth are falling flat.
Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.
(...)
[F]or 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows..."
https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
"Even if we manage to snap out of the AI bubble, we are never going to get these years back. I can only be left to wonder what could have been"
—@emi_cpl
Nonsense.
Stochastic (random) behavior that happens to resemble intentional behavior or sentience... is still random behavior. And the studies that many of these instances point to are cooked. Until someone shares their training data, there is no reason to believe any demo.
#AIHype #Doomvertising to maintain the #AIBubble
"Some AI researchers say that the overwhelming focus on scaling large language models and transformers — the architecture underpinning the technology which was created by Google in 2016 — has itself had a limiting effect, coming at the expense of other approaches.
“We are entering a phase of diminishing return with pure LLMs trained with text,” says Yann LeCun, Meta’s chief scientist, who is considered one of the “godfathers” of modern AI. “But we are definitely not hitting a ceiling with deep-learning-based AI systems trained to understand the real world through video and other modalities.”
These so-called world models are trained on elements of the physical world beyond language, and are able to plan, reason and have persistent memory. The new architecture could yet drive forward progress in self-driving cars, robotics or even sophisticated AI assistants.
“There are huge areas for improvement . . . but we need new strategies to get [there],” says Joelle Pineau, the former Meta AI research lead, now chief AI officer at start-up Cohere. “Simply continuing to add compute and targeting theoretical AGI won’t be enough.”"
https://www.ft.com/content/d01290c9-cc92-4c1f-bd70-ac332cd40f94
I decided to try Claude Code because I find the model helpful for answering some of my questions and for creating simple code modules from scratch. So I got myself some Anthropic credits and set up the CLI tool. Then I burned through 5 USD of credits to get a half-baked mess I could have written myself in less time. No, thanks. #aihype #claude #ai #llm
LLMs are going to end up like the Segway: “it will change transport and cities worldwide forever!”
Time passes…
“Erm ok it will be useful for mall cops and tourists only.”
“Mostly.”
An excellent, though long, essay on what's really happening with spending on building out data centers for AI, and why there is no hope of ever paying it back.
#AIHype
https://www.wheresyoured.at/ai-is-a-money-trap/?ref=ed-zitrons-wheres-your-ed-at-newsletter
We don't need more. We need less.
Every week: A new framework.
A new "layer".
A new AI wrapper.
A new YAML format to abstract what used to be a shell script.
And then we wonder:
"Why is our software hard to debug?"
"Why do our builds break randomly?"
"Why is onboarding a 6-month journey through tribal folklore?"
I once said I write bug-free software that can be finished.
People laughed, especially product people.
Not because it's wrong.
But because they’ve forgotten it's possible.
We build complexity on top of confusion:
A + B becomes C.
C + D becomes E.
Now E is broken, so we create a new layer, but nobody knows how A or B worked in the first place. Take HTML/JavaScript: we leave it as it is and just keep adding layers around it.
Take XML.
Everyone says it's ugly.
But you could validate it automatically, generate diagrams, enforce structure.
Now we're parsing YAML with 7 linters and still can't tell if a space is a bug.
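A minimal sketch of that contrast, in Python, assuming PyYAML and lxml are installed; the schema and snippets are made-up illustrations, not from any real config:

```python
# Hypothetical example: schema-validated XML vs. whitespace-sensitive YAML.
from io import BytesIO

import yaml               # PyYAML
from lxml import etree    # lxml

# XML: a schema enforces structure, so a wrong document fails loudly.
XSD = b"""<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="service">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="port" type="xs:integer"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""
schema = etree.XMLSchema(etree.parse(BytesIO(XSD)))

good = etree.parse(BytesIO(b"<service><port>8080</port></service>"))
bad = etree.parse(BytesIO(b"<service><port>eighty</port></service>"))
print(schema.validate(good))  # True: structure and types checked for us
print(schema.validate(bad))   # False: the tooling catches the mistake

# YAML: both snippets parse without error; only the indentation differs.
nested = yaml.safe_load("service:\n  port: 8080")
flat = yaml.safe_load("service:\nport: 8080")
print(nested)  # {'service': {'port': 8080}}
print(flat)    # {'service': None, 'port': 8080}  (one missing indent, new shape)
```

Both YAML variants are "valid", so nothing fails at parse time; whether the second one is a bug depends entirely on what the consumer expected.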
Take Gradle.
You can define catalogues, versioning, and settings, but can't update a dependency without reading 3 blogs and sacrificing a goat.
This is called "developer experience" now?
Take Spring Boot.
I wouldn't trust an airplane powered by Spring Boot or any Java framework.
Too many CVEs. Too much magic. Too little control.
We don't need "smarter" tools.
We need dumber, boring, reliable defaults.
Start boring.
Start small.
Then only change the 1% that needs to be fast, clever, or shiny.
You'll rarely even reach that point.
Everyone says, "Y is faster and more performant than X," but no one has ever reached the limits of X. Why should I care? Meanwhile, we use "performant" AI.
Real engineering is not chasing hype.
It's understanding the system so deeply that you no longer need most of it.
We've replaced curiosity with cargo cults.
We've replaced learning with LLM prompting.
And somehow, we're surprised when AI loses to a 1970s Atari in a chess game.
At least the Atari understood its own memory.
Simplicity = less maintenance = fewer bugs = happier teams.
We need less. Not more.
#devex #simplicity #softwareengineering #nocodependency #stopthehype #bugfree #springboot #gradle #xml #yamlhell #boringisgood #minimalism #AIhype #infrastructure #cleancode #pragmatism #java #NanoNative
Mystery AI Hype Theater 3000: you thought #vibecoding was bad? Now try a security analysis with a sprinkle of Dante's AIferno!
via @dair
https://peertube.dair-institute.org/w/m5Hb4QbhZWkssGu4kmWbgT