Doom and Gloom
Why does this keep happening?
Here I am in my late 60s having the time of my life with AI. It’s a bit like Christmas every day. However, it’s not how most of the world sees AI.
Yesterday I read another “the sky is falling” article about the dangers of AI and what’s ahead. It’s all the usual empty statements: big changes ahead, we’ve never been here before, people are going to be out of work, and how dangerous it is that AI rests in so few hands. Variations of this show up constantly in places like The Atlantic, The New York Times opinion pages, and academic think pieces warning about “existential risk.” In most cases, these writers tend to lean left, which I find to be a curiosity all by itself. The loudest warnings about centralized power, inequality, and labor displacement almost always come from that direction.
I look at history and pattern recognition as my guide. You could take all these same statements and apply them to any leap in technology in the last fifty years. When the IBM PC hit store shelves in 1981, critics warned of job destruction and corporate control. When Microsoft standardized office software, people said it would eliminate clerical work. When ATMs spread across the country, bank teller jobs were supposed to vanish. Instead, banks hired more people because branches expanded and services grew.
In every instance, the world got better, not worse. The internet was supposed to destroy journalism. Instead, it created entirely new media ecosystems. The smartphone was supposed to isolate us. Instead, it enabled rideshare companies, mobile banking, and global micro-businesses. AI will put more people in jobs than it takes out. That’s the only constant in technological leaps.
Almost every sci-fi movie in the last twenty years sees the future with doom and gloom. The Terminator franchise, The Matrix, Ex Machina, I, Robot, Avengers: Age of Ultron. The machines either enslave us, outsmart us, or wipe us out. Even WALL-E paints a future of human atrophy. Try naming recent mainstream films that portray advanced AI as a net positive force for human flourishing. It’s a very short list, if it’s a list at all.
I was in my college internship when the first IBM PC hit store shelves, and I remember the hysteria very well. People thought middle management would disappear. They thought companies would automate decision-making and hollow out offices. What actually happened was productivity expanded and entirely new sectors emerged: IT services, cybersecurity, enterprise software, networking, cloud infrastructure. The labor shifted, but it didn’t evaporate.
I’ve written before about how it was the cell phone, then the internet, and so on, and in every case people were scared. Replace “AI” in today’s fear articles with “internet” circa 1995 or “PC” circa 1981 and you would barely have to change the paragraph structure.
Meanwhile, I’m using AI daily in ways that would have been impossible a few years ago. I use it to pressure-test strategic thinking. I use it to compare research perspectives in minutes instead of hours. At Hudson Cloud, even under NDA, I can say that AI is accelerating architectural planning, scenario modeling, and documentation review. It’s not replacing us. It’s making us sharper. It’s compressing time.
One of my doctors snapped at me about errors found in ChatGPT. I just smiled. I could tell it was scaring the poop out of him. He likely doesn’t realize that these systems improve continuously and rapidly. An error one week often becomes non-replicable the next. Yes, they still make mistakes. So do physicians. So do consultants. So do executives. The difference is that these systems iterate at machine speed.
I keep thinking about how that one doctor is going to be left behind. Those are the people who face real danger. Not because AI will replace them tomorrow, but because they are refusing to engage with the tool that will reshape their profession. If you wait five years and then try to catch up, you’re not catching up. You’re starting at zero in a world that has moved on.
While AI is killing off some task repetition in labor, I think of all the new companies it now enables. A single founder can now research markets, draft legal language, build early-stage code with assistance, generate marketing material, and simulate financial models without a full staff. That lowers the barrier to entry. That creates companies. Companies create jobs.
The most important thing you can do right now, right this minute, is sign up for a paid subscription with your favorite AI, or more than one. It may be the best $20 you’ll ever spend. Treat it like a gym membership for your mind. Have fun with it and, most of all, let your imagination run wild in the conversation. Challenge it. Push it. Argue with it. I’d be astonished if it didn’t dramatically improve your life, and it’s only going to get better.
I’ve always been a pathological optimist by nature, which has led me to far more success than failure. I see all of this with wonder and amazement. This isn’t the end of work. It’s the beginning of a new way of working.
This is such a great time to be alive, and I can’t wait for even more fun.


I don't give a lot of thought to the limitations of LLMs. I know that when I can't get what I need, I should try again in a few weeks. They improve overnight, without much fanfare, and they're continually getting smarter. I never gave much thought to a calculator either; I only cared whether it could do the calculation and come up with the correct answer. LLMs don't always nail the right answer, but they sure get me close. This morning one analyzed my E@RTC blog, did a masterful job of pattern recognition in my writing, and laid it all out. It was wonderful, and it would otherwise have taken someone months to do what it did in five minutes. It pointed me in a new direction, and that was delightful.
AI does not "understand" the most powerful intellectual tools in the physical sciences: models of nature.
If you inherited a hundred ounces of pure elemental gold, you could confirm it by sending samples to a dozen analytical labs. Each sample, submitted as an "unknown," would be a valid experiment, and each result would have eleven others corroborating it.
By contrast, two psychiatrists separately examining the same patient have only two chances in three of reaching the same diagnosis. Add a third psychiatrist and the likelihood *drops* to one chance in three (Coping With Psychiatric and Psychological Testimony, Ziskin).
Nature yields up its complex reality slowly and unevenly. AI is a text search connected to a "rules engine," and its methods contrast starkly with physical science research.
AI will instruct us by overreach. The biggest hedge against AI abuse? The US tort system, which awards more damages annually, as a percentage of GDP, than that of any other nation.