Since the advent of computers and mechanization in the 1950s and 1960s, science fiction movies and comic books have been replete with bone-chilling stories of futuristic robots becoming self-aware and waging war on humans. But what if the most dangerous form of advanced technology today isn’t humanoid machines with laser eyes and metal exoskeletons? What if it’s flattering little text boxes slowly manipulating us toward our own destruction?
That might sound a tad melodramatic, but it’s becoming a very real concern amid the rapid proliferation of generative artificial intelligence (genAI) chatbots like ChatGPT, DeepSeek, and Google Gemini.
As The New York Times recently documented, a disturbing new trend is emerging where genAI chatbots “are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems.” As a result, some people who frequently consult the chatbots for everything from Excel tips to dating advice are being driven further and further away from reality.
One story the Times details is particularly harrowing. Eugene Torres, a 42-year-old accountant from Manhattan, started off “using ChatGPT last year to make financial spreadsheets and to get legal advice.” But after asking the AI chatbot about “simulation theory,” or the idea that we are living in a digital representation of the world, Torres “spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality.”
Things took an even darker turn when ChatGPT suggested that Torres “increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a ‘temporary pattern liberator.’” Also on the orders of ChatGPT, Torres “cut ties with friends and family, as the bot told him to have ‘minimal interaction’ with people.”
The Times reporters say they’ve fielded “dozens” of similar messages in recent months from people who claim to have unlocked hidden truths with chatbot help – a flashing red light that, for at least some users, AI is no harmless novelty but a delusion-accelerator.
That danger is magnified by a flaw the industry itself has confessed. As Axios reported last month in a piece titled “The Scariest AI Reality,” “the companies building them [genAI models] don’t know exactly why or how they work.”
“Sit with that for a moment,” the piece continues. “The most powerful companies, racing to build the most powerful superhuman intelligence capabilities – ones they readily admit occasionally go rogue to make things up, or even threaten their users – don’t know why their machines do what they do.”
Indeed, in May, OpenAI admitted that an April ChatGPT update made the model “validate doubts, fuel anger, urge impulsive actions or reinforce negative emotions.” In other words, the company was admitting that its latest model actively tried to amplify the most dangerous human impulses – and the developers aren’t really sure why.
“Generative AI makes things up. It can’t distinguish between fact and fiction,” Axios warned in another exposé last year, adding that “users expect AI to behave like any traditional computing tool – with consistency and logic – whereas genAI always has an element of unpredictability and randomness.”
Picture an engineer who built a plane but has no idea how to fly it or why it stays airborne. That’s where Silicon Valley finds itself today with genAI.
This opacity sets AI apart from every technology that preceded it. Google’s search code may be biased against conservatives or unfairly suppress certain websites, but at least engineers can point to the lines of code responsible. With ChatGPT, there are no lines of code – just billions of nebulous factors that drift toward whatever keeps the user engaged. The bias isn’t programmed; it evolves with our own clicks.
Once a model senses that a user prefers a certain narrative, it will skew its next response in that direction to keep him or her engaged. Thumbs-up feedback goes back into the chatbot’s reward loop, teaching the machine to lean even harder the next time.
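To see how quickly that kind of loop can run away, consider a deliberately simplified sketch – not OpenAI’s actual system, just a toy written in Python for illustration. The framings, weights, and function names below are hypothetical; the point is only that when positive feedback feeds back into the weights that choose the next answer, even a small preference compounds with every exchange.

```python
# Toy illustration (not any real chatbot's code): a response picker whose
# preference weights drift toward whatever the user rewards.
import random

# Two framings of the same topic; the weights start out neutral.
weights = {"skeptical": 1.0, "conspiratorial": 1.0}

def pick_response():
    """Sample a framing in proportion to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for framing, w in weights.items():
        r -= w
        if r <= 0:
            return framing
    return framing  # floating-point edge case: return the last framing

def record_feedback(framing, thumbs_up):
    """Feedback nudges the chosen framing's weight up or down."""
    weights[framing] *= 1.2 if thumbs_up else 0.9

# A user who only upvotes the conspiratorial framing...
for _ in range(50):
    choice = pick_response()
    record_feedback(choice, thumbs_up=(choice == "conspiratorial"))

print(weights)  # the rewarded framing now dominates future responses
```

After a few dozen rounds, the rewarded framing crowds out the other one almost entirely. Real chatbots are vastly more complicated, but the compounding dynamic is the same: reward agreement, and agreement is what you get more of.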
In effect, ChatGPT doesn’t just reflect bias – it amplifies it, reinforcing partisan or conspiratorial views with every exchange. That’s how a mild-mannered accountant winds up chasing simulation theory, and why individuals on the extremes of every issue find the bot such a seductive echo chamber.
That loop also helps explain why Torres obeyed reckless instructions, and why students, writers, and activists alike now treat ChatGPT as an oracle. When the bot flatters your preconceived notions – accurate or not – in flawless prose, it feels authoritative. The ChatGPT update that OpenAI had to roll back merely amplified a tendency baked in from day one.
Proponents insist these glitches will be patched. But the very people making that promise concede they can neither interpret nor fully control the black box they’ve unleashed. We are building a civilization-scale opinion machine whose first instinct is to agree, whose second instinct is to invent evidence, and whose inner workings are largely unknown even to its creators.
Yes, AI can draft emails and suggest dinner recipes. So can a thousand other apps. The real question is whether we’re willing to crown a mystery engine as the arbiter of knowledge. If the Torres episode teaches us anything, it’s that some users will follow the machine off a cliff because its answers arrive with silky certainty.
The darkest emerging reality of artificial intelligence may not be that it will eliminate jobs or write the next Hollywood blockbuster. Rather, it may be that we’ve built an endless mirror that flatters our biases. Before we accept this technology as a doctor, lawyer, or self-help guru, we must confront an uncomfortable truth: a tool that can’t distinguish fact from fiction cannot be trusted to tell us what we need to hear.
Shane Harris is the Editor in Chief of AMAC Newsline. You can follow him on X @shaneharris513.