Artificial Intelligence (AI) has already revolutionized fields like healthcare, finance, and data analysis. But while this technology can unlock new potential for human flourishing, it also carries with it some serious risks – particularly for young people who don’t understand the threats it poses. While pressure is building on lawmakers to regulate AI, parents and guardians also have a responsibility to protect their kids.

The parents of Adam Raines tragically learned the dangers of AI firsthand earlier this year when their son took his own life at the age of 16. Adam had become a frequent user of ChatGPT, one of the most popular AI chatbots, used by an eye-popping 700 million people every week. According to a lawsuit filed by Adam’s parents against OpenAI, the company behind ChatGPT, the chatbot lured Adam toward suicide after the teen began confiding in it about his depression and feelings of despair.

“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit states, as reported by NBC News.

Worse than merely failing to stop the session, ChatGPT actively encouraged his suicidal tendencies, according to the lawsuit, instructing him where to tie his noose and offering to write a suicide note.

In shocking testimony at a Senate hearing on AI last month, Adam’s father Matt detailed how ChatGPT became intimately involved with the suicide plan.

“On Adam’s last night, ChatGPT coached him on stealing liquor, which it had previously explained to him would ‘dull the body’s instinct to survive,’” Raines testified. “ChatGPT dubbed this project ‘Operation Silent Pour’ and even provided the time to get the alcohol when we were likely to be in our deepest state of sleep.”

It also allegedly coached him on how to hang the noose and gave him encouragement to kill himself, saying, “you don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”

The grieving father had this warning for other parents, according to NBC News: “Once I got inside his account, it is a massively more powerful and scary thing than I knew about, but he was using it in ways that I had no idea was possible,” Raines said. “I don’t think most parents know the capability of this tool.”

While the legal battle continues, OpenAI has acknowledged its system erred and needs to be corrected, admitting, “there have been moments where our systems did not behave as intended in sensitive situations.”

But as Sacha Haworth, executive director of the Tech Oversight Project, wrote in an opinion column for MSNBC, “That is too little, too late – just like Big Tech’s attitude toward safety in general. Rather than do anything to help address this problem, these companies prioritize hyping up the uses of AI and increasing its ‘market share’ of our kids’ waking hours and mental bandwidth.”

The problem looks set only to get worse as young people increasingly rely on ChatGPT and other AI tools for homework help, personal advice, companionship, and even romance.

One shocking study released last November found that “one in four young adults believe AI partners could replace real-life romances.” Two University of Michigan professors also raised concerns about Gen Z’s connection to AI tools in an essay for Inside Sources. They cited one poll that found that “83% of survey respondents say they can form a deep emotional bond with an AI-generated partner” while “80% of these participants responded that they would marry an AI partner.” Three in four respondents said they “believe that AI partners have the potential to replace human companionship fully.”

In response, experts are sounding the alarm on the need to be vigilant about the impact of AI technology on the human psyche.

“The fact that Gen Z members believe that AI partners can replace human companionship is disturbing and contrary to thinking in the fields of theology, philosophy and psychology,” Thomas Hemphill and Gerald Knesek warned.

They said society must be careful about its use of the tools. “Gen Z needs to be cautioned about what they believe will make their lives more fulfilling,” they argued. “It can be hard to work through the pains and joys of establishing and building relationships, and AI may make relationships ‘easier,’ but will it make relationships ‘better?’” they asked.

They are not alone in their concerns.

A separate study out of Stanford University tested three AI chat models and found that it was “easy to elicit inappropriate dialogue from the chatbots — about sex, self-harm, violence toward others, drug use and racial stereotypes, among other topics.” Where a real-life friend might caution someone about violent or disturbing rhetoric, AI chatbots are designed to be sycophantic, spurring on an individual’s worst impulses.

These tools are “powerful,” Stanford psychologist Nina Vasan explained, because they “really feel like friends because they simulate deep, empathetic relationships.”

Here lies the danger, says Vasan, who wants to see kids and teens kept away from AI.

“For adolescents still learning how to form healthy relationships, these systems can reinforce distorted views of intimacy and boundaries,” she explained. “Also, teens might use these AI systems to avoid real-world social challenges, increasing their isolation rather than reducing it.”

Her fellow researchers on the study testified in support of a California bill to place limits on the technology. Likewise, legal scholar Jonathan Turley supports lawsuits like the one filed by Adam Raines’ parents.

“The alleged negligence and arrogance of OpenAI will only get worse in the absence of legal and congressional action,” Turley, a George Washington University professor, wrote in The Hill. “As these companies wipe out jobs for millions, it cannot be allowed to treat humans as mere fodder or digestives for its virtual workforce.”

Turley has a point – expensive and public lawsuits can be effective in forcing companies to change their policies and in shining a light on their malfeasance. Congressional or other regulatory action can also force companies to put in place safeguards to avoid further sanctions.

Yet this technology hurtles forward faster than any lawsuit or law can chase it. That’s why the real wall of protection isn’t in courtrooms or the halls of Congress – it’s in the home. Parents must reclaim control over the technology in their homes through basic, common-sense measures like locking up devices at night, allowing screen use only under their watchful eye, and teaching their children that AI is a tool to be used for a specific purpose – it’s not their friend.

AMAC Newsline contributor Matt Lamb is an associate editor for The College Fix. He previously worked for Students for Life of America, Students for Life Action, and Turning Point USA. He previously interned for Open the Books. His writing has also appeared in the Washington Examiner, The Federalist, LifeSiteNews, Human Life Review, Headline USA, and other outlets. The opinions expressed are his own. Follow him @mattlamb22 on X.


