Artificial intelligence (AI) firm OpenAI has disbanded a team devoted to mitigating the long-term dangers of so-called artificial general intelligence (AGI).
The San Francisco-based firm confirmed the end of its superalignment group on May 17. Members of the group, whose dissolution began weeks earlier, were integrated into other projects and research efforts.
“The dismantling of an OpenAI team focused on keeping sophisticated artificial intelligence under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers,” Yahoo News reported.
Following the disbandment of the superalignment team, OpenAI co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the technology firm. In a post on X, Sutskever said he was leaving the company after almost a decade. He praised OpenAI in the same post, describing it as a firm whose “trajectory has been nothing short of miraculous.”
“I’m confident that OpenAI will build AGI that is both safe and beneficial,” Sutskever added, referring to technology intended to match or surpass human cognition. Sutskever was a member of the board that voted to remove co-founder and CEO Sam Altman last November, though Altman was reinstated a few days later after staff and investors rebelled.
Leike’s post on X about his departure also touched on AGI. He urged all OpenAI employees to “act with the gravitas” warranted by what they are building. Leike reiterated: “OpenAI must become a safety-first AGI company.” (Related: What are the risks posed by artificial general intelligence?)
Altman responded to Leike’s post by thanking him for his work at the company and expressing sadness over his departure. “He’s right – we have a lot more to do. We are committed to doing it,” the OpenAI CEO continued.
Disbandment and departures come amid release of advanced ChatGPT version
The disbandment of the superalignment team, alongside the departures of Sutskever and Leike, came as OpenAI released an advanced version of its signature ChatGPT chatbot. This advanced version, which boasts higher performance and even more human-like interactions, was made free to all users.
“It feels like AI from the movies,” Altman said in a blog post. The OpenAI CEO has previously pointed to the 2013 film “Her” as an inspiration for where he would like AI interactions to go. “Her” centers on the character Theodore Twombly (played by Joaquin Phoenix) developing a relationship with an AI named Samantha (voiced by Scarlett Johansson).
Sutskever, meanwhile, said during a talk at the TED AI summit in San Francisco late last year that “AGI will have a dramatic impact on every area of life.” He added that the day will come when “digital brains will become as good and even better” than those of humans.
According to WIRED magazine, research on the risks associated with more powerful AI models will now be led by John Schulman after the superalignment team’s dissolution. Schulman co-leads the team responsible for fine-tuning AI models after training.
OpenAI declined to comment on the departures of Sutskever, Leike or other members of the now-disbanded superalignment team. It also refused to comment on the future of its work on long-term AI risks.
“There is no indication that the recent departures have anything to do with OpenAI’s efforts to develop more human-like AI or to ship products,” the magazine noted. “But the latest advances do raise ethical questions around privacy, emotional manipulation and cybersecurity risks.”
Head over to Robots.news for more stories about AI and its dangers.
Watch Glenn Beck explain the issue involving Scarlett Johansson’s accusation that OpenAI stole her voice for use in the latest version of ChatGPT.
This video is from the High Hopes channel on Brighteon.com.
More related stories:
AI pioneer warns humanity’s remaining timeline is only a few more years thanks to the risk that emerging AI tech could destroy the human race.
AI now considered an “existential risk” to the planet and humankind as atomic scientists reset Doomsday Clock at 90 seconds to midnight.
Is AI going to kill everyone? Top experts say yes, warning about “risk of extinction” similar to nuclear weapons, pandemics.
DeepLearning.AI founder warns against the dangers of AI during annual meeting of globalist WEF.
OpenAI researchers warn board that rapidly advancing AI technology threatens humanity.
Sources include:
Yahoo.com
WIRED.com
Brighteon.com