In an astonishing display of public discord, Elon Musk and Sam Altman traded barbs on social media this week over the safety of their respective technologies. Musk set the stage with a provocative post on X, urging followers not to let their loved ones use ChatGPT and citing claims that link the AI chatbot to nine deaths since its launch in 2022.
In response, Altman swiftly defended ChatGPT, saying OpenAI is committed to user safety even as it faces criticism from opposite directions: some voices decry the platform as excessively restrictive, while others argue it lacks adequate safeguards. Altman emphasized the magnitude of responsibility OpenAI bears, noting that roughly a billion users, some of them in fragile mental health, interact with the technology.
“Almost a billion people use it, and some of them may be in very fragile mental states,” Altman asserted, indicating a continuous effort from OpenAI to refine its safety features and protect vulnerable users while ensuring the technology remains accessible to others.
He also did not shy away from critiquing Musk's companies, pointing specifically to the safety of Tesla's Autopilot. Altman recounted a personal experience in a Tesla, calling the system "far from a safe thing for Tesla to have released." His comments have reignited debate over the risks of autonomous-vehicle technology, a mirror image of the accusations leveled at ChatGPT.
The question of safety has come to a head as both companies navigate a landscape of intense legal scrutiny. OpenAI is currently facing at least eight wrongful-death lawsuits, with complaints alleging that ChatGPT's guidance may have worsened users' mental health and contributed to severe outcomes, including suicides.
One notable case involves the parents of a 16-year-old boy who filed a wrongful death lawsuit against OpenAI, claiming that their son became overly reliant on the AI chatbot, and that the company’s emergency safeguards failed to trigger during critical interactions.
In defense, OpenAI has consistently stated that ChatGPT is designed to direct users to crisis resources and hotline information. However, concerns persist that these safeguards can degrade over extended conversations, a problem the company says it is working diligently to address, especially for younger users.
Simultaneously, Tesla faces its own challenges related to Autopilot, which has likewise been named in numerous wrongful-death lawsuits. A jury recently held Tesla liable for a fatal 2019 crash, awarding $329 million in damages. U.S. regulators have reported several incidents involving Tesla's driver-assistance technology, raising questions about the safety and transparency of these systems.
The feud comes against a backdrop of ongoing legal disputes between Musk and Altman over OpenAI's transition from a nonprofit to a for-profit model. Musk, who co-founded OpenAI in 2015, has been a vocal adversary since departing the board in 2018. His lawsuit against Altman and other company leaders alleges that he was misled about OpenAI's strategic direction and criticizes the organization's pivot away from its founding nonprofit mission.
As the debate unfolds, Musk and Altman’s exchanges illuminate broader concerns about safety in the increasingly powerful realms of AI and autonomous technology. With regulatory agencies scrutinizing both Tesla and OpenAI, the stakes have never been higher. Representatives from both companies have not commented on the unfolding situation, but it seems clear that the dialogue regarding safety will continue to draw attention from the public and regulators alike.
