    Why I’m Suing OpenAI, the Creator of ChatGPT

By Emma Reynolds | July 22, 2025

    “I believe that most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones,” wrote New York Times technology columnist Kevin Roose in March, “and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.”

    He’s right. That’s why I recently filed a federal lawsuit against OpenAI seeking a temporary restraining order to prevent the company from deploying its products, such as ChatGPT, in the state of Hawaii, where I live, until it can demonstrate the legitimate safety measures that the company has itself called for from its “large language model.”

    We are at a pivotal moment. Leaders in AI development—including OpenAI’s own CEO Sam Altman—have acknowledged the existential risks posed by increasingly capable AI systems. In June 2015, Altman stated: “I think AI will probably, most likely, sort of lead to the end of the world, but in the meantime, there’ll be great companies created with serious machine learning.” Yes, he was probably joking—but it’s not a joke.


    Eight years later, in May 2023, more than 1,000 technology leaders, including Altman himself, signed an open letter comparing AI risks to other existential threats like climate change and pandemics. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the letter, released by the Center for AI Safety, a California nonprofit, says in its entirety.

    I’m at the end of my rope. For the past two years, I’ve tried to work with state legislators to develop regulatory frameworks for artificial intelligence in Hawaii. These efforts sought to create an Office of AI Safety and implement the precautionary principle in AI regulation, which means taking action before the actual harm materializes, because it may be too late if we wait. Unfortunately, despite collaboration with key senators and committee chairs, my state legislative efforts died early after being introduced. And in the meantime, the Trump administration has rolled back almost every aspect of federal AI regulation and has essentially put on ice the international treaty effort that began with the Bletchley Declaration in 2023. At no level of government are there any safeguards for the use of AI systems in Hawaii.

    Despite its previous statements, OpenAI has abandoned its key safety commitments, including walking back its “superalignment” initiative, which promised to dedicate 20 percent of computational resources to safety research, and, late last year, reversing its prohibition on military applications. Its critical safety researchers have left, including co-founder Ilya Sutskever and Jan Leike, who publicly stated in May 2024, “Over the past years, safety culture and processes have taken a backseat to shiny products.” The company’s governance structure was fundamentally altered during a November 2023 leadership crisis, as the reconstituted board removed important safety-focused oversight mechanisms. Most recently, in April, OpenAI eliminated guardrails against misinformation and disinformation, opening the door to releasing “high risk” and “critical risk” AI models, “possibly helping to swing elections or create highly effective propaganda campaigns,” according to Fortune magazine.

    In its first response, OpenAI has argued that the case should be dismissed because regulating AI is fundamentally a “political question” that should be addressed by Congress and the president. I, for one, am not comfortable leaving such important decisions to this president or this Congress—especially when they have done nothing to regulate AI to date.

    Hawaii faces distinct risks from unregulated AI deployment. Recent analyses indicate that a substantial portion of Hawaii’s professional services jobs could face significant disruption within five to seven years as a consequence of AI. Our isolated geography and limited economic diversification make workforce adaptation particularly challenging.

    Our unique cultural knowledge, practices, and language risk misappropriation and misrepresentation by AI systems trained without appropriate permission or context.

    My federal lawsuit applies well-established legal principles to this novel technology and makes four key claims:

    Product liability claims: OpenAI’s AI systems represent defectively designed products that fail to perform as safely as ordinary consumers would expect, particularly given the company’s deliberate removal of safety measures it previously deemed essential.

    Failure to warn: OpenAI has failed to provide adequate warnings about the known risks of its AI systems, including their potential for generating harmful misinformation and exhibiting deceptive behaviors.

    Negligent design: OpenAI has breached its duty of care by prioritizing commercial interests over safety considerations, as evidenced by internal documents and public statements from former safety researchers.

    Public nuisance: OpenAI’s deployment of increasingly capable AI systems without adequate safety measures creates an unreasonable interference with public rights in Hawaii.

    Federal courts have recognized the viability of such claims in addressing technological harms with broad societal impacts. Recent precedents from the Ninth Circuit Court of Appeals, which includes Hawaii, establish that technology companies can be held liable for design defects that create foreseeable risks of harm.

    I’m not asking for a permanent ban on OpenAI or its products here in Hawaii but, rather, a pause until OpenAI implements the safety measures the company itself has said are needed, including reinstating its previous commitment to allocate 20 percent of resources to alignment and safety research; implementing the safety framework outlined in its own publication “Planning for AGI and Beyond,” which attempts to create guardrails for dealing with AI as or more intelligent than its human creators; restoring meaningful oversight through governance reforms; creating specific safeguards against misuse for manipulation of democratic processes; and developing protocols to protect Hawaii’s unique cultural and natural resources.

    These items simply require the company to adhere to safety standards it has publicly endorsed but has failed to consistently implement.

    While my lawsuit focuses on Hawaii, the implications extend far beyond our shores. The federal court system provides an appropriate venue for addressing these interstate commerce issues while protecting local interests.

    The development of increasingly capable AI systems is likely to be one of the most significant technological transformations in human history, many experts believe—perhaps in a league with fire, according to Google CEO Sundar Pichai. “AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire,” Pichai said in 2018.

    He’s right, of course. The decisions we make today will profoundly shape the world our children and grandchildren inherit. I believe we have a moral and legal obligation to proceed with appropriate caution and to ensure that potentially transformative technologies are developed and deployed with adequate safety measures.

    What is happening now with OpenAI’s breakneck AI development and deployment to the public is, to echo technologist Tristan Harris’s succinct April 2025 summary, “insane.” My lawsuit aims to restore just a little bit of sanity.

    This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

