An AI expert has accused OpenAI of rewriting its history and being overly dismissive of safety concerns.
Former OpenAI policy researcher Miles Brundage criticized the company’s recent safety and alignment document published this week. The document describes OpenAI as striving for artificial general intelligence (AGI) in many small steps, rather than making “one giant leap,” saying that the process of iterative deployment will allow it to catch safety issues and examine the potential for misuse of AI at each stage.
Among the many criticisms of AI technology like ChatGPT, experts are concerned that chatbots will give inaccurate information regarding health and safety (like the infamous issue with Google’s AI search feature, which instructed people to eat rocks) and that they could be used for political manipulation, misinformation, and scams. OpenAI in particular has attracted criticism for a lack of transparency in how it develops its AI models, which can contain sensitive personal data.
The release of the document this week appears to be a response to these concerns. It implies that development of the earlier GPT-2 model was “discontinuous” and that the model was initially withheld due to “concerns about malicious applications,” but that the company will now move toward a principle of iterative deployment instead. Brundage, however, contends that the document alters the narrative and is not an accurate depiction of the history of AI development at OpenAI.
“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent + foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage wrote on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”
Brundage also criticized the company’s apparent approach to risk based on this document, writing: “It feels as if there is a burden of proof being set up in this section where concerns are alarmist + you need overwhelming evidence of imminent dangers to act on them – otherwise, just keep shipping. That is a very dangerous mentality for advanced AI systems.”
The criticism comes at a time when OpenAI faces increasing scrutiny, including accusations that it prioritizes “shiny products” over safety.