OpenAI showing a ‘very dangerous mentality’ regarding safety, expert warns

An AI expert has accused OpenAI of rewriting its history and being overly dismissive of safety concerns.

Former OpenAI policy researcher Miles Brundage criticized the company’s recent safety and alignment document published this week. The document describes OpenAI as striving for artificial general intelligence (AGI) in many small steps, rather than making “one giant leap,” saying that the process of iterative deployment will allow it to catch safety issues and examine the potential for misuse of AI at each stage.


Among the many criticisms of AI technology like ChatGPT, experts are concerned that chatbots will give inaccurate information on health and safety (like the infamous incident in which Google's AI search feature instructed people to eat rocks) and that they could be used for political manipulation, misinformation, and scams. OpenAI in particular has attracted criticism for a lack of transparency in how it develops its AI models, whose training data can contain sensitive personal information.

The release of the document this week appears to be a response to these concerns. It suggests that the development of the previous GPT-2 model was "discontinuous" and that the model was not initially released due to "concerns about malicious applications," but says the company will now move toward a principle of iterative development instead. Brundage, however, contends that the document alters the narrative and is not an accurate depiction of the history of AI development at OpenAI.

“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent + foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage wrote on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”

Brundage also criticized the company's apparent approach to risk based on the document, writing: "It feels as if there is a burden of proof being set up in this section where concerns are alarmist + you need overwhelming evidence of imminent dangers to act on them – otherwise, just keep shipping. That is a very dangerous mentality for advanced AI systems."

This comes at a time when OpenAI is under increasing scrutiny with accusations that it prioritizes “shiny products” over safety.
