OpenAI showing a ‘very dangerous mentality’ regarding safety, expert warns

An AI expert has accused OpenAI of rewriting its history and being overly dismissive of safety concerns.

Former OpenAI policy researcher Miles Brundage criticized the company’s recent safety and alignment document published this week. The document describes OpenAI as striving for artificial general intelligence (AGI) in many small steps, rather than making “one giant leap,” saying that the process of iterative deployment will allow it to catch safety issues and examine the potential for misuse of AI at each stage.

Among the many criticisms of AI technology like ChatGPT are concerns that chatbots will give inaccurate information regarding health and safety (like the infamous issue with Google’s AI search feature, which instructed people to eat rocks) and that they could be used for political manipulation, misinformation, and scams. OpenAI in particular has attracted criticism for a lack of transparency in how it develops its AI models, which can contain sensitive personal data.

The release of the OpenAI document this week seems to be a response to these concerns. The document implies that the development of the earlier GPT-2 model was “discontinuous” and that the model was initially withheld due to “concerns about malicious applications,” but that the company will now move toward a principle of iterative development instead. Brundage, however, contends that the document alters the narrative and is not an accurate depiction of the history of AI development at OpenAI.

“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent + foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage wrote on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”

Brundage also criticized the company’s apparent approach to risk based on this document, writing: “It feels as if there is a burden of proof being set up in this section where concerns are alarmist + you need overwhelming evidence of imminent dangers to act on them – otherwise, just keep shipping. That is a very dangerous mentality for advanced AI systems.”

This comes at a time when OpenAI is under increasing scrutiny with accusations that it prioritizes “shiny products” over safety.
