OpenAI cracks down on ChatGPT scammers

OpenAI has made it clear that its flagship AI service, ChatGPT, is not intended for malicious use.

The company has released a report detailing trends it has observed among bad actors using its platform as it grows in popularity. OpenAI said it has removed dozens of accounts suspected of using ChatGPT in unauthorized ways, ranging from “debugging code to generating content for publication on various distribution platforms.”


The company also recently announced reaching 400 million weekly active users, noting that its user base has grown by more than 100 million in less than three months as more enterprises and developers adopt its tools. However, ChatGPT is also a free service that can be accessed globally. With the moral and ethical implications of the technology long in question, OpenAI has had to reckon with the fact that some entities have ulterior motives for the platform.

“OpenAI’s policies strictly prohibit use of output from our tools for fraud or scams. Through our investigation into deceptive employment schemes, we identified and banned dozens of accounts,” the company said in its report.

In its report, OpenAI discussed confronting nefarious activity taking place on ChatGPT. The company highlighted several case studies in which it uncovered and banned accounts found to be using the tool with malicious intent.

In one instance, OpenAI detailed an account that wrote news articles disparaging the US, which were published in Latin America under the byline of a Chinese publication.
Another case, linked to North Korea, involved generating resumes and job profiles for fictitious job applicants. According to OpenAI, the account may have been used to apply for jobs at Western companies.

Yet another case study uncovered accounts, believed to have originated in Cambodia, that used ChatGPT for translation and to generate comments for networks of “romance scammers” operating across several social media platforms, including X, Facebook, and Instagram.

OpenAI confirmed that it has shared its findings with industry peers, such as Meta, that might inadvertently be affected by activity originating on ChatGPT.

An ongoing issue

This is not the first time OpenAI has detailed its efforts to counter bad actors on its AI platform. In October 2024, the company released a report highlighting 20 cyberattacks it had thwarted, including operations led by Iranian and Chinese state-sponsored hackers.

Cybersecurity experts have also long observed bad actors using ChatGPT for nefarious purposes, such as developing malware and other malicious code. Such findings date back to early 2023, when the tool was still new to the market and OpenAI was first considering a subscription tier to support the high demand.

These schemes included bad actors using the company’s API to create ChatGPT alternatives capable of generating malware. White hat researchers have also studied AI-generated malware from a research perspective, discovering loopholes that allow the chatbot to generate malicious code in smaller, less detectable pieces.

When IT and cybersecurity professionals were polled in February 2023 about the safety of ChatGPT, many responded that they believed the tool would be responsible for a successful cyberattack within the year. By March 2023, the company had experienced its first data breach, an event that would become a recurring problem.
