A hacker claimed to have stolen personal details from millions of OpenAI accounts, but security researchers are doubtful, and the company is investigating.
OpenAI says it's investigating after a hacker claimed to have swiped login credentials for 20 million of the AI firm's user accounts and put them up for sale on a dark web forum.
The pseudonymous breacher posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by Gbhackers, the full dataset was being offered for sale "for just a couple of dollars."
"I have more than 20 million gain access to codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus agrees."
If legitimate, this would be the third major security incident for the AI company since the release of ChatGPT to the general public. Last year, a hacker gained access to the company's internal Slack messaging system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, an even simpler bug involving jailbreaking prompts allowed hackers to obtain the private information of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."
No evidence this alleged OpenAI breach is legitimate.
Contacted every email address from the purported sample of login credentials.
At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the spokesperson said, including: "We have actually not seen any evidence that this is linked to a compromise of OpenAI systems to date."
The scope of the alleged breach sparked concern due to OpenAI's massive user base. Millions of users worldwide rely on the company's tools like ChatGPT for business operations, educational purposes, and content generation. A legitimate breach could expose private conversations, commercial projects, and other sensitive information.
Until there's a final report, some preventive measures are recommended:
- Go to the "Configurations" tab, log out from all linked gadgets, and allow two-factor authentication or 2FA. This makes it essentially difficult for a hacker to gain access to the account, even if the login and passwords are jeopardized.
- If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This way, it is much easier to spot and prevent fraud.
- Always keep an eye on the conversations stored in the chatbot's memory, and be aware of any phishing attempts. OpenAI does not ask for any personal information, and any payment update is always handled through the official OpenAI.com link.
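For readers who want to check whether a password tied to their OpenAI account has already surfaced in known breaches, here is a minimal Python sketch using the public Have I Been Pwned range API. This is an illustration only, not an OpenAI tool or a step the company recommends; it relies on the API's k-anonymity design, so only the first five characters of the password's SHA-1 hash ever leave your machine.

```python
import hashlib
import urllib.request


def password_seen_in_breaches(password: str) -> int:
    """Return how many times a password appears in the public
    Have I Been Pwned corpus, via the k-anonymity range API."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    # Only the 5-character hash prefix is sent; the response lists
    # matching suffixes with their breach counts, one per line.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")

    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    hits = password_seen_in_breaches("example-password")  # hypothetical password
    print("Exposed in breaches" if hits else "Not found in known breaches", hits)
```

If the function returns a nonzero count, change that password everywhere it is reused and enable 2FA as described above.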