ChatGPT account login credentials of users were compromised, OpenAI confirms

OpenAI has officially acknowledged that the login credentials of some ChatGPT users were compromised, leading to unauthorized access to and misuse of their accounts. The confirmation comes in response to a recent Ars Technica report, based on screenshots sent in by a reader, which suggested that ChatGPT itself was leaking private conversations, including sensitive details such as usernames and passwords, a characterization the company disputes.

OpenAI said its fraud and security teams were actively investigating the matter and disputed the initial Ars Technica report as inaccurate. According to OpenAI, a malicious actor used compromised login credentials to gain access to and misuse the affected account. The leaked chat history and files were the result of this unauthorized access, not a case of ChatGPT displaying another user's history.

The affected user, whose account was reportedly compromised, said they did not believe their account had been accessed by anyone else. OpenAI emphasized that the ongoing investigation would shed more light on the extent of the breach and the steps needed to address the security issue.

Ars Technica had originally reported that ChatGPT was displaying private conversations, raising concerns about the exposure of sensitive information. The affected user discovered conversations in their chat history that did not belong to them after using ChatGPT for an unrelated query.

The leaked conversations included details from a support system used by employees of a pharmacy prescription drug portal, exposing troubleshooting information, the app's name, store numbers, and additional login credentials. Another leaked conversation revealed details of a presentation and an unpublished research proposal.

This incident adds to a series of past security concerns involving ChatGPT. In March 2023, a bug briefly exposed the chat titles of other users, and in November 2023, researchers showed that carefully crafted queries could extract private data from the model's training set.

Users are therefore advised to exercise caution when using AI chatbots like ChatGPT, especially bots created by third parties. The absence of standard security features on the ChatGPT site, such as two-factor authentication (2FA) or the ability to review recent logins, has also been highlighted, raising concerns about the platform's security measures.
