ChatGPT maker OpenAI faces probe over consumer protection laws



ChatGPT maker OpenAI is facing an investigation by the Federal Trade Commission (FTC) over possible violations of consumer protection laws.

The action marks the toughest scrutiny of Microsoft-backed OpenAI since it burst onto the scene in November 2022 with its AI-powered ChatGPT chatbot.

The investigation is set to focus on whether OpenAI’s chatbot has harmed consumers through its data collection and its occasional output of false information about individuals. Tools like ChatGPT are known to sometimes generate erroneous information, a phenomenon known as “hallucinating.” This can have serious consequences, especially if the false output is detrimental to a person’s reputation.

The agency will also look into how OpenAI trains its AI tools and handles personal data.

The FTC kicked off its work this week by sending OpenAI a 20-page investigative demand, which was shared online by The Washington Post on Thursday.

It calls on OpenAI to hand over information regarding any complaints it has received from the public and details of any lawsuits. It also seeks information on the data leak OpenAI disclosed earlier this year, which involved the temporary exposure of users’ chat history and payment data.

The new wave of generative-AI tools has garnered much attention in recent months thanks to their impressive abilities across a range of tasks. ChatGPT and rivals such as Google’s Bard are text-based tools that can converse in a natural, human-like way.

Proponents say the technology will be able to perform many tasks more quickly and efficiently than humans. In health care, for example, it could assist by “automating tedious and error-prone operational work, bringing years of clinical data to a clinician’s fingertips in seconds, and by modernizing health systems infrastructure,” according to consulting firm McKinsey.

But AI in the workplace could also lead to the loss of countless jobs and major societal upheaval. The technology is already being used by bad actors to generate and spread misinformation and to create more persuasive scams. At the extreme end, some experts fear that without responsible development and effective regulation, more advanced versions of the technology could even turn hostile and challenge the human race.

Developments in the sector have been so rapid in recent months that governments around the world are playing catch-up on regulation. Getting the balance right requires careful consideration: too much regulation could stifle the development of a tool that may bring great benefits to society, while too little could increase the risk of the technology causing harm.

For now, the FTC is focusing on OpenAI’s operations to determine whether the company has violated consumer protection laws.

For all the latest news on ChatGPT, Digital Trends has you covered.

