ChatGPT maker under investigation by US regulators over AI risks

The risks posed by artificially intelligent chatbots are being officially investigated by US regulators for the first time after the Federal Trade Commission launched a wide-ranging investigation into ChatGPT maker OpenAI.

In a letter sent to the Microsoft-backed company, the FTC said it would examine whether people have been harmed by AI chatbots generating false information about them, as well as whether OpenAI has engaged in unfair or deceptive privacy and data security practices.

Generative AI products are in the crosshairs of regulators around the world as AI experts and ethicists sound the alarm over the massive amount of personal data consumed by the technology, as well as its potentially harmful output, ranging from misinformation to sexist and racist comments.

In May, the FTC fired a warning shot at the industry, saying it was focusing intensely on how companies might choose to use AI technology, including new generative AI tools, in ways that can have a real and substantial impact on consumers.

In its letter, the US regulator asked OpenAI to share internal material ranging from how the group retains user information to the steps the company has taken to address the risk of its model producing statements that are false, misleading or disparaging.

The FTC declined to comment on the letter, which was first reported by the Washington Post. Writing on Twitter later on Thursday, OpenAI chief executive Sam Altman said it was “very disappointing to see that the FTC’s request starts with a leak and doesn’t help build trust”. He added: “It is extremely important to us that our technology is safe and pro-consumer, and we are confident we are complying with the law. Obviously we will work with the FTC.”

Lina Khan, chair of the FTC, testified before the House Judiciary Committee on Thursday morning, where she faced strong criticism from Republican lawmakers over her tough enforcement stance.

Asked about the investigation at the hearing, Khan declined to comment, but said regulators’ broader concern was that ChatGPT and other AI services were ingesting massive amounts of data with no controls on the type of data being fed into these companies.

She added: “We’ve heard of reports where people’s sensitive information comes up in response to a request from someone else. We’ve heard of libel, defamatory statements, blatantly untrue things coming out. This is the kind of fraud and deception we are worried about.”

Khan has also been peppered with questions from lawmakers about her mixed record in court, after the FTC suffered a big defeat this week in its attempt to block Microsoft’s $75 billion acquisition of Activision Blizzard. The FTC appealed the decision on Thursday.

Meanwhile, Republican Jim Jordan, chairman of the committee, accused Khan of harassing Twitter after the company alleged in a court filing that the FTC had engaged in erratic and improper behavior in enforcing a consent order imposed last year.

Khan did not comment on Twitter’s filing, but said the FTC’s only concern was that the company followed the law.

Experts have been concerned about the sheer volume of data consumed by the language models behind ChatGPT. The service reached more than 100 million monthly active users within two months of its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was used by more than 1 million people in 169 countries within two weeks of its release in January.

Users have reported that ChatGPT has fabricated names, dates and facts, as well as fake links to news websites and references to academic papers, a problem known in the industry as “hallucinations”.

The FTC’s investigation delves into the technical details of how ChatGPT was designed, including the company’s work to fix hallucinations and the oversight of its human reviewers, both of which directly affect consumers. It also asked about consumer complaints and the company’s efforts to assess consumers’ understanding of the chatbot’s accuracy and reliability.

In March, Italy’s privacy regulator temporarily banned ChatGPT while it examined, among other matters, the US company’s collection of personal information following a cyber security breach. The service was restored a few weeks later after OpenAI made its privacy policy more accessible and introduced a tool to verify users’ ages.

Echoing previous admissions about ChatGPT’s fallibility, Altman tweeted: “We are transparent about the limitations of our technology, especially when we fail. And our capped-profit structure means we don’t have an incentive for unlimited returns.” He also said the chatbot had been built on years of safety research, adding: “We protect user privacy and design our systems to learn about the world, not individuals.”


Source: www.ft.com
