ChatGPT Chief Sam Altman Says Artificial Intelligence Should Be Regulated

The head of the artificial intelligence company that created ChatGPT told Congress on Tuesday that government intervention would be crucial to mitigating the risks of increasingly powerful AI systems.

“As this technology progresses, we understand that people are concerned about how it could change the way we live. We are too,” OpenAI CEO Sam Altman said at the Senate hearing.

Altman proposed the creation of a US or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards”.

His San Francisco-based startup gained attention after releasing ChatGPT late last year. ChatGPT is a free chatbot tool that answers questions with human-like responses.

What began as a panic among educators over students using ChatGPT to cheat on homework assignments has expanded into broader concern about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

And while there is no immediate indication that Congress will enact sweeping new AI regulations, as European lawmakers are doing, societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led US agencies to promise to crack down on harmful AI products that violate existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator but was in fact a voice clone trained on Blumenthal’s floor speeches, reciting an opening statement written by ChatGPT after he asked the chatbot to compose his remarks.

The result was impressive, Blumenthal said, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine’s surrender or of (Russian President) Vladimir Putin’s leadership?”

Blumenthal said AI companies should be required to test their systems and disclose known risks before releasing them, and expressed particular concern about how AI systems could destabilize the job market.

Pressed on his own worst fears about AI, Altman mostly avoided specifics, except to say that the industry could cause “significant harm to the world” and that “if this technology goes wrong, it’s going to go horribly wrong.”

But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild,” hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk, OpenAI has grown from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image generator DALL-E.

Microsoft has invested billions of dollars in the startup and has integrated its technology into its own products, including its search engine Bing.

Altman plans to travel around the world this month to national capitals and major cities on six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of US lawmakers, many of whom told CNBC they were moved by his comments.

Also testifying were Christina Montgomery, IBM’s chief privacy and trust officer, and Gary Marcus, a professor emeritus at New York University, who was among a group of AI experts who called on OpenAI and other tech firms to pause the development of their most powerful AI models for six months to give society more time to consider the risks.

The letter was a response to the March release of OpenAI’s latest model, GPT-4, which is said to be more powerful than ChatGPT.

Sen. Josh Hawley of Missouri, the panel’s ranking Republican, said the technology has huge implications for elections, jobs and national security. He said Tuesday’s hearing was “an important first step toward understanding what Congress should be doing.”

Several tech industry leaders have said they welcome some form of AI oversight, but have cautioned against what they see as overly onerous regulations.

Both Altman and Marcus called for an AI-focused regulator, preferably an international one, with Altman citing the precedent of the United Nations’ nuclear agency and Marcus comparing it to the US Food and Drug Administration. But IBM’s Montgomery called on Congress to take a “precise regulation” approach instead.

“We think that AI should be regulated at the point of risk, essentially,” Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.

(This story has not been edited by News18 staff and is published from a syndicated news agency feed)