It is a good idea to regulate artificial intelligence (AI) programs like ChatGPT, the chief executive officer of the popular chatbot’s developer told lawmakers Tuesday.
Companies creating powerful AI should be required to follow safety mandates, including internal and independent testing before their products are released, said Sam Altman, CEO of OpenAI, before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Lawmakers held the hearing to explore what rules and guidance might be needed to help regulate chatbots like the one OpenAI created.
ChatGPT can answer questions on thousands of topics and help write prose, computer code, school papers, and even legal documents. It has raised concerns worldwide for its potential to be abused and is now more accessible after launching on Apple iOS on Thursday.
Chatbots like ChatGPT have been under scrutiny by authorities in the United States, European Union, and Canada. ChatGPT is banned in China, Iran, North Korea, and Russia; the chatbot returned to Italy last month after resolving privacy concerns raised by the country’s data protection authority.
Altman said he would prefer stakeholders have input into any initial and ongoing rulemaking.
“It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard-setting,” he said in his testimony.
Altman said any governance regime should be flexible enough to adapt to new technical developments, given how rapidly the AI field is advancing.
Sen. Richard Blumenthal (D-Conn.) demonstrated ChatGPT's potential by opening the hearing with a statement drafted by the chatbot and read aloud by an AI clone of his voice. Blumenthal then expressed concern that the technology could be used to create a deepfake of his voice praising Russian President Vladimir Putin.
Altman said his "worst fear" about AI is that it could "cause significant harm to the world."
“I think if this technology goes wrong, it can go quite wrong,” he said. “And we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.”
Elsewhere Tuesday, Gary Gensler, chair of the Securities and Exchange Commission, expressed concern that generative AI systems like ChatGPT could cause the next financial crisis if their risks are not properly managed.
“You don’t have to understand the math, but [you have] to understand, really, how the risk management is managed,” said Gensler at a conference hosted by the Financial Industry Regulatory Authority, according to the Wall Street Journal.