Like Photoshop ‘on steroids’: OpenAI CEO urges further regulation of ‘exploding’ AI use

OpenAI CEO Sam Altman and other leaders of the burgeoning AI industry voiced support for additional regulation while testifying at a historic congressional hearing Tuesday—in contrast to some powerful tech firms that have pushed back against regulatory intervention.
Sam Altman. Image: Getty
Key Takeaways
  • Altman framed AI as a force of both promise and potential danger in arguing for additional government regulation, pointing to advances in labour, health care and the economy that AI could support, while adding that regulatory intervention by governments would be “critical” to prevent and mitigate the technology’s negative impacts.
  • IBM Chief Privacy & Trust Officer Christina Montgomery and New York University Professor Emeritus Gary Marcus testified alongside Altman, with Marcus delivering some of the hearing’s starkest warnings on topics such as political manipulation, health misinformation and hyper-targeted advertising.
  • Marcus said he would like to see a Cabinet-level organisation created to keep pace with AI development, adding that oversight could take the form of safety reviews similar to those conducted by the Food and Drug Administration.
  • Montgomery said oversight of AI should apply different rules to different risks, with the strongest regulation reserved for the use cases that pose the greatest risk to society.
What To Watch For

Altman called for the creation of a new federal agency specifically tasked with issuing licenses for AI technology—licenses he said should be revoked if companies fail to comply with safety standards.

Crucial Quote

Early in the hearing, Sen. Josh Hawley (R-Mo.) asked whether the development of AI would be more akin to the advent of the printing press or the “atom bomb.” Altman replied, “We think it can be a printing press moment.”

Surprising Fact

Sen. Dick Durbin (D-Ill.) called the requests from industry leaders for regulation “historic,” adding, “I can’t recall when we’ve had people representing large corporations or private sector entities come before us and plead with us to regulate them.”

Key Background

Altman co-founded OpenAI in 2015 alongside notable names in tech such as Tesla CEO Elon Musk, PayPal co-founder and Palantir Technologies founder Peter Thiel, and LinkedIn co-founder Reid Hoffman.

The company started as a nonprofit and transitioned in 2019 to a “capped-profit” company, which limits investor returns to 100 times their original investment. OpenAI has since released GPT-4, a large language model that generates text in response to user prompts, and DALL-E, a deep learning model capable of generating original images.

OpenAI’s ChatGPT was estimated to have reached more than 100 million monthly active users in January, just two months after its launch, making it the fastest-growing consumer application in history, according to UBS. Companies such as Google have also entered the chatbot race: Google’s Bard, a conversational AI pegged as a ChatGPT competitor, is one the company also plans to incorporate into its signature search engine.

AI technology has drawn considerable scrutiny from government officials and scientists alike, who have cited concerns about privacy, job losses and the potential impact on elections. Aleksander Mądry, director of the MIT Center for Deployable Machine Learning, noted in an interview with Forbes that while a limited number of existing laws may apply to AI, explicit AI legislation is lacking, making hearings like the one held Tuesday important for shaping future guidelines.

Contra

Tech companies such as Apple, Amazon and Meta have actively fought against regulatory intervention. The tech industry spent more than $100 million on advertising designed to fight antitrust measures and other bills in Congress from the beginning of 2021 to the end of 2022, according to The Wall Street Journal.

Notably, OpenAI is not the first company to deliver a pro-regulation argument to Congress. In 2020, when Meta was still known as Facebook, CEO Mark Zuckerberg called for an updated, more accountable version of Section 230, the provision of the Communications Decency Act that shields online platforms from civil liability over third-party content.

This story was first published on forbes.com and all figures are in USD.

