On May 16, 2023, Sam Altman, co-founder and CEO of OpenAI (the company that created the ChatGPT chatbot), testified before a U.S. Senate Judiciary subcommittee about the pros and cons of this new technology.
Increase in Misinformation
Mr. Altman has acted as a representative for the developing AI industry, and has been open about confronting the ethical questions raised by AI; he has also pushed for more regulation. He said that AI could potentially be as significant as “the printing press,” but acknowledged that it carries many potential dangers. ChatGPT and other AI text-generation programs can answer questions in an extremely human-like way, but their responses can be very inaccurate. This creates an entirely new source of misinformation, much of it unintentional.
He said, “I think if this technology goes wrong, it can go quite wrong … we want to be vocal about that. We want to work with the government to prevent that from happening.”
Impact on Democracy
He told lawmakers that he was concerned about the potential impact AI could have on democracy, particularly how AI might be used to spread targeted misinformation during election campaigns. Beyond unintentional factual inaccuracies, generative AI can also be used to craft deliberate, persuasive misinformation. He said that this use of AI is one of his greatest concerns: “We’re going to face an election next year [2024], and these models are getting better.”
Job Obsolescence
He also discussed the potential impact of AI on the economy, including the likelihood that AI will replace some jobs, which could lead to layoffs in certain industries. Mr. Altman said, “There will be an impact on jobs. We try to be very clear about that.” He added that the U.S. government will “need to figure out how we want to mitigate that.”
Concrete Suggestions
He had concrete suggestions about ways in which the U.S. could regulate AI through a new government agency. These included “a combination of licensing and testing requirements” for all AI companies. Mr. Altman suggested that this system could be used to limit the “development and release of AI models above a threshold of capabilities.” He also argued that companies such as his own (OpenAI) should be independently audited.
Senator Richard Blumenthal, who chaired the hearing, offered this warning: “We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment.”
Conclusion
The take-away from this committee hearing was that there appears to be bipartisan support for a new U.S. government agency to regulate the AI sector. However, AI technology is evolving so rapidly that lawmakers wondered whether such an agency would be able to keep up.
None of this addresses AI beyond the borders of the U.S., so it’s hard to see how it will have a global impact. This is something that really needs to be addressed internationally. Either many countries will need to develop their own regulations and controls, or there will need to be international regulatory treaties that multiple nations can sign on to (similar to the way the bulk of copyright law works). International regulatory treaties, however, can be cumbersome, and seldom come into existence quickly. Here, unfortunately, time is of the essence.