ChatGPT's Popularity Worries US Lawmakers About Its Impact on National Security

ChatGPT was estimated to have reached 100 million monthly active users, making it the fastest-growing consumer application in history.

ChatGPT has already been banned in schools in New York and Seattle, according to media reports.

Highlights
  • ChatGPT was created by OpenAI, a private company backed by Microsoft
  • Its ubiquity has generated fear about spread of disinformation
  • ChatGPT listed potential areas of focus for regulators

ChatGPT, a fast-growing artificial intelligence program, has drawn praise for its ability to write quick answers to a wide range of queries, and it has attracted US lawmakers' attention, raising questions about its impact on national security and education.

ChatGPT was estimated to have reached 100 million monthly active users just two months after launch, making it the fastest-growing consumer application in history, and a growing target for regulation. 

It was created by OpenAI, a private company backed by Microsoft, and made available to the public for free. Its ubiquity has generated fear that generative AI such as ChatGPT could be used to spread disinformation, while educators worry it will be used by students to cheat. 

Representative Ted Lieu, a Democrat on the House of Representatives Science Committee, said in a recent opinion piece in the New York Times that he was excited about AI and the "incredible ways it will continue to advance society," but also "freaked out by AI, specifically AI that is left unchecked and unregulated."

Lieu introduced a resolution written by ChatGPT that said Congress should focus on AI "to ensure that the development and deployment of AI is done in a way that is safe, ethical, and respects the rights and privacy of all Americans, and that the benefits of AI are widely distributed and the risks are minimised." 

In January, OpenAI CEO Sam Altman went to Capitol Hill where he met with tech-oriented lawmakers such as Senators Mark Warner, Ron Wyden and Richard Blumenthal and Representative Jake Auchincloss, according to aides to the Democratic lawmakers.

An aide to Wyden said the lawmaker pressed Altman on the need to make sure AI did not include biases that would lead to discrimination in the real world, such as in housing or employment.

"While Senator Wyden believes AI has tremendous potential to speed up innovation and research, he is laser-focused on ensuring automated systems don't automate discrimination in the process," said Keith Chu, an aide to Wyden. 

A second congressional aide described the discussions as focusing on the speed of changes in AI and how it could be used. 

Prompted by worries about plagiarism, ChatGPT has already been banned in schools in New York and Seattle, according to media reports. One congressional aide said the concern they were hearing from constituents came mainly from educators focused on cheating.

OpenAI said in a statement: "We don't want ChatGPT to be used for misleading purposes in schools or anywhere else, so we're already developing mitigations to help anyone identify text generated by that system."

In an interview with Time, Mira Murati, OpenAI's chief technology officer, said the company welcomed input, including from regulators and governments. "It's not too early (for regulators to get involved)," she said.

Andrew Burt, managing partner of BNH.AI, a law firm focused on AI liability, pointed to the national security concerns, adding that he has spoken with lawmakers who are studying whether to regulate ChatGPT and similar AI systems such as Google's Bard, though he said he could not disclose their names.

"The whole value proposition of these types of AI systems is that they can generate content at scales and speeds that humans simply can't," he said. 

"I would expect malicious actors, non-state actors and state actors that have interests that are adversarial to the United States to be using these systems to generate information that could be wrong or could be harmful." 

ChatGPT itself, when asked how it should be regulated, demurred and said: "As a neutral AI language model, I don't have a stance on specific laws that may or may not be enacted to regulate AI systems like me." But it then went on to list potential areas of focus for regulators, such as data privacy, bias and fairness, and transparency in how answers are written.

© Thomson Reuters 2023
