Google CEO Says Fears About Artificial Intelligence Are 'Very Legitimate'

By Tony Romm, Drew Harwell, Craig Timberg, The Washington Post | Updated: 13 December 2018 10:00 IST
Highlights
  • New AI tools require companies to set ethical guardrails: Pichai
  • "I think tech has to realise it just can't build it, and then fix it"
  • Pichai said he is optimistic about the technology's long-term benefits

Google CEO Sundar Pichai, head of one of the world's leading artificial intelligence companies, said in an interview this week that concerns about harmful applications of the technology are "very legitimate" - but argued that the tech industry should be trusted to regulate its use responsibly.

Speaking with The Washington Post on Tuesday afternoon, Pichai said that new AI tools - the backbone of innovations such as driverless cars and disease-detecting algorithms - require companies to set ethical guardrails and think through how the technology can be abused.

"I think tech has to realise it just can't build it, and then fix it," Pichai said. "I think that doesn't work."

Tech giants have to ensure that artificial intelligence with "agency of its own" doesn't harm humankind, Pichai said. He said he is optimistic about the technology's long-term benefits, but his assessment of the potential risks of AI parallels that of some tech critics who say the technology could be used to empower invasive surveillance, deadly weaponry and the spread of misinformation. Other tech executives, like SpaceX and Tesla founder Elon Musk, have offered more dire predictions that AI could prove to be "far more dangerous than nukes."

Google's AI technology underpins a range of initiatives, from the company's controversial China project to the surfacing of hateful conspiratorial videos on its YouTube subsidiary - a problem he vowed to address in the coming year. How Google decides to deploy its AI has also sparked recent employee unrest.

Pichai's call for self-regulation followed his testimony in Congress, where lawmakers threatened to impose limits on technology in response to its misuse, including as a conduit for spreading misinformation and hate speech. His acknowledgement of the potential threats posed by AI was notable because the Indian-born engineer has often touted the world-shaping implications of automated systems that could learn and make decisions without human control.

Pichai said in the interview that lawmakers around the world are still trying to grasp AI's effects and the potential need for government regulation. "Sometimes I worry people underestimate the scale of change that's possible in the mid-to-long term, and I think the questions are actually pretty complex," he said. Other tech giants, including Microsoft, recently have embraced regulation of AI - both by the companies that create the technology and the governments that oversee its use.

But AI, if handled properly, could have "tremendous benefits," Pichai explained, including helping doctors detect eye disease and other ailments through automated scans of health data. "Regulating a technology in its early days is hard, but I do think companies should self-regulate," he said. "This is why we've tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation."

Pichai, who joined Google in 2004 and became chief executive 11 years later, in January called AI "one of the most important things that humanity is working on." He said it could prove to be "more profound" for human society than "electricity or fire." But the race to perfect machines that can operate on their own has rekindled familiar fears that Silicon Valley's corporate ethos - "move fast and break things," as Facebook once put it - could result in powerful, imperfect technology eliminating jobs and harming average people.

Within Google, its AI efforts also have created controversy: The company faced heavy criticism earlier this year due to its work on a Defense Department contract involving AI that could automatically tag cars, buildings and other objects for use in military drones. Some employees resigned due to what they called Google's profiting off the "business of war."

Asked about the employee backlash, Pichai told The Post that his workers were "an important part of our culture." "They definitely have an input, and it's an important input; it's something I cherish," he said.

In June, after announcing that Google wouldn't renew the contract next year, Pichai unveiled a set of AI-ethics principles that included general bans on developing systems that could be used to cause harm, damage human rights or aid in "surveillance violating internationally accepted norms."

The company faced earlier criticism for releasing AI tools that could be misused in the wrong hands. Google's release in 2015 of its internal machine-learning software, TensorFlow, has helped accelerate the wide-scale development of AI, but it has also been used to automate the creation of lifelike fake videos that have been used for harassment and disinformation.

Google and Pichai have defended the release by saying that keeping the technology restricted could lead to less public oversight and prevent developers and researchers from advancing its capabilities in beneficial ways.

"Over time, as you make progress, I think it's important to have conversations around ethics (and) bias, and make simultaneous progress," Pichai said during his interview with The Post.

"In some sense, you do want to develop ethical frameworks, engage noncomputer scientists in the field early on," he said. "You have to involve humanity in a more representative way, because the technology is going to affect humanity."

Pichai likened the early work to set parameters around AI to the academic community's efforts in the early days of genetics research. "Many biologists started drawing lines on where the technology should go," he said. "There's been a lot of self-regulation by the academic community, which I think has been extraordinarily important."

The Google executive said self-regulation would be most essential in the development of autonomous weapons, an issue that has rankled tech executives and employees. In July, thousands of tech workers representing companies including Google signed a pledge against developing AI tools that could be programmed to kill.

Pichai also said he found some hateful, conspiratorial YouTube videos described in a Washington Post story on Tuesday "abhorrent," and he indicated that the company would work to improve its systems for detecting problematic content. The videos, which had been watched millions of times on YouTube since appearing in April, discussed baseless allegations that Democrat Hillary Clinton and her longtime aide Huma Abedin had attacked, killed and drank the blood of a girl.

Pichai said he had not seen the videos, which he was questioned about during the congressional hearing, and he declined to say whether YouTube's shortcomings in this area were a result of limits in the detection systems or in policies for evaluating whether a particular video should be removed. But he added, "You'll see us in 2019 continue to do more here."

Pichai also portrayed Google's efforts to develop a new product for the government-controlled Chinese internet market as preliminary, declining to say what the product might be or when it would come to market - if ever.

Dubbed Project Dragonfly, the effort has caused backlash among employees and human-rights activists who warn about the possibility of Google assisting government surveillance in a country that tolerates little political dissent. When asked whether it's possible that Google might make a product that allows Chinese officials to know who searches for sensitive terms, such as the Tiananmen Square massacre, Pichai said it was too soon to make any such judgments.

"It's a hypothetical," Pichai said. "We are so far away from being in that position."

© The Washington Post 2018

