Google’s Co-Head of Ethical AI Timnit Gebru Says She Was Fired for Email
The email and the firing were the culmination of about a week of wrangling over Google’s request that Gebru retract an AI ethics paper she had co-written.
By Dina Bass, Shelly Banjo, and Mark Bergen, Bloomberg | Updated: 4 December 2020 10:59 IST
Gebru had inquired on Twitter whether anyone was working on regulation to protect ethical AI researchers
Gebru was fired with a message that Google couldn’t meet her demands
Google Walkout For Real Change posted a petition in support of Gebru
Gebru has been an outspoken critic of lack of diversity at tech companies
Timnit Gebru, a co-leader of the Ethical Artificial Intelligence team at Google, said she was fired for sending an email that management deemed “inconsistent with the expectations of a Google manager.”
The email and the firing were the culmination of about a week of wrangling over the company's request that Gebru retract an AI ethics paper she had co-written with six others, including four Google employees, that was submitted for consideration for an industry conference next year, Gebru said in an interview Thursday. If she wouldn't retract the paper, Google at least wanted the names of the Google employees removed.
Gebru asked Google Research vice president Megan Kacholia for an explanation and told her that without more discussion on the paper and the way it was handled she would plan to resign after a transition period. She also wanted to make sure she was clear on what would happen with future, similar research projects her team might undertake.
“We are a team called Ethical AI, of course we are going to be writing about problems in AI,” she said.
Meanwhile, Gebru had chimed in on an email group for company researchers called Google Brain Women and Allies, commenting on a report others were working on about how few women had been hired by Google during the pandemic. Gebru said that in her experience, such documentation was unlikely to be effective as no one at Google would be held accountable. She referred to her experience with the AI paper submitted for the conference and linked to the report.
“Stop writing your documents because it doesn't make a difference,” Gebru wrote in the email, which was obtained by Bloomberg News. “There is no way more documents or more conversations will achieve anything.” The email was published earlier by tech writer Casey Newton.
The next day, Gebru said she was fired by email, with a message from Kacholia saying that Google couldn't meet her demands and respects her decision to leave the company as a result. The email went on to say that “certain aspects of the email you sent last night to non-management employees in the brain group reflect behaviour that is inconsistent with the expectations of a Google manager,” according to tweets Gebru posted, which also specifically called out Jeff Dean, who heads Google's AI division, for being involved in her removal.
I was fired by @JeffDean for my email to Brain women and Allies. My corp account has been cutoff. So I've been immediately fired :-)
“It's the most fundamental silencing,” Gebru said in the interview, about Google's actions in regard to her paper. “You can't even have your scientific voice.”
Representatives for Mountain View, California-based Google didn't reply to multiple requests for comment.
Google protest group Google Walkout For Real Change posted a petition in support of Gebru on Medium, which has gathered more than 400 signatures from employees at the company and more than 500 from academic and industry figures.
The research paper in question deals with possible ethical issues of large language models, a field of research being pursued by OpenAI, Google and others. Gebru said she doesn't know why Google had concerns about the paper, which she said was approved by her manager and submitted to others at Google for comment.
Google requires all publications by its researchers to be granted prior approval, Gebru said, and the company told her this paper had not followed proper procedure. The report was submitted to the ACM Conference on Fairness, Accountability, and Transparency, a conference Gebru co-founded that is to be held in March.
The paper called out the dangers of using large language models to train algorithms that could, for example, write tweets, answer trivia, and translate poetry, according to a copy of the document. The models are essentially trained by analysing language from the Internet, which doesn't reflect large swaths of the global population not yet online, according to the paper. Gebru highlights the risk that the models will only reflect the worldview of people who have been privileged enough to be a part of the training data.
Gebru, an alumna of the Stanford Artificial Intelligence Laboratory, is one of the leading voices in the ethical use of AI. She is well known for her work on a landmark study in 2018 that showed how facial recognition software misidentified dark-skinned women as much as 35 percent of the time, whereas the technology worked with near-perfect accuracy on White men.
She has also been an outspoken critic of the lack of diversity and unequal treatment of Black workers at tech companies, particularly at Alphabet's Google, and said she believed her dismissal was meant to send a message to the rest of Google's employees not to speak up.
Under fire
Tensions were already running high at Google's research division. Following the death of George Floyd, a Black man who was killed during an arrest by White police officers in Minneapolis in May, the division held an all-hands meeting where Black Googlers spoke about their experiences at the company. Many people broke down crying, according to people who attended the meeting.
Gebru disclosed the firing Wednesday night in a series of tweets, which were met with support from some of her Google co-workers and others in the field.
Gebru is “the reason many next generation engineers, data scientists and more are inspired to work in tech,” wrote Rumman Chowdhury, who formerly served as the head of Responsible AI at Accenture Applied Intelligence and now runs a company she founded called Parity.
Google, along with other US tech giants including Amazon and Facebook, has been under fire from the government for claims of bias and discrimination and has been questioned about its practices at several committee hearings in Washington.
A year ago, Google fired four employees for what it said were violations of data-security policies. The dismissals highlighted already escalating tensions between management and activist workers at a company once revered for its open corporate culture. Gebru took to Twitter at the time to support those who lost their jobs.
For the web search giant, Gebru's disputed termination comes as the company faces a complaint from the National Labor Relations Board for unlawful surveillance, interrogation or suspension of workers.
Earlier this week Gebru inquired on Twitter whether anyone was working on regulation to protect ethical AI researchers, similar to whistle-blower protections. “With the amount of censorship and intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?” she wrote on Twitter.
Rare voice
Gebru was a rare voice of public criticism from inside the company. In August, Gebru told Bloomberg News that Black Google employees who speak out are criticised even as the company holds them up as examples of its commitment to diversity. She recounted how co-workers and managers tried to police her tone, make excuses for harassing or racist behaviour, or ignore her concerns.
When Google was considering having Gebru manage another employee, she said in an August interview, her outspokenness on diversity issues was held against her, and concerns were raised about whether she could manage others if she was so unhappy. “People don't know the extent of the issues that exist because you can't talk about them, and the moment you do, you're the problem,” she said at the time.
--With assistance from Helene Fouquet and Gerrit De Vynck.