By Lydia Kiros

Google’s Firing of Leading Black A.I. Researcher is a Blow to Minority Inclusion in Tech

In late November, prominent Google researcher Dr. Timnit Gebru sent an email to her colleagues expressing frustration over Google’s response to efforts by her and other employees to increase minority hiring and draw attention to bias in artificial intelligence. Google’s answer? Dr. Gebru was fired just days later, while on vacation. The tech giant’s reaction calls into question how much it really cares about diversity and minority representation.


Photo: Dr. Timnit Gebru by Cody O’Loughlin for The New York Times


Dr. Gebru’s email, sent to a Google Brain diversity and inclusion mailing list, depicted a disturbing scenario that spoke to the hostile environment created by Google’s leadership when it came to efforts to increase minority representation. “There is no incentive to hire 39% women: your life gets worse when you start advocating for underrepresented people. You start making the other leaders upset,” the email read. “There is no way more documents or more conversations will achieve anything.”


Her exasperation stemmed from the company’s treatment of a research paper she had written with six other researchers, four of them at Google. The paper focused on flaws in a new line of language technology, including a system built by Google known as BERT, which serves as the foundation for its search engine. BERT has been criticized for picking up the biases woven into digitized information. These systems learn the vagaries of language by analyzing large amounts of text, including thousands of books, Wikipedia entries and other online documents. Because this text includes biased and sometimes hateful language, however, the technology may in turn generate biased and hateful language. Researchers worry that the people building artificial intelligence systems may be building their own biases into the technology. Over the past several years, many public experiments have revealed that these systems often interact differently with people of color, likely because people of color are underrepresented among the developers who create them.


Video Courtesy: Slate


After she and the other researchers submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper from the conference or remove her name along with the names of the other Google employees. She refused to do so without further discussion and, in an email to the company, said that she would resign after an appropriate amount of time if Google could not explain why it wanted the paper retracted and address her other concerns. According to Dr. Gebru, the company responded that it could not meet her demands and that her resignation was accepted effective immediately. Her access to company email and other services was revoked at once. Jeff Dean, who oversees Google’s A.I. work, including that of Dr. Gebru and her team, said in a note to employees that Google respected “her decision to resign.” He also said the paper did not acknowledge recent research showing ways of mitigating bias in such systems. Regardless, it seems Mr. Dean and Google failed to have that conversation with Dr. Gebru. “It was dehumanizing,” Dr. Gebru said in an interview with The New York Times. “They may have reasons for shutting down our research. But what is most upsetting is that they refuse to have a discussion about why.”


Dr. Gebru’s untimely departure from Google comes as A.I. technology plays an increasingly large role in nearly every aspect of Google’s business. The company’s future is inextricably tied to artificial intelligence as the breakthrough technology that will make its next generation of services and devices smarter and more capable, whether through its voice-enabled digital assistant or its automated placement of advertising for marketers. Google has repeatedly committed to eliminating bias in its systems. The problem, Dr. Gebru said, is that most of the people making the final decisions are men. Black women, in fact, make up only 1.6 percent of Google’s workforce. “They are not only failing to prioritize hiring more people from minority communities, they are quashing their voices,” she said.



Indeed, many have come out in support of Dr. Gebru in the wake of her termination, with #IStandWithTimnit surfacing on Twitter. Many expressed their disappointment and frustration, especially other Black women and men in tech. Google, like other tech companies, has faced criticism for not doing enough to address the underrepresentation of women and racial minorities in its workforce. Close to 7,000 Googlers and supporters from academia, industry, and civil society have signed a petition backing Dr. Gebru, with a list of demands directed at Google Research leadership. Her termination sends a discouraging message to people of color in tech, especially women: that their voices and contributions do not matter and are easily disposable. Can a company truly claim the ethical and moral high ground in its products and culture if it cannot handle critique from one of its own? Dr. Gebru’s departure highlights the growing tension between Google’s outspoken workforce and its senior management, while raising concerns over the company’s efforts to build fair and reliable technology.


Dr. Gebru, 37, born and raised in Ethiopia, has made groundbreaking contributions to A.I. In 2018, while a researcher at Stanford University, she helped write a paper that is widely seen as a turning point in efforts to identify and remove bias in artificial intelligence. Later that year, she joined Google and helped build the Ethical A.I. team, serving as its co-leader. She has also worked on computer vision problems in fine-grained object recognition; used large-scale image sets to gain sociological insight; conducted audits of biased facial recognition systems that have influenced real-world regulation; designed standards and processes to mitigate ethical issues with datasets and models; and more.



Beyond her research, she is one of the founders of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), one of the most prestigious and well-known conferences on machine learning ethics. As co-founder of Black in AI, she helped increase the number of Black attendees at NeurIPS (the Conference on Neural Information Processing Systems) from just 6 in 2016 to 500 in 2017. After more than half of Black in AI’s speakers could not obtain visas to Canada for NeurIPS 2018, she successfully advocated for holding ICLR 2020 in Ethiopia, which would have made it one of the first major A.I. conferences held on the African continent. Unfortunately, the conference had to move online due to COVID-19.


Dr. Gebru’s vast accomplishments and groundbreaking work speak for themselves and demonstrate the value she brings to the table. As these systems evolve, it is vital that old biases and prejudices do not follow them into the future. As technology increasingly dominates and dictates our lives, it is essential to have Black women and men like Dr. Gebru involved to ensure its fair, ethical and unbiased creation. There should be “nothing about us without us.”

Copyright © 2019 Amplify Africa. All rights reserved.