Timnit Gebru, one of the best-known AI researchers today and co-lead of an AI ethics team at Google, no longer works at the company. Details are still being gathered, but according to Gebru, she was fired Wednesday for sending an email to “non-management employees that is inconsistent with the expectations of a Google manager.” VentureBeat reached out to Gebru, a Google spokesperson, and Google AI chief Jeff Dean for comment. This story will be updated if we hear back.
I was fired by @JeffDean for my email to Brain women and Allies. My corp account has been cutoff. So I’ve been immediately fired :-)
— Timnit Gebru (@timnitGebru) December 3, 2020
In a tweet published days before leaving Google, Gebru questioned whether any regulation exists to protect members of the AI ethics community the way whistleblower laws protect workers in other fields.
Gebru leaves Google the same day the National Labor Relations Board (NLRB) filed a complaint alleging that the company spied on and illegally fired two employees involved in labor organizing.
Gebru is known for some of the most influential work in algorithmic accountability and transparency, and for combating algorithmic bias that carries the potential to automate oppression. Gebru is a cofounder of the Fairness, Accountability, and Transparency (FAccT) conference and of Black in AI, a group that hosts community gatherings and mentors young people of African descent. Black in AI holds its annual workshop Monday at NeurIPS, the largest AI research conference in the world.
Before coming to Google, Gebru joined Algorithmic Justice League founder Joy Buolamwini and created the Gender Shades project to assess the performance of facial recognition systems from major vendors like IBM and Microsoft. That work concluded that facial recognition tends to work best on white men and worst on women with darker skin tones. That research, along with subsequent work by Buolamwini and Deborah Raji in 2019, has been highly influential among lawmakers deciding how to regulate the technology and in shaping public attitudes about the threat posed by algorithmic bias.
While working at Microsoft Research, she coauthored “Datasheets for Datasets,” a paper that recommends including a set of standard information with datasets in order to provide data scientists with context before they decide to use that data for training an AI model. “Datasheets for Datasets” would later act as motivation for the creation of model cards.
As a Google employee, Gebru joined Margaret Mitchell, Raji, and others in writing a 2019 paper about model cards, a framework for reporting a model’s benchmark performance so machine learning practitioners can evaluate it before putting it to use. Google Cloud began providing model cards for some of its AI services last year, and this summer introduced the Model Card Toolkit for developers to make their own model cards.
This summer, Gebru and her former colleague Emily Denton led a tutorial about fairness and ethics in computer vision at the Computer Vision and Pattern Recognition (CVPR) conference, and shortly after Gebru got into a public spat with Facebook director of AI research Yann LeCun about AI bias.
Members of the AI community and others have at times referred to Gebru as someone actively trying to save the world. Earlier this year, Gebru was included in “Good Night Stories for Rebel Girls: 100 Immigrant Women Who Changed the World,” a book released in October.
More to come.