- Google on Thursday announced engineering VP Marian Croak would oversee its work in responsible AI.
- In an email, Croak said the reorganization would give Google an “overarching focus” on this work.
- The reorg comes after weeks of internal turmoil following the ousting of the ethicist Timnit Gebru.
Google on Thursday announced it had appointed Marian Croak, the vice president of engineering, to oversee the company’s artificial-intelligence research after weeks of turmoil within the division.
In an email sent to staff, which was obtained by Insider, Croak said the reorganization would give Google an “overarching focus” on its work in responsible AI, the field focused on developing artificial intelligence that benefits society while avoiding harmful outcomes.
She also acknowledged the employees who would be part of what Google is calling a “center of expertise” for responsible AI.
Croak, who is one of only a few Black executives at Google, will now oversee teams from various divisions at Google that work on responsible AI, including the ethical-AI team formerly co-led by Timnit Gebru.
In December, Gebru said she had been fired by Google for pushing back against management. Google disputes this, saying it accepted her resignation, but the episode rankled employees both inside and outside the research group.
In a statement to Insider on Thursday, Gebru called Croak a “highly accomplished” scientist whom she admired but said it was “incredibly hurtful to see her legitimizing what Jeff Dean and his subordinates have done to me and my team.”
Members of the ethical-AI team also expressed frustration that they weren’t consulted about the reorganization ahead of its announcement.
Both Croak and Dean emailed employees on Thursday announcing the news. Neither email addressed Gebru or Margaret Mitchell, the other co-lead of Google’s ethical-AI team, who has been locked out of the corporate network and placed under investigation over events related to Gebru’s departure.
But in a video posted to Google’s blog, Croak said: “There’s quite a lot of conflict right now within the field, and it can be polarizing at times, and what I’d like to do is just have people have the conversation in a more diplomatic way.”
Here’s the full email she sent to staff.
Hello Research team,
I’m excited and honored to lead the teams focused on research of such critical importance for our products and the world. Since I joined Google six years ago, I’ve been continually impressed by the boundary-pushing scientific advances and product innovations coming from the Research organization, and am thrilled to now be a part of it.
Throughout my career, I’ve always felt strongly about ensuring that technology has a positive impact on the world. To do that, we need to focus more on the many ways technology affects people and continually work toward making technology fairer, safer, and more inclusive for everyone. This is what’s at the heart of our Responsible AI work in Research and indeed all of our AI work across the company.
Our work in this space spans a broad spectrum, from pure, fundamental research to the applied work we do in partnership with PAs. I’m eager to deepen our partnership with PAs to maximize and scale the impact of the discoveries we make in Research and the tools and techniques we invent. The products Google builds touch the lives of billions of people, so by responsibly infusing the latest and best research into those products, we can really improve people’s daily experiences. In addition to doubling down on our work with PAs, I remain deeply committed to the cutting-edge research we do across the Responsible AI landscape and look forward to expanding our collective contributions to the broader AI research community.
All of us in the Research org have a role to play in advancing Responsible AI and helping the company live up to our AI principles, but I want to particularly acknowledge the teams and individuals that will be formally joining me in this new center of expertise: Accessibility, AI for Social Good, Algorithmic Fairness in Health, Brain Fairness, Ethical AI, PAIR, SIR Responsible ML, Responsible ML Infra, Responsible AI PgMs, and key partners from the Responsible Innovation team. We’re a diverse collection of teams, but together in this new organization, we’ll be united with an overarching focus on steering the development and use of technology toward positive impact for our users and the world at large.
I’ve met some of you already, and I’m looking forward to meeting so many more of you. I’ve been deeply humbled by your talent, thoughtfulness, and empathy for one another: Thank you for the warm welcome — I already feel at home. We’re hosting a Q&A tomorrow to talk through this new organization (you’ll see the details in a calendar invite, including livestream and Dory) — I hope to see you there.