Microsoft Plans To Eliminate Face Analysis Tools in Push for ‘Responsible AI’

from the moving-forward dept.

For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive — and shouldn’t be sold. From a report: Acknowledging some of those criticisms, Microsoft said on Tuesday that it planned to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will stop being available to new users this week, and will be phased out for existing users within the year. The changes are part of a push by Microsoft for tighter controls of its artificial intelligence products. After a two-year review, a team at Microsoft has developed a “Responsible AI Standard,” a 27-page document that sets out requirements for A.I. systems to ensure they are not going to have a harmful impact on society.

The requirements include ensuring that systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.” Before they are released, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or a life opportunity are subject to a review by a team led by Natasha Crampton, Microsoft’s chief responsible A.I. officer.
