Microsoft Plans To Eliminate Face Analysis Tools in Push for ‘Responsible AI’

For years, activists and academics have raised concerns that facial analysis software claiming to identify a person’s age, gender and emotional state can be biased, unreliable or invasive, and should not be sold.

Acknowledging some of those criticisms, Microsoft said Tuesday that it plans to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will stop being available to new users this week and will be phased out for existing users within the year.

The changes are part of a push by Microsoft for tighter controls on its artificial intelligence products. After a two-year review, a team at Microsoft has developed a “Responsible AI Standard,” a 27-page document that sets out requirements for AI systems to ensure they will not have a harmful impact on society.

The requirements include ensuring that systems provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.”

Before they are released, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or life opportunities are subject to a review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.

There was particular concern at Microsoft about the emotion recognition tool, which labeled a person’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.

“There’s a huge amount of cultural and geographic and individual variation in the way in which we express ourselves,” Ms. Crampton said. That led to reliability concerns, along with the bigger question of whether “facial expression is a reliable indicator of your internal emotional state,” she said.

The age and gender analysis tools being eliminated, along with other tools that detect facial attributes such as hair and smiles, could be useful for interpreting visual images for blind or low-vision people, for example, but the company decided it was problematic to make such profiling tools generally available to the public, Ms. Crampton said.

In particular, she added, the system’s so-called gender classifier was binary, “and that’s not consistent with our values.”

Microsoft will also put new controls on its face recognition feature, which can be used to perform identity checks or search for a particular person. Uber, for example, uses the software in its app to verify that a driver’s face matches the ID on file for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool will need to apply for access and explain how they plan to use it.

Users will also be required to apply and explain how they intend to use other potentially abusive AI systems, such as Custom Neural Voice. The service can generate a humanlike voice, based on a sample of someone’s speech, so that authors, for example, can create synthetic versions of their voice to read their audiobooks in languages they do not speak.

Because of the potential misuse of the tool, which could create the impression that people have said things they have not, speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks detectable by Microsoft.

“We’re taking concrete steps to live up to our AI principles,” said Ms. Crampton, who has worked as a lawyer at Microsoft for 11 years and joined the ethical AI group in 2018. “It’s going to be a huge journey.”

Microsoft, like other technology companies, has had stumbles with its artificially intelligent products. In 2016, it released a chatbot on Twitter, called Tay, that was designed to learn “conversational understanding” from the users it interacted with. The bot quickly began spouting racist and offensive tweets, and Microsoft had to take it down.

In 2020, researchers found that speech-to-text tools developed by Microsoft, Apple, Google, IBM and Amazon worked less well for Black people. Microsoft’s system was the best of the group, but it misidentified 15 percent of words for white people, compared with 27 percent for Black people.

The company had gathered diverse speech data to train its AI system but had not understood just how varied language could be. So it hired a sociolinguistics expert from the University of Washington to explain the language varieties that Microsoft needed to account for, going beyond demographics and regional differences to how people speak in formal and informal settings.

“Thinking about race as a determining factor of how someone speaks is actually a bit misleading,” Ms. Crampton said. “What we’ve learned in consultation with the expert is that actually a huge range of factors affect linguistic variety.”

Ms. Crampton said the work to fix that speech-to-text disparity helped inform the guidance set out in the company’s new standard.

“This is a critical norm-setting period for AI,” she said, pointing to Europe’s proposed regulations setting rules and limits on the use of artificial intelligence. “We hope to be able to use our standard to try to contribute to the vigorous, necessary debate over the standards that technology companies should be held to.”

A vigorous debate about the potential harms of AI has been underway for years in the technology community, fueled by mistakes and errors that have real consequences for people’s lives, such as algorithms that determine whether or not people receive welfare benefits. Dutch tax authorities mistakenly took child care benefits away from needy families when a flawed algorithm penalized people with dual nationality.

Automated facial recognition software has been particularly controversial. Last year, Facebook shut down its decade-old system for identifying people in photos. The company’s vice president of artificial intelligence cited “the many concerns about the place of facial recognition technology in society.”

Several Black men have been wrongfully arrested after flawed facial recognition matches. And in 2020, around the time of the Black Lives Matter protests after the police killing of George Floyd in Minneapolis, Amazon and Microsoft issued moratoriums on the use of their facial recognition products by the police in the United States, saying clearer laws governing their use were needed.

Since then, Washington and Massachusetts have passed regulations requiring, among other things, judicial oversight over police use of facial recognition tools.

Ms. Crampton said Microsoft had considered starting to make its software available to the police in states with such laws on the books but had decided, for now, not to do so. She said that could change as the legal landscape changes.

Arvind Narayanan, a Princeton professor of computer science and a prominent AI expert, said companies might be stepping back from technologies that analyze faces because they are “more visceral, as opposed to various other kinds of AI that might be dubious but that we don’t necessarily feel in our bones.”

Companies may also realize that, at least for now, some of these systems are not that commercially valuable, he said. Microsoft could not say how many users it had for the facial analysis features it is removing. Mr. Narayanan predicted that companies would be less likely to abandon other invasive technologies, such as targeted advertising, which profiles people to choose the best ads to show them, because they are a “cash cow.”
