Microsoft seeks to restrict abuse of its facial recognition AI
Microsoft is planning to implement self-designed ethical principles for its facial recognition technology by the end of March, as it urges governments to push ahead with matching regulation in the field.
The company in December called for new legislation to govern artificial intelligence software for recognising faces, advocating for human review and oversight of the technology in some critical cases, as a way to mitigate the risks of biased outcomes, intrusions into privacy and democratic freedoms.
“We do need to lead by example and we’re working to do that,” Microsoft President and chief legal officer Brad Smith said in an interview, adding that some other companies are also putting similar principles into place.
Smith said the company plans by the end of March to “operationalise” its principles, which involves drafting policies, building governance systems, and engineering tools and tests to make sure the technology is in line with its goals. It also involves setting controls for the company’s global sales and consulting teams to prevent selling the technology in cases where it risks being used for an unwanted purpose.
The use of facial recognition software by law enforcement, border security, the military and other government agencies has stirred concerns about the risks of bias and mass surveillance. Research has shown that some of the most popular products make mistakes and perform worse on people with darker skin.
Microsoft, Amazon.com and Alphabet’s Google have all faced protests from employees and advocacy groups over the idea of selling AI software to government agencies or the police.
“It would certainly restrict certain scenarios or uses,” Smith said of the principles, adding that Microsoft wouldn’t necessarily reject providing governments with the technology. The company only wants to prevent law enforcement from using the technology for ongoing surveillance of a specific individual without the preferred safeguards, he said.
The company has turned down contracts for that reason, he said. One was a case that Smith said would have amounted to public surveillance in a national capital “in a country where we were not comfortable that human rights would be protected.” Another was deployment by a law enforcement agency in the US that “we thought would create an undue risk of discrimination.”
Asked whether Microsoft would rule out working with Chinese law enforcement, especially in light of new rules judging citizens on their social behaviour, Smith said “it would definitely raise important questions in China.” He said that in any case Beijing appears more interested in procuring facial recognition technology from local firms than from American ones.
Even as it presses ahead with the self-imposed rules, the company said industry-wide regulation remains necessary.
“You never want to create a market that forces companies to choose between being successful and being responsible and unless we have a regulatory floor there is a danger of that happening,” Smith said.