Despite steaming ahead with the self-imposed rules, the company said industrywide regulation is necessary.
Microsoft Corp. is planning to implement self-designed ethical principles for its facial recognition technology by the end of March, as it urges governments to push ahead with matching regulation in the field. (Business Standard)
The company in December called for new legislation to govern artificial intelligence software for recognizing faces, advocating for human review and oversight of the technology in some critical cases, as a way to mitigate the risks of biased outcomes, intrusions into privacy and democratic freedoms.
“We do need to lead by example and we’re working to do that,” Microsoft President and Chief Legal Officer Brad Smith said in an interview, adding that some other companies are also putting similar principles into place.
Smith said the company plans by the end of March to “operationalize” its principles, which involves drafting policies, building governance systems and engineering tools, and testing to make sure the technology is in line with its goals. It also involves setting controls for the company’s global sales and consulting teams to prevent selling the technology in cases where it risks being used for an unwanted purpose.
The use of facial recognition software by law enforcement, border security, the military and other government agencies has stirred concerns about the risks of bias and mass surveillance. Research has shown that some of the most popular products make mistakes and perform worse on people with darker skin. Microsoft, Amazon.com Inc. and Alphabet Inc.’s Google have all faced protests from employees and advocacy groups over the idea of selling AI software to government agencies or the police.
“It would certainly restrict certain scenarios or uses,” Smith said of the principles, adding that Microsoft wouldn’t necessarily reject providing governments with the technology. The company only wants to prevent law enforcement from using the technology for ongoing surveillance of a specific individual without proper safeguards, he said.
The company has turned down contracts for that reason, he said. One was a case that Smith said would have amounted to public surveillance in a national capital “in a country where we were not comfortable that human rights would be protected.” Another was deployment by a law enforcement agency in the U.S. that “we thought would create an undue risk of discrimination.”
Asked whether Microsoft would rule out working with Chinese law enforcement, especially in light of new rules to judge citizens on their social behavior, Smith said “it would definitely raise important questions in China.” He said that in any case it appears that Beijing is more interested in procuring facial-recognition technology from local firms instead of American ones.