Do Our Faces Deserve the Same Protection as Our Phones?
In other scenarios, the benefits are more far-reaching. In Washington, DC, the National Human Genome Research Institute is using facial recognition to help physicians diagnose a disease known as DiGeorge syndrome, or 22q11.2 deletion syndrome. It’s a disease that more often afflicts people of African, Asian, or Latin American descent. It can lead to a variety of severe health problems, including damage to the heart and kidneys. But it also often manifests itself in subtle facial characteristics that can be identified by computers using facial recognition systems, which can help a doctor diagnose a patient in need.
These scenarios illustrate important and concrete ways that facial recognition can be used to benefit society. It’s a new tool for the 21st century.
Like so many other tools, however, it can also be turned into a weapon. A government might use facial recognition to identify every individual attending a peaceful rally, following up in ways that could chill free expression and the ability to assemble. And even in a democratic society, the police might rely excessively on this tool to identify a suspect without appreciating that facial recognition, like every technology, doesn’t always work perfectly.
For all these reasons, facial recognition easily becomes intertwined with broader political and social issues and raises a vital question: What role do we want this form of artificial intelligence to play in our society?
A glimpse of what lay ahead emerged suddenly in the summer of 2018, in relation to one of the hottest political topics of the season. In June, a gentleman in Virginia, a self-described “free software tinkerer” with a clear interest in broader political issues, posted a series of tweets about a contract Microsoft had with US Immigration and Customs Enforcement, or ICE. His tweets were based on a story posted on the company’s marketing blog in January, a post that frankly everyone at the company had forgotten. It said that Microsoft’s technology for ICE had passed a high security threshold and would be deployed by the agency. It said the company was proud to support the agency’s work, and it included a sentence about the resulting potential for ICE to use facial recognition.
In June 2018, the Trump administration’s decision to separate children from parents at the southern US border had become an explosive issue. A marketing statement made several months earlier now looked a good deal different. And the use of facial recognition technology looked different as well. People worried about how ICE and other immigration authorities might put something like facial recognition to work. Did this mean that cameras connected to the cloud could be used to identify immigrants as they walked down a city street? Did it mean, given the state of this technology, with its risk of bias, that it might misidentify individuals and lead to the detention of the wrong people? These were but two of many questions.
By dinnertime in Seattle, the tweets about the marketing blog were tearing through the internet, and our communications team was working on a response. Some employees on the engineering and marketing teams suggested that we simply pull the post down, saying, “It is quite old and not of any business impact at this point.” Three times, Frank Shaw, Microsoft’s communications head, advised them not to take it down. “It will only make things worse,” he said. Nonetheless, someone couldn’t resist the temptation and deleted part of the post. Sure enough, things then got worse, and another round of negative coverage followed. By the next morning, people had learned the obvious lesson, and the post was back up in its original form.