Who gets held accountable when a facial recognition algorithm fails? And how?

Ellen Broad
3 min read · Oct 5, 2017

Earlier this week, Australian Prime Minister Malcolm Turnbull confirmed that Premiers and Chief Ministers were being asked to share their state and territory driver’s licence data for a national facial recognition database. Today, they agreed to do so.

Fraunhofer Face Finder, by Steve Jurvetson (CC-BY)

It’s a hard proposal to argue against.

Law enforcement is important. And facial recognition technology isn’t new anymore. It’s already being used for a variety of purposes within the private and public sectors. Hell, when Apple’s iPhone X comes out later this year, facial recognition will become part of the devices we carry every day in our back pockets.

And as the Prime Minister points out, images of people’s faces aren’t difficult to find online these days either.

They’re already being scraped and used to train facial recognition algorithms, not just for national security but for other, potentially more harmful, purposes. The Center on Privacy & Technology at Georgetown Law has estimated that half of all US adults (over 117 million people) are already enrolled in unregulated facial recognition networks.

So maybe it’s too late to stop facial recognition. Let’s talk instead about how desperately it needs regulation.

We know facial recognition technology is capable of bias and error.

In the US, studies have shown that facial recognition algorithms are consistently less accurate at identifying African American faces. Joy Buolamwini, an MIT Media Lab researcher, has talked eloquently about the difficulty she had getting a robot she built with widely available facial recognition software to recognise her own face. She’s black. Stories about facial recognition technology mistakenly flagging Asian faces as blinking, tagging black people as primates and failing to register black faces in frame at all have gone viral.

There are a few reasons for these kinds of errors. The datasets used to train facial recognition algorithms might not contain enough diverse faces. The people designing the systems might inadvertently build in their own biases. Default camera settings don’t properly expose darker skin.
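The first of those causes is at least straightforward to check for. As a minimal sketch (the file name, column names and 5% threshold below are illustrative assumptions, not anyone’s actual pipeline), auditing the demographic make-up of a training set can be a few lines of Python:

```python
import pandas as pd

# Hypothetical manifest of training images with self-reported demographic
# labels; the file name and columns are illustrative assumptions.
manifest = pd.read_csv("training_images.csv")  # columns: image_path, group

# Share of the training set contributed by each demographic group.
composition = manifest["group"].value_counts(normalize=True)
print(composition)

# Flag any group that falls below an arbitrary 5% share of the data,
# a crude signal the model may see too few examples of those faces.
underrepresented = composition[composition < 0.05]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))
```

A check like this doesn’t fix bias, but it makes the gaps visible before a model is trained rather than after it fails in the field.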

When we talk about using Australian driver’s licence photos to build a national facial recognition database, this potential for error matters.

Indigenous Australians, for example, make up 3.3% of the population and almost certainly less than 3% of people with driver’s licences, but 28% of the total prison population. The Prime Minister has talked about using facial recognition technology in shopping malls and airports.

What safeguards are being put in place to make sure Indigenous people — or any other racial minority — are not disproportionately exposed to error? How are agencies currently measuring error within facial recognition algorithms? What is the error rate?
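Part of the answer depends on pinning down what “the error rate” even means. For a one-to-many search there are two separate failure modes: a false match (an innocent person flagged as someone else) and a false non-match (a genuine match missed), and a single averaged figure can hide large differences between groups. Here is a minimal sketch of reporting both rates disaggregated by group; the records and field names are hypothetical, not drawn from any agency’s evaluation:

```python
from collections import defaultdict

# Hypothetical evaluation records scored against ground truth:
# (demographic group, was this a genuine match?, did the system flag a match?)
results = [
    ("group_a", False, True),   # innocent person flagged: a false match
    ("group_a", True,  True),   # genuine match found
    ("group_b", True,  False),  # genuine match missed: a false non-match
    ("group_b", False, False),  # correctly not matched
]

counts = defaultdict(lambda: {"matches": 0, "missed": 0,
                              "non_matches": 0, "false_matches": 0})
for group, genuine, flagged in results:
    c = counts[group]
    if genuine:
        c["matches"] += 1
        c["missed"] += int(not flagged)       # false non-match
    else:
        c["non_matches"] += 1
        c["false_matches"] += int(flagged)    # false match

for group, c in sorted(counts.items()):
    fmr = c["false_matches"] / c["non_matches"] if c["non_matches"] else 0.0
    fnmr = c["missed"] / c["matches"] if c["matches"] else 0.0
    print(f"{group}: false match rate {fmr:.0%}, false non-match rate {fnmr:.0%}")
```

Reporting the rates per group is what makes disproportionate error visible at all; a single overall accuracy figure can’t.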

Within the coming weeks, the Australian government will unveil its “consumer first” approach to data policy. As consumers, we’ll potentially have greater control over how data about us is collected, stored and used.

What about as citizens?

When should we be able to request that images of us be removed or modified in a facial recognition database? When should we be informed that images of us are *in* a facial recognition database?

Today the PM noted that the national facial recognition database will be accessible by “anyone with a lawful purpose”, which extends well beyond law enforcement. Should information about who is accessing that data be open? What standards should be set around the use of facial recognition algorithms trained on that data, and around permissible margins of error?

If we accept that people have some basic rights over data that is about them, and that data users and system designers have some basic responsibilities when using data about people, then we can’t ignore these in the context of facial recognition.

Whether it’s facial recognition for law enforcement or any other purpose.

If anything, the fact that it could be used for law enforcement makes it more important to clearly establish the rights of people and the responsibilities of system designers and data users. The stakes are that much higher if we get it wrong.


Ellen Broad

ellenbroad.com. 3A Institute, Australian National University. Data ethics | open data | responsible technology. Board game whisperer @datopolis.