Airports Are Embracing Facial Recognition. Should We Be Worried?

Waiting in a sluggish line to speak with a cranky border agent may soon be a thing of the past: Imagine gaining entry to another country within 15 seconds, with no human interaction or physical documents required. That scenario already exists in the Smart Tunnel, which verifies passengers’ identities with facial and iris recognition via 80 cameras and processes the data using artificial intelligence. Dubai International Airport piloted the Smart Tunnel, the first technology of its kind in the world, in 2018.

While it may not always appear ripped from a sci-fi movie, you’ve likely already undergone some sort of biometric screening in a U.S. airport. After the 9/11 attacks, the Department of Homeland Security (DHS) and its interior agencies ramped up security measures to confirm travelers’ identities and root out terrorism. In 2004, U.S. airports began screening the faces and fingerprints of passengers flying into the country. DHS now uses facial recognition in part to track whether people have overstayed their visas.

Pushing Biometric Boundaries

But in recent years, airports and other travel locations have kicked things up a notch. Airlines are now collaborating with federal officials to reduce lines and circumvent human inefficiency whenever possible. As of last year, DHS had already used facial recognition on over 43 million people throughout the country at border crossings and departing cruise ships, among other locations.

Travelers can also pay for the CLEAR program — the first iteration came into existence shortly after 2001 — which allows them to skip security lines for a fee. At the futuristic kiosks, customers’ biometric features, such as fingerprints and irises, are converted into a unique, encrypted code that represents their identity.

You can currently find advanced biometric security at travel spots including Boston’s Logan International Airport, for example, where JetBlue made history in 2017 by becoming the first airline to self-board passengers via facial recognition. 

Last month, Delta embraced facial recognition to streamline operations at its domestic terminal at Atlanta’s Hartsfield-Jackson International airport — travelers who meet certain criteria can choose to drop off their bags, breeze through security and board via facial recognition scans. The airline rolled out a similar option for the airport’s international travelers in 2018.

The ultimate goal: gate-free border crossings, boardings and flight check-ins. Soon enough, your body could serve as your primary form of ID.

How Artificial Intelligence Reads You

Biometric scans attempt to match a stored passport, driver’s license or other identification image with a live photo captured by on-site cameras. The algorithm used by the government’s Traveler Verification Service (TVS) comes from a company called NEC. It compares your live photo with a gallery of “templates,” or mathematical representations, generated from images that people have shared with the federal government for travel purposes, such as passport or visa photos. Customs and Border Protection (CBP) has also offered TVS to airlines for processes like boarding. If the TVS matching method fails, passengers are redirected to CBP officers for a secondary inspection.
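The matching step can be pictured as comparing fixed-length lists of numbers. The sketch below is purely illustrative — NEC’s actual algorithm is proprietary — but it shows the general idea: treat each “template” as a numeric vector, score the live photo’s vector against each gallery entry, and fall back to a manual inspection when no score clears a threshold. All names and values here are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two equal-length numeric vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_traveler(live_vector, gallery, threshold=0.9):
    """Return the ID of the best-matching template, or None —
    which, in the airport workflow, would send the traveler
    to an officer for a secondary inspection."""
    best_id, best_score = None, threshold
    for template_id, template in gallery.items():
        score = cosine_similarity(live_vector, template)
        if score >= best_score:
            best_id, best_score = template_id, score
    return best_id

# Hypothetical gallery built from passport and visa photos.
gallery = {
    "passport-123": [0.9, 0.1, 0.3],
    "visa-456": [0.2, 0.8, 0.5],
}

print(match_traveler([0.88, 0.12, 0.31], gallery))  # matches "passport-123"
print(match_traveler([0.0, 1.0, 0.0], gallery))     # None: secondary inspection
```

Real systems use vectors with hundreds of dimensions produced by a trained neural network, but the match-or-escalate logic is the same.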

“This stuff is never going to be perfect, and the most important thing is what you do when it messes up,” says Thomas P. Keenan, a computer scientist at Canada’s University of Calgary and author of Technocreep: The Surrender of Privacy and the Capitalization of Intimacy.

While it may seem like the government has suddenly taken on the role of biometric Big Brother, U.S. residents and visitors have submitted biometric data since the early 20th century — though it took the form of “soft” biometrics such as hair and eye color, along with weight and height.

But this iteration brings a significantly higher degree of technological sophistication and, as critics point out, your highly detailed face scan could potentially be abused by corporations, government agencies or hackers.

Privacy Concerns 

While facial recognition scans at airports are technically optional for U.S. citizens (but not foreign nationals), a 2020 report by the U.S. Government Accountability Office found that CBP has “not consistently provided complete information in privacy notices or ensured notices were posted and visible to travelers.”

“If you want to get meaningful consent, then you do need to at least publicize what you’re doing and have clear signs and labels,” says Matthew Kugler, an associate professor of law at Northwestern University who has researched biometric privacy and cybercrime. The government should also promptly inform passengers how they can opt out, he adds.

And although proponents of biometric security screenings commonly point to their high degree of accuracy, such percentages can be misleading. In 2017, Senators Edward Markey and Mike Lee pointed out that, even with a 96 percent accuracy rate, the technology would still falsely flag one in 25 travelers. The process currently matches correctly over 98 percent of the time, according to a CBP spokesperson.
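The senators’ arithmetic is straightforward: the fraction of travelers falsely flagged is one minus the accuracy rate, so a small-sounding error rate becomes concrete when expressed as “one in N” people:

```python
def travelers_per_false_flag(accuracy):
    """Roughly one in N travelers is falsely flagged when the system
    is correct at the given rate (accuracy as a fraction, e.g. 0.96)."""
    return round(1 / (1 - accuracy))

print(travelers_per_false_flag(0.96))  # 25 — one in 25 at 96 percent
print(travelers_per_false_flag(0.98))  # 50 — one in 50 at 98 percent
```

Even at CBP’s reported 98 percent, that arithmetic implies a false flag for roughly one of every 50 travelers scanned.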

But any errors could disproportionately harm people of color: Facial recognition algorithms may deliver false positives up to 100 times more frequently for the faces of Asian and Black people than those of white people, according to a 2019 paper by the National Institute of Standards and Technology. 

It’s also hard to tell where our data goes after we depart. In 2018, no airlines or airport authorities had told CBP that they planned to retain the biometric data they independently collect for other purposes. But as of May 2020, CBP had investigated only one of its more than 20 airline partners regarding long-term data usage. It’s unclear whether the agency has since conducted any audits, and it has not yet responded to Discover’s question.

As for CBP’s own biometric records, all photos are deleted from the agency’s cloud platform within 12 hours. But non-citizens’ images are transferred to a threat-monitoring system for up to 14 days, and CBP can keep photos in a broader database for up to 75 years. While the government can already access many foreign nationals’ fingerprints and photos, as Kugler points out, improved facial recognition represents a significant advance in targeting undocumented people.

“Immigration enforcement is run out of Homeland Security, which is also the agency in charge of securing our airports,” Kugler says. “We’re already in the right agency, and in a way you could say it’s merely more effectively enforcing the laws we already have … but it’s perhaps too effective.”

Even if an entity claims to have deleted someone’s photo from a facial recognition system, it could still theoretically access a hash, an algorithm-derived number that could be used to retrieve the image, Keenan points out. But DHS claims the numbers created from travelers’ images can’t be reverse-engineered to do so.
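DHS hasn’t published the details of its scheme, but the general idea of a one-way derived number can be illustrated with a cryptographic hash. The digest below can confirm that two images are identical, and could serve as a lookup key into a database that still holds the photo, yet the digest alone cannot reconstruct the image — which is the distinction between Keenan’s concern and DHS’s claim:

```python
import hashlib

def photo_fingerprint(image_bytes):
    """Derive a fixed-length number (hex digest) from raw image bytes.
    The same input always yields the same digest, but the digest by
    itself cannot be run backward to recover the image."""
    return hashlib.sha256(image_bytes).hexdigest()

original = b"raw image bytes stand in for a real photo here"

# Identical inputs always produce identical digests...
print(photo_fingerprint(original) == photo_fingerprint(original))        # True
# ...while changing even one byte yields a completely different digest.
print(photo_fingerprint(original) == photo_fingerprint(original + b"!")) # False
```

Whether a stored digest is harmless therefore depends less on the math than on what else the operator keeps: a digest paired with a retained photo archive still functions as an index into that archive.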

DHS will soon store its biometric data on Amazon Web Services’ GovCloud, along with that of agencies such as ICE, the Department of Defense and the Central Intelligence Agency. DHS can technically share sensitive biometric information with other government entities, according to its 2020 report. The agency already works with the departments of Justice and State on the controversial Automated Targeting System, which uses facial recognition to single out passengers perceived as threats.

Law enforcement officials have already abused people’s facial scans to identify them at a political protest. It’s been well-documented that police use Clearview AI software, which scrapes people’s data from social media, to do just that. DHS works with Clearview on “border and transportation security,” GAO noted in a 2021 paper. But the software isn’t used specifically for airport entry-exit programs, a CBP spokesperson told BuzzFeed last year. 

CLEAR, meanwhile, states on its website that the company saves biometric data collected at airports, stadiums and other venues and utilizes it beyond the purposes of authenticating over 5 million users’ identities. It may even share such data for marketing purposes, according to reporting by OneZero, and aims to serve as a personal identifier when customers use their credit and insurance cards, along with other common interactions.

Regardless of how they use your data, both public and private forces are vulnerable to cyber attacks. Government contractors, in particular, have exposed sensitive information in the past: In May 2019, CBP experienced a data breach in which hackers stole thousands of license-plate images and ID photos from a subcontractor who wasn’t technically authorized to hold onto that information.

Such concerns have prompted cities to ban facial recognition technology to varying degrees. This year, Portland forbade the surveillance software “in places of public accommodation” — an ordinance that technically prohibits the practice at airports. Similar legislation in Boston, San Francisco, and Oakland, California, applies only to certain local government offices.

In the future, Keenan wouldn’t be surprised if airports employ biometric screening methods that might seem dystopian today. Researchers are currently looking into techniques that analyze characteristics including people’s facial expressions, walking patterns, and even odor. Eventually, security checkpoints could even analyze a person’s brain waves, Keenan notes. Airports have tried invasive security tactics before: He cites the “nude scanners” that were phased out in 2013.

“I have no question that some researcher somewhere … is thinking, ‘Are there brain wave [machines] we can get?’” Keenan says. “I can certainly see having this technology and deploying it in airports and people accepting it because they’re going to go, ‘I want to be safe when I fly. I don’t care if they read my brain.’”