
Facebook Shuts Down Facial Recognition Feature

Facebook Will Delete More Than 1 Billion Facial Profiles

Facebook plans to shut down its facial recognition system, citing an unclear regulatory landscape and ongoing concerns about such systems' effects on society.


The feature has been used for more than a decade to automatically identify faces in photos. Since 2019, however, users have had to opt in to the system, and more than one-third of Facebook users did. In all, Facebook has collected more than 1 billion facial profiles, which it says it will delete.

"There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use," writes Jerome Pesenti, vice president of artificial intelligence for Meta, which is Facebook's parent company. "Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate."

The facial recognition feature notified people when they appeared in a photo. It also enabled the Automatic Alt Text feature, which generates photo descriptions for visually impaired users, to name the people in those descriptions.

EFF: Thumbs Up

But the privacy implications of facial recognition technology have alarmed civil liberties advocates. Also, law enforcement agencies' use of facial recognition has raised questions of racial bias and erroneous identifications.

Facebook's move was welcomed by the Electronic Frontier Foundation, which has advocated for a ban on government use of facial recognition technology. Facebook's move "is just one very large domino in the continued fight against face recognition technology," writes Matthew Guariglia, an EFF policy analyst.

"Companies will continue to feel the pressure of activists and concerned users so long as they employ invasive biometric technologies like face recognition," Guariglia writes. "This is especially true for corporate systems that process users’ biometrics without their freely given opt-in consent, or that store the data in ways that are vulnerable to theft or easily accessible to law enforcement."

Facebook's facial recognition technology was the subject of a long-running class action lawsuit that was resolved in January 2020, when Facebook agreed to pay $550 million, the largest cash settlement ever in a privacy lawsuit.

The suit contended that Facebook collected users' biometric faceprints without their consent for its photo face-tagging feature. Facebook collected biometric face data automatically until 2019, when it announced that it would no longer do so by default.

Illinois' Biometric Information Privacy Act, which took effect in 2008, requires companies to tell people if their biometric data is being collected, why it is being collected and how long it will be stored and used, and to obtain written permission before sharing that data with third parties (see: Facebook Settles Facial Recognition Lawsuit for $550 Million).

Banned by Cities

One of the most alarming developments involving facial recognition technology was the rise of Clearview AI. The company's software lets a user upload a photo of a person and returns publicly available photos of that person, along with links to where those photos appeared.

Clearview amassed a database of some 3 billion photos scraped from services such as Facebook, YouTube and Venmo, according to The New York Times. Hundreds of law enforcement agencies and private companies used the system.

But Clearview's reliance on scraped photos raised questions about consent and prompted action by lawmakers. In August 2020, U.S. Sens. Jeff Merkley and Bernie Sanders introduced the National Biometric Information Privacy Act of 2020, which would require permission to collect biometric data, ban the sale of that data and require that its privacy and security be maintained.

Several cities and states have also banned the use of facial recognition technology. Vermont banned its use by law enforcement agencies in October 2020, and cities including Baltimore, San Francisco, Berkeley, Boston and Portland have restricted its use, according to a tracker maintained by Fight for the Future.


About the Author

Jeremy Kirk

Executive Editor, Security and Technology, ISMG

Kirk was executive editor for security and technology for Information Security Media Group. Reporting from Sydney, Australia, he created "The Ransomware Files" podcast, which tells the harrowing stories of IT pros who have fought back against ransomware.



