Can You Protect a Photo from Facial Recognition?

10th May, 2019 | Biometric Privacy | Entropic

Photo by Rye Jessen on Unsplash

In a previous article, we discussed how governments & the private sector are amassing photos of individuals from a diverse array of sources into "facial profiles", for the purpose of building and tuning better facial recognition systems. A facial profile consisting of one or more raw, unobfuscated photos of an individual can be used to identify that individual under varying conditions, for instance as a bystander in a YouTube video or on a public CCTV feed.

Your photos are now a commodity. More than 100 commercial vendors now offer facial recognition systems. Many of them are focused on building global databases of people, containing photos and other personal information gathered from open sources such as YouTube, LinkedIn & Facebook.

Each additional photo that you post or share online becomes subject to collection by vendors & governments for the purpose of building better facial recognition systems, with few effective regulations governing how this unique, immutable biometric information is handled.

Pixelation & Blurring are No Longer Enough

Advances in facial recognition are rendering older methods of anonymizing photos, such as pixelation and blurring, ineffective. It is no longer sufficient to pixelate or blur a photo: AI is catching up.

There are now emerging methods of reconstructing faces in photos by artificially rebuilding the fidelity of photos that have undergone this type of transformation. One interesting example is EnhanceNet, developed by researchers at the Max Planck Institute for Informatics in Saarbrücken, Germany, which can artificially reconstruct photos that have undergone facial pixelation.

Photo by Azamat Zhanisov on Unsplash

This method depends on a face having been anonymized using the traditional pixelation & blurring filters that have been available on social networking and video sharing sites, and in software applications, for many years. EnhanceNet works by rebuilding realistic textures into an obfuscated photo, based on data from the original pixelated photo coupled with a massive set of "training photos".
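To see why this kind of reconstruction is possible at all, it helps to look at what traditional pixelation actually does: it divides the image into blocks and replaces each block with its average colour. The sketch below (plain NumPy, on a toy grayscale image rather than a real face) is our own minimal illustration, not EnhanceNet's code. The point is that the "mosaic" is a deterministic, information-bearing transform: the block means still encode the coarse structure of the original, which is exactly the signal that a model trained on many faces can re-texture into a plausible reconstruction.

```python
import numpy as np

def pixelate(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Replace each block x block tile with its mean value -
    the classic 'mosaic' anonymization filter."""
    h, w = img.shape[:2]
    out = img.copy().astype(float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1))
    return out

# A toy 16x16 grayscale "face": a bright diagonal on a dark background.
face = np.eye(16) * 255
mosaic = pixelate(face, block=8)

# The mosaic no longer shows the diagonal, but the block means still
# reveal where the bright structure was: the two blocks the diagonal
# crosses are brighter than the two it misses. That residual signal is
# what reconstruction models exploit.
print(mosaic[0, 0], mosaic[0, 15])
```

Blurring behaves the same way: it is a linear filter, so the low-frequency content of the face survives, and a sufficiently well-trained model can hallucinate the rest.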

Emerging methods of person recognition, such as joint research between the University of Texas at Austin and Cornell University, depend less on faces specifically to identify subjects in photos, and more on their surrounding features. They can therefore detect individuals whose faces have been anonymized by the tools commonly available on social networking and video sharing services.

To achieve this, however, these methods must have a reference set of raw, unobfuscated training photos of an individual, so that a neural network can compare whole-body features in the reference set against an obfuscated photo.

Mitigating partial and whole-body profiling, and subsequently this type of reverse engineering, means being more mindful about releasing unprotected photos of ourselves. Such photos can be used to build profiles that quickly train AI to detect individuals even when their faces have been anonymized.

So, is it still realistic to safely share photos with others online, or is that now a pipe dream?

If we are to continue using the Internet as a place for sharing photos, we need a way to transform our raw digital photographs before we share them, "immunizing" them against analysis & profiling by machines. Ideally, this anonymized photo would retain as much of the original's fidelity as possible, so it can still be appreciated by people - as originally intended.

Humans still have the best senses when it comes to recognition, but machines are catching up. We must continually home in on how machine-based facial recognition systems identify individuals, and apply the necessary corrective actions (anonymizations) to the portions of the photo that are machine-recognizable.

We must also continually strive to ensure that the methods we use to anonymize facial features in a photo cannot be reverse engineered.

Keeping Tabs on The Evolution of Facial Recognition


An emerging privacy challenge with protecting shared photos in this way is knowing where the demarcation between machine and human capabilities lies - how good machines are versus humans - at any one time.

Due to the now heavy competition in the facial recognition solutions space, vendors must constantly improve their recognition capabilities, both to better detect people under different visual scenarios and to outperform their competitors. As a result, we must also understand the demarcation between machine and machine capabilities - how good competing vendors are across different nations - since nations have jurisdiction over the companies based within their borders, and vendors can be compelled to cooperate with their own governments.

Defending Against Facial Recognition

Defending against facial recognition, and in fact defending against the misuse of artificial intelligence in other fields, is an iterative process. We need to continuously revalidate existing methods of anonymization, against what current technologies are capable of.

Once we understand the dependencies of a given facial recognition algorithm - the methods by which facial identification & comparison take place - we can more efficiently target and anonymize the facial features (landmarks) of a photo that it depends upon, while minimizing damage to the fidelity of the original photograph.

While anonymizing a photo, for the sake of privacy & reliability, we would ideally detect facial features using machine-based facial identification technologies that run on the user's device. These technologies should not rely on the cloud to do their job, and thus won't bleed telemetry - or leak entire photographs - while identifying facial features within a photo.
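The targeted, on-device step described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the implementation of any particular product: we assume a hypothetical local detector has already returned bounding boxes for the machine-recognizable regions (the `(top, left, height, width)` format is our invention), and we overwrite only those regions with random noise. Unlike pixelation or blurring, noise fill retains no trace of the original pixel values, so there is nothing for a reconstruction model to invert - at the cost of visibly damaging those regions.

```python
import numpy as np

def anonymize_regions(img: np.ndarray, boxes, rng=None) -> np.ndarray:
    """Irreversibly destroy the pixels inside each detected facial
    region by overwriting them with uniform random noise.
    `boxes` is a list of (top, left, height, width) tuples - a
    hypothetical output format for an on-device face detector."""
    rng = rng or np.random.default_rng()
    out = img.copy()
    for top, left, h, w in boxes:
        out[top:top + h, left:left + w] = rng.integers(
            0, 256, size=(h, w) + img.shape[2:], dtype=img.dtype)
    return out

# Toy 32x32 RGB photo with one detected "face" region at (8, 8), 16x16 px.
photo = np.full((32, 32, 3), 128, dtype=np.uint8)
protected = anonymize_regions(photo, [(8, 8, 16, 16)])

# Everything outside the detected region is untouched, preserving the
# photo's fidelity for human viewers; only the machine-recognizable
# region is destroyed.
assert np.array_equal(protected[:8], photo[:8])
```

In practice one would replace the noise fill with a transformation that is less visually destructive but still non-invertible; the structure - local detection, then surgical anonymization of only the dependent landmarks - is the point of the sketch.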

For convenience & usability, this process of "anonymization" should be as automatic as possible, since it's likely we have accumulated many photos on our devices over time. While we might have only one profile photo to share on a job hunting or conference site, we might have many to share on social media.

Finally, the form factor used to fortify photographs should be as efficient & portable as possible, so as to empower us to conveniently protect our photos when we need to. This will further help to mitigate the upload of unprocessed/raw photos, and thus help us to defend against this amassment & exploitation of our unique biometric features.

We have released the first in a series of anti-facial recognition technologies, that can help individuals defend their privacy by anonymizing their photos, making them resistant to machine-based analysis & facial recognition.

Conclusion

Individuals have the right to share information about themselves with other people without having to worry about how that information will be amassed, exploited, neglected, and ultimately end up in the wrong hands - human or machine. The continuous evolution of machine-based facial recognition will continue to fuel major advancements in cyberprivacy, and specifically in counter-biometric technologies.

We believe that these upcoming technologies are a productive step toward leveling the playing field in the evolving & largely unregulated field of biometrics, which currently lacks the legal and technological defenses needed to protect the privacy of individuals.

If you have any feedback, questions, or suggestions, please let us know.

Acknowledgements:
Photo by Rye Jessen on Unsplash
Photo by Azamat Zhanisov (modified) on Unsplash