Image recognition isn’t as powerful as you’d think, and some flaws can cause problems. Here’s an overview.

Image recognition is increasingly becoming part of our daily lives. It is traditionally associated with tasks such as detecting faces in an image or identifying the people in a photo.

But today, deep neural networks can also recognize every element in a scene and generate a caption for it. Take face recognition, for example. Two main machine learning applications analyze images containing faces: face detection and face comparison. A detection system is designed to answer the question: “Does this image contain a face? And if so, where is it?” A comparison system is designed to answer the question: “Does the face in one image match the face in another image?” It takes the image of a face and predicts whether that face matches other faces in a supplied database [1].
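
To make the distinction concrete, here is a minimal Python sketch of how the two questions differ. The face detector is only a placeholder and the embeddings in the usage example are made up (a real system would compute them with a trained model); only the comparison logic, a simple cosine-similarity match against a database of known faces, is spelled out.

```python
import numpy as np

def detect_faces(image):
    """Detection answers: 'Does this image contain a face? And if so, where is it?'
    Placeholder only -- a real detector returns one bounding box per face found."""
    raise NotImplementedError("plug in a real face detector here")

def compare_faces(probe_embedding, gallery, threshold=0.6):
    """Comparison answers: 'Does this face match a face in a supplied database?'
    `gallery` maps an identity to a reference embedding vector; a match is
    declared when cosine similarity exceeds `threshold`."""
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    matches = []
    for identity, reference in gallery.items():
        ref = reference / np.linalg.norm(reference)
        if float(probe @ ref) > threshold:
            matches.append(identity)
    return matches

# Usage with made-up embedding vectors, purely for illustration:
rng = np.random.default_rng(0)
alice = rng.standard_normal(128)
gallery = {"alice": alice, "bob": rng.standard_normal(128)}
print(compare_faces(alice + 0.1 * rng.standard_normal(128), gallery))  # ['alice']
```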

Regulation

Today, image recognition plays an increasingly important role in our daily lives, and according to Statista, 60% of French people trust artificial intelligence [2].

However, beyond their trust in the technology, many French people are concerned about how it will be used, particularly when it comes to the storage and possible resale of their personal data. This creates a paradox for manufacturers, since artificial intelligence needs that very data to function properly.

From a technical point of view, developing a face recognition algorithm in this context relies on libraries of face images. From a legal point of view, personal data protection regulations are relatively strict when it comes to processing biometric data: in principle it is prohibited, and may only be carried out if the user has given explicit consent or if the processing is necessary for reasons of public interest [3]. These regulations vary from country to country, and France has some of the strictest legislation. As a result, French manufacturers face a competitive disadvantage due to the absence of a legal framework that would let them test their solutions in real-life conditions on French soil.

Challenges

The other concern is that, despite the confidence-inspiring performance of these systems, it is still possible to fool them [4]. New techniques for tricking image and even sound recognition algorithms appear almost constantly, and each one exposes a flaw in the system’s operation. But for each attack, countermeasures can be put in place, triggering a quasi-Darwinian struggle between those who find the loopholes and those who close them.

The vulnerability of deep learning models was highlighted in 2013 in a paper entitled “Intriguing properties of neural networks” by Christian Szegedy et al. [5]. This type of vulnerability naturally fuels fears that slow adoption in highly regulated sectors, notably the automotive industry for driverless cars and healthcare for cancer diagnostics. Researchers at Cornell University have shown that modifying a single pixel can completely distort recognition; in their example, a boat could be recognized as a car.
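
As an illustration of how little it can take, here is a hedged sketch of a single-pixel attack. The published attack searches for the pixel with differential evolution; this simplified version just tries random single-pixel changes against an assumed `predict` function until the classifier’s prediction flips, which is enough to convey the idea.

```python
import numpy as np

def single_pixel_attack(image, predict, true_label, n_trials=1000, rng=None):
    """Try random single-pixel changes until the classifier stops predicting
    the true label. `predict` is assumed to map an HxWx3 float image in [0, 1]
    to a vector of class scores."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = image.shape
    for _ in range(n_trials):
        candidate = image.copy()
        y, x = rng.integers(h), rng.integers(w)
        candidate[y, x] = rng.random(3)           # overwrite one pixel's RGB value
        scores = predict(candidate)
        if int(np.argmax(scores)) != true_label:  # prediction flipped: attack found
            return candidate, (y, x)
    return None, None                             # no fooling pixel found in budget
```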

According to the same researchers, audio recordings can also be spoofed by applying a single disturbance, such as imperceptible background noise. Attacks of this kind clearly compromise the security of systems built on AI components, with potentially far-reaching consequences: driverless vehicles could be involved in accidents, illicit or illegal content could bypass content filters, or biometric authentication systems could be manipulated to allow unauthorized access (e.g. via the iPhone’s Face ID) [6].
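
One widely known recipe for such imperceptible perturbations is the fast gradient sign method (FGSM); it is not necessarily the method used in the cited work, but it shows how a tiny, bounded change computed from the model’s own gradients can flip a prediction. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, label, epsilon=0.005):
    """Fast Gradient Sign Method: add a small, near-imperceptible perturbation
    in the direction that most increases the model's loss.
    `x` is a batched input tensor (image, or audio features), `label` its class."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Each element of x moves by at most epsilon, yet the prediction can change.
    x_adv = (x + epsilon * x.grad.sign()).detach()
    return x_adv.clamp(0.0, 1.0)
```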

Technologies are being developed to trick an algorithm after it has successfully learned to recognize us. But some go even further, suggesting that we can prevent AI algorithms from learning to recognize us in the first place. To achieve this, recently released free software modifies our images in a way that is imperceptible to the naked eye, forcing the artificial intelligences that learn from them to focus on patterns other than those of our face. As a result, the trained system can no longer correctly identify that face in a new photograph. This software, called “Fawkes”, uses a technique known as “cloaking”. After processing by Fawkes, the images the AI algorithm learns from are said to be “poisoned”: instead of detecting the usual features that identify a person, the system focuses on other patterns added by the software [7]. The tool was developed by a team of researchers at the SAND Lab, University of Chicago, USA. It meets a need among the general public who, as mentioned above, are wary of applications that collect (and use) the personal data they share. With this software, the researchers hope to distort the databases and thus enable users to anonymize their photos.
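
Conceptually, cloaking can be thought of as an optimization that nudges the pixels of a photo so that a feature extractor maps it close to a different (“decoy”) identity, while keeping the change below a small, near-invisible budget. The sketch below only illustrates that idea and is not Fawkes’s actual implementation; the `embed` function and the parameter values are assumptions.

```python
import torch

def cloak_image(embed, image, decoy_image, steps=100, lr=0.01, budget=0.03):
    """Conceptual 'cloaking' sketch: nudge `image` so that the feature extractor
    `embed` maps it close to a different person's `decoy_image`, while keeping
    the pixel-level change within a small budget (roughly imperceptible)."""
    target = embed(decoy_image).detach()            # decoy identity's feature vector
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        cloaked = (image + delta).clamp(0.0, 1.0)
        loss = torch.norm(embed(cloaked) - target)  # pull features toward the decoy
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)           # keep the change imperceptible
    return (image + delta).detach().clamp(0.0, 1.0)
```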

To counter these offensives, developers have proposed attack/defense frameworks to strengthen model resistance. These are sets of techniques that operate on reference models and can be used with the most popular deep learning design tools: they test the system, analyze the feedback, and correct it, the aim being to help developers build more robust models. According to Patrick Grother, director of AI research [8], countering these AI vulnerabilities with a more diverse database has been shown to lead to better results. Several methods of defense currently exist, but none is yet fully satisfactory, even though they are constantly evolving.
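
One of the most common defenses of this kind is adversarial training: the attack is generated during training and folded back into the loss so that the model learns to resist it. A minimal PyTorch sketch of one such training step (the epsilon budget and the FGSM-style attack are illustrative assumptions, not a specific toolkit’s API):

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.01):
    """One step of adversarial training: craft perturbed copies of the batch,
    then fit the model on both the clean and the perturbed inputs."""
    # Craft adversarial copies with the same FGSM idea sketched above.
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    x_adv = (x_req + epsilon * x_req.grad.sign()).detach().clamp(0.0, 1.0)

    # Train on clean and adversarial inputs so the model resists the attack.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```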

Conclusion

Whether intentionally or not, image recognition AI can therefore sometimes be fooled. However, techniques exist to strengthen it. Manufacturers have understood this: their aim is to protect neural networks and make them more robust, both to ward off malicious attacks and to make AI performance more reliable, for a better user experience.

Tensions remain around the legal framework, notably concerning the use of user data. Governments need to strike a balance between measures that maintain user confidence and measures that encourage innovation. As things stand, everything is constantly evolving and improving, and when new flaws are detected, we either already have the solutions to counter them or the tools to build them.

  1. Présentation de la détection des visages et de la comparaison des visages – Amazon Rekognition
  2. Niveau de confiance des Français en l’intelligence artificielle 2019 | Statista
  3. Reconnaissance faciale : les enjeux éthiques et juridiques d’une technologie qui fascine et inquiète | IHEMI
  4. Des systèmes de reconnaissance faciale auraient été trompés par des masques et photos
  5. [1312.6199] Intriguing properties of neural networks
  6. Hacker les modèles de deep learning facilement – ActuIA
  7. Ce logiciel gratuit modifie vos photos pour que vous ne soyez pas identifié par reconnaissance faciale
  8. La reconnaissance faciale, pas si fiable