Defeat data poisoning attacks on facial recognition applications
In the modern era, facial photos are used for a wide array of applications, from logging into a smartphone to bragging about a weekend getaway. With so many use cases for facial images, adversaries will attack these applications for profit. This dissertation focuses on two major applications of facial photos: facial authentication and deepfakes.

Facial authentication has become increasingly popular on personal devices. Because of its ease of use, it has great potential to be widely deployed for web-service authentication in the near future, allowing people to log in to online accounts from different devices without memorizing lengthy passwords. However, the growing number of attacks targeting machine learning, especially the Deep Neural Networks (DNNs) commonly used for facial recognition, poses significant challenges to the successful roll-out of such web-service facial authentication. We demonstrate a new data poisoning attack, called replacement data poisoning, which requires no knowledge of the server side: with only a handful of malicious photo injections, an attacker can impersonate the victim in existing facial authentication systems. We then propose a novel defensive approach, called DEFEAT, that leverages deep learning techniques to automatically detect such attacks. Our experiments on real-world datasets achieve a detection accuracy of over 90 percent.

Deepfakes target specific individuals to cause shame or spread misinformation. With the rise of fake news, deepfakes have become incredibly prevalent in recent years. With deepfakes, an adversary can produce photographic or even videographic "proof" of someone, such as a politician, committing a devious act or saying untrue words. Our deepfake work consists of two parts. First, we propose a label flipping data poisoning attack targeting deepfake detectors.
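The core of a label flipping attack is simple: the adversary inverts the ground-truth labels (e.g., real vs. fake) of a fraction of the detector's training data. A minimal sketch, using hypothetical binary labels rather than the dissertation's actual datasets:

```python
import numpy as np

def flip_labels(labels, flip_fraction, rng):
    """Return a copy of `labels` with a random fraction of binary
    labels inverted (0 = real, 1 = fake), illustrating a label
    flipping data poisoning attack on a detector's training set."""
    labels = np.asarray(labels).copy()
    n_flip = int(len(labels) * flip_fraction)
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    labels[idx] = 1 - labels[idx]  # invert the chosen labels
    return labels

rng = np.random.default_rng(0)
clean = np.array([0, 1] * 50)            # 100 training labels
poisoned = flip_labels(clean, 0.2, rng)  # flip 20% of them
print((clean != poisoned).sum())         # 20 labels were flipped
```

A detector trained on the poisoned labels learns to misclassify exactly the kinds of inputs the attacker cares about, which is why even a modest flip fraction can be devastating.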
With over a 99 percent poison success rate in most cases, this attack demonstrates the devastating effect data poisoning can have on deepfake detectors and how critical it is to defend against this assault. Our second contribution revolves around defending deepfake detectors from such an attack. We propose several defense strategies, most notably a convolutional neural network (CNN) based strategy to detect poisoned images. Our CNN-based approach achieves a poison detection rate of over 98 percent while keeping false positives to a minimum, with a precision of over 99 percent in most cases.
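As a concrete reading of the two reported metrics, here is a minimal sketch of how a poison detector's detection rate (recall) and precision are computed; the prediction arrays are hypothetical, not results from the dissertation:

```python
import numpy as np

def detection_metrics(is_poisoned, flagged):
    """Detection rate (recall) and precision for a poison detector.
    `is_poisoned`: ground-truth mask over training images.
    `flagged`: mask of images the detector marked as poisoned."""
    is_poisoned = np.asarray(is_poisoned, dtype=bool)
    flagged = np.asarray(flagged, dtype=bool)
    tp = np.sum(flagged & is_poisoned)    # poisoned images caught
    fp = np.sum(flagged & ~is_poisoned)   # clean images mis-flagged
    fn = np.sum(~flagged & is_poisoned)   # poisoned images missed
    detection_rate = tp / (tp + fn)
    precision = tp / (tp + fp)
    return detection_rate, precision

# Hypothetical run: 10 images, first 4 poisoned; detector flags 4,
# catching 3 of the poisoned images and mis-flagging 1 clean image.
truth   = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
flagged = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
rate, prec = detection_metrics(truth, flagged)
print(rate, prec)  # 0.75 0.75
```

High precision matters here because every false positive removes a legitimate image from the training set.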