dc.contributor.advisorLin, Daneng
dc.contributor.authorCole, Dalton Russelleng
dc.date.issued2021eng
dc.date.submitted2021 Summereng
dc.description.abstractIn the modern era, facial photos are used for a wide array of applications, from logging into a smartphone to bragging about a weekend getaway. Given the vast number of use cases for facial images, adversaries will attack these applications for profit. This dissertation focuses on two major applications of facial photos: facial authentication and deepfakes. Facial authentication has become increasingly popular on personal devices. Due to its ease of use, it has great potential to be widely deployed for web-service authentication in the near future, allowing people to easily log on to online accounts from different devices without memorizing lengthy passwords. However, the growing number of attacks targeting machine learning, especially Deep Neural Networks (DNNs), which are commonly used for facial recognition, poses significant challenges to the successful roll-out of such web-service facial authentication. We demonstrate a new data poisoning attack, called replacement data poisoning, which requires no knowledge of the server side and needs only a handful of malicious photo injections to enable an attacker to impersonate the victim in existing facial authentication systems. We then propose a novel defensive approach called DEFEAT that leverages deep learning techniques to automatically detect such attacks. Our experiments using real-world datasets achieve a detection accuracy of over 90 percent. Deepfakes target specific individuals to cause shame or spread misinformation. With the spread of fake news, deepfakes have become incredibly prevalent in recent years. With deepfakes, an adversary could have photographic or even videographic "proof" of someone, such as a politician, committing a devious act or saying untrue words. Our deepfake work consists of two parts. First, we propose a label-flipping data poisoning attack targeting deepfake detectors.
With over a 99 percent poison success rate in most cases, this attack demonstrates the devastating effects a data poisoning attack can have on deepfake detectors and how important it is to defend against such assaults. Our second contribution revolves around defending deepfake detectors against this attack. We propose several defense strategies, most notably a convolutional neural network (CNN) based strategy to detect poisoned images. Our CNN-based approach achieves a greater than 98 percent poison detection rate while keeping false positives to a minimum, with a precision rate of over 99 percent in most cases.eng
dc.description.bibrefIncludes bibliographical references.eng
dc.format.extentxvi, 154 pages : illustrations (color)eng
dc.identifier.urihttps://hdl.handle.net/10355/90990
dc.identifier.urihttps://doi.org/10.32469/10355/90990eng
dc.languageEnglisheng
dc.publisherUniversity of Missouri--Columbiaeng
dc.titleDefeat data poisoning attacks on facial recognition applicationseng
dc.typeThesiseng
thesis.degree.disciplineComputer science (MU)eng
thesis.degree.levelDoctoraleng
thesis.degree.namePh. D.eng