Project Details
Machine Learning and Digital Watermarking in Adversarial Environments
Applicant
Professor Dr. Konrad Rieck
Subject Area
Security and Dependability, Operating-, Communication- and Distributed Systems
Term
from 2017 to 2022
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 393063728
Machine learning algorithms are increasingly used in security-critical applications, such as the detection of malicious software or the control of autonomous vehicles. In these applications, it is crucial that the employed algorithms cannot be evaded or deceived by an adversary. Unfortunately, most learning algorithms are not robust against attacks, and in recent years the research field of adversarial machine learning has been established to develop novel attack and defense mechanisms for machine learning.

Concurrent to this work, the research area of digital watermarking has tackled similar problems. Digital watermarking aims at marking media, such as images and audio, such that the watermark cannot be removed or extracted by an adversary. Although the research goals of machine learning and watermarking are fundamentally different, there are surprising parallels in the corresponding attack strategies. In both areas, the adversary aims at evading a detection system: (a) in the case of machine learning by deceiving a classifier and (b) in the case of digital watermarking by rendering the watermark undetectable.

So far, this similarity between the two areas has not gained attention in the research communities, and it is the goal of this project to systematically study, formalise, and join research concepts where possible. Based on a formal framework, attacks as well as defenses from one research area shall be transferred to the other and vice versa. This shall enable the development of novel security mechanisms in both areas and initiate new directions of joint research.
DFG Programme
Research Grants