Project Details
Privacy-preserving Audio Features for Clustering and Classification in Acoustic Sensor Networks
Applicant
Professor Dr.-Ing. Rainer Martin
Subject Area
Electronic Semiconductors, Components and Circuits, Integrated Systems, Sensor Technology, Theoretical Electrical Engineering
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
from 2016 to 2023
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 282835863
The ubiquitous use of portable smart devices has led to a wide dissemination of acoustic sensors. Obviously, the presence of these sensors and the machine learning algorithms serving them poses a high risk to privacy, especially when the sensors are connected in an acoustic sensor network (ASN). In this case, the privacy concerns could be eased if the sensors were designed to fulfill their assigned task with maximum performance but without conveying more information than necessary. This requires a good balance between the performance of the assigned task (utility) and the restriction of the amount of task-extraneous information that is revealed (privacy).

In this second phase of the research unit "Acoustic Sensor Networks" we will further investigate the aforementioned balance between utility and privacy in the context of clustering, classification and enhancement tasks in ASNs. While the basic proof of concept was established in the first phase, we will now focus on challenging real-world scenarios including speech, music and noise sources. This means handling network node clustering under dynamic network configurations and performing complex classification and detection tasks in multi-source scenarios while considering network-wide utility and privacy measures. We will thus expand and deepen our functional understanding of the behavior and limitations of privacy-preserving feature representations.

Based on the results of the first project phase, we will further investigate deep neural network-based feature extraction approaches (e.g., adversarial, variational information, and siamese networks), which turned out to be highly successful tools for obtaining audio features for classification under privacy constraints. In addition, we will place emphasis on the practically important case of feature extraction in the presence of speech signals, where we will derive features, e.g., for the classification of noise sources in smart home applications, that obfuscate speech information and thus preserve privacy.
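To make the adversarial feature-extraction idea concrete, the following minimal sketch shows one common way such a utility-privacy trade-off can be trained: an encoder feeds a task classifier (utility) and, through a gradient-reversal layer, an adversary that tries to predict a private attribute (e.g., speaker identity). This is an illustrative sketch only, not the project's implementation; it assumes PyTorch, and all layer sizes, labels, and the weighting factor lam are hypothetical placeholders.

```python
# Illustrative sketch (not the project's method): adversarial training of an audio
# feature extractor that keeps task-relevant information (e.g., noise-source class)
# while obfuscating a private attribute (e.g., speaker identity).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class PrivacyPreservingExtractor(nn.Module):
    def __init__(self, in_dim=40, feat_dim=32, n_task=10, n_private=100, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
        self.task_head = nn.Linear(feat_dim, n_task)    # utility branch
        self.adv_head = nn.Linear(feat_dim, n_private)  # privacy adversary branch

    def forward(self, x):
        z = self.encoder(x)
        task_logits = self.task_head(z)
        adv_logits = self.adv_head(GradReverse.apply(z, self.lam))
        return task_logits, adv_logits

# Toy training step on random data standing in for frame-level audio features.
model = PrivacyPreservingExtractor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(16, 40)               # e.g., log-mel frames (hypothetical)
y_task = torch.randint(0, 10, (16,))  # utility label, e.g., noise-source class
y_priv = torch.randint(0, 100, (16,)) # private label, e.g., speaker identity

task_logits, adv_logits = model(x)
# The encoder minimizes the task loss; the reversed gradient simultaneously pushes it
# to increase the adversary's loss, i.e., to hide the private attribute in the features.
loss = ce(task_logits, y_task) + ce(adv_logits, y_priv)
loss.backward()
opt.step()
```

The factor lam controls how strongly speech- or speaker-related information is suppressed relative to task performance, which is one simple way to operationalize the utility-privacy balance described above.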
DFG Programme
Research Units
Subproject of
FOR 2457:
Acoustic Sensor Networks
Co-Investigator
Professor Dr.-Ing. Reinhold Häb-Umbach