Project Details
Learning to Sample for Visual Computing
Applicant
Professor Dr. Rüdiger Westermann
Subject Area
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term
since 2021
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 413611294
The overarching goals of the collaborative research unit are to understand which generic features of different instances of an object class a network needs in order to learn to recognize such objects, and to apply this understanding to generate synthetic training data on which a neural network can be trained to perform classification and reconstruction tasks effectively. The group of PI Westermann focuses in particular on aspects related to network analysis and rendering, aiming at an improved understanding of how the features needed to generate task-specific visual representations can be learned, and how feature-aware neural model representations can be generated. These goals shall be achieved by developing a network-based processing pipeline that is trained end-to-end to learn neural feature descriptors. Instead of learning explicit feature descriptors or the data itself, the pipeline directly learns the visual representations required for network-based inference tasks. It should eventually be used to synthesize training data for domain transfer.
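To illustrate the underlying idea of end-to-end training in which feature descriptors emerge implicitly from a downstream task loss rather than being specified explicitly, the following minimal PyTorch sketch shows an encoder and a task head optimized jointly. All module names, dimensions, and data are illustrative placeholders, not the project's actual pipeline.

```python
# Minimal sketch (illustrative only): an encoder and a task head trained
# end-to-end, so the intermediate feature representation is shaped by the
# task loss rather than defined as an explicit, hand-crafted descriptor.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Maps images to a latent feature descriptor; never supervised directly."""
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feature_dim),
        )

    def forward(self, x):
        return self.backbone(x)

class TaskHead(nn.Module):
    """Downstream task (here: classification) that drives the feature learning."""
    def __init__(self, feature_dim: int = 128, num_classes: int = 10):
        super().__init__()
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, z):
        return self.classifier(z)

encoder, head = FeatureEncoder(), TaskHead()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One end-to-end training step on stand-in data: gradients from the task loss
# flow back through the head into the encoder, shaping the feature descriptor.
images = torch.randn(8, 3, 64, 64)    # placeholder for (synthetic) training images
labels = torch.randint(0, 10, (8,))   # placeholder for task labels
loss = criterion(head(encoder(images)), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```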
DFG Programme
Research Units
Subproject of
FOR 2987: Learning and Simulation in Visual Computing