Project Details
Development of a Synthesis Model for Whole-body Vibrations for Multimodal Vehicle Scene Presentation
Applicant
Professor Dr.-Ing. Ercan Altinsoy
Subject Area
Acoustics
Term
from 2016 to 2021
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 318948524
Humans perceive the world in a multimodal way. Future reproduction systems have to take this into account by addressing all modalities in order to immerse the user in the virtual environment. The main focus of such systems has to be the user's perception. Therefore, it is mandatory that the stimuli presented for each modality are integrated into a plausible scene. The plausibility illusion refers to the content of the presented virtual scene and is evoked when the scene matches the user's expectations. The evaluation of plausibility is closely related to the perceptual process of categorization, and there is evidence that categorization is influenced by language. The better a scene matches the expected properties (e.g. "shaky") of a scene category (e.g. "driving on a cobblestone road"), the more plausible it should appear.

Building on a known approach for the auditory modality, this research project aims at developing a synthesis model for the tactile modality that allows plausible scenes to be created from the user's expectations of any scene using whole-body vibrations. First, a method for measuring users' expectations needs to be developed. To this end, everyday vibrations, such as those occurring in vehicles, need to be analyzed for prototypical signal patterns (sinusoidal signals, amplitude-modulated signals, etc.). Subsequently, these signal patterns will be presented to test subjects and the most important perceptually relevant descriptors will be determined. Afterwards, these descriptors will be examined for their suitability for describing whole-body vibrations. In the next step, these signal-describing descriptors will be checked for independence. Starting from this reduced vector of perceptual attributes, a synthesis model can be created. For the verification and iterative improvement of the model, real scenes in vehicles will be perceptually evaluated as described above and compared to synthesized whole-body vibrations in a multimodal context. The influence of the individual modalities on the vector of perceptual attributes will also be examined.

The synthesis model would simplify the design of tactile virtual realities: if it is sufficient to evoke certain perceptual attributes in the user, the vibratory playback system might be greatly simplified. The acquired knowledge could also be applied to the optimization of tactile user interfaces by evoking expected perceptual attributes.
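As an illustration of the prototypical signal patterns mentioned above, the following Python sketch generates a sinusoidal and an amplitude-modulated acceleration signal of the kind that could serve as test stimuli. All parameter values (frequencies, amplitudes, modulation depth, sampling rate) are illustrative assumptions and are not taken from the project.

import numpy as np

def sine_vibration(freq_hz, amp_ms2, duration_s, fs=1000):
    """Pure sinusoidal acceleration signal (one prototypical pattern)."""
    t = np.arange(0, duration_s, 1.0 / fs)
    return t, amp_ms2 * np.sin(2 * np.pi * freq_hz * t)

def am_vibration(carrier_hz, mod_hz, mod_depth, amp_ms2, duration_s, fs=1000):
    """Amplitude-modulated sinusoid (another prototypical pattern)."""
    t = np.arange(0, duration_s, 1.0 / fs)
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_hz * t)
    return t, amp_ms2 * envelope * np.sin(2 * np.pi * carrier_hz * t)

if __name__ == "__main__":
    # Illustrative parameter choices, not values prescribed by the project
    t, x_sine = sine_vibration(freq_hz=8.0, amp_ms2=0.5, duration_s=5.0)
    t, x_am = am_vibration(carrier_hz=30.0, mod_hz=4.0, mod_depth=0.8,
                           amp_ms2=0.5, duration_s=5.0)
    print(f"RMS of sinusoidal signal: {np.sqrt(np.mean(x_sine**2)):.3f} m/s^2")
    print(f"RMS of AM signal:         {np.sqrt(np.mean(x_am**2)):.3f} m/s^2")

Such parameterized patterns could then be reproduced on a vibration platform and rated by test subjects in order to derive the perceptually relevant descriptors described above.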
DFG Programme
Research Grants