Project Details

Investigating multimodal interaction in storytelling

Subject Area Individual Linguistics, Historical Linguistics
Term from 2019 to 2022
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 414153266
 
Conversational storytelling interaction is inherently multimodal. Participants in storytelling use a "'bundle' of interacting behavioural events or non-events from different communicational subsystems (or 'modalities') simultaneously transmitted and received as a single (usually auditory-visual) impression" (Crystal 1969: 97), and their "verbal and nonverbal behavior forms a unified whole" (Arndt & Janney 1987: 4).

The proposed project aims to investigate multimodality in storytelling interaction based on an interdisciplinary conception that integrates corpus-linguistic, discourse-analytic, and conversation-analytic approaches to multimodal storytelling and capitalizes on their synergies. The integration will be implemented in an innovative multimodal corpus, the Storytelling Interaction Corpus. The corpus comprises data from video recordings of spontaneous conversational interaction to be conducted in cooperation with Birmingham City University (BCU).

Unlike corpus-linguistic multimodal corpora that use only orthographic transcription, the data will be transcribed in accordance with conversation-analytic (Jeffersonian) conventions to ensure rich multimodal detail in the transcripts. The conversation-analytic transcripts will be converted into XML transcripts using XTranscript, a piece of software developed specifically for this project in collaboration with BCU. In addition to conversation-analytic transcription and annotation, the data will receive corpus-linguistic annotation in the form of part-of-speech tagging to capture the morpho-syntactic functions of verbal actions, as well as discourse-analytic annotation to capture discourse structure, discourse presentation, and discourse roles. Further, in collaboration with the Chair of German Linguistics, Professor Auer, at Freiburg University, quantitative data on gaze behavior will be collected using eye-tracking technology and integrated into the XML.
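Purely as an illustration of the layered-annotation idea, and not the project's actual corpus schema, a single transcribed utterance combining speaker and timing metadata, part-of-speech-tagged tokens, and a time-aligned gaze annotation might be modelled in XML roughly as follows (every element name, attribute name, and tag label below is a hypothetical assumption):

```python
import xml.etree.ElementTree as ET

# Hypothetical record for one transcribed utterance; element and
# attribute names are illustrative, not the real corpus schema.
u = ET.Element("u", {"who": "SP01", "start": "12.48", "end": "14.02"})

# Token layer: orthographic form plus a part-of-speech tag
# (the tag labels are invented for illustration).
for form, pos in [("so", "RR"), ("she", "PPHS1"), ("goes", "VVZ")]:
    w = ET.SubElement(u, "w", {"pos": pos})
    w.text = form

# Gaze layer: a time-aligned annotation derived from eye-tracking.
ET.SubElement(u, "gaze", {"target": "SP02", "start": "12.60", "end": "13.90"})

xml_string = ET.tostring(u, encoding="unicode")
print(xml_string)
```

Keeping each annotation layer as its own set of elements and attributes is what later allows queries to combine layers, e.g. retrieving only those utterances in which a particular verbal action co-occurs with a gaze shift.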
The XML structure of the corpus provides the foundation for the statistical examination of multimodal storytelling practices. Using the XML query languages XPath and XQuery, relevant data of unlimited size and complexity can be addressed and extracted (cf. Rühlemann et al. 2015).

Based on this interdisciplinary approach, the applicant intends to investigate multimodal storytelling interaction by addressing innovative research questions that the three disciplines corpus linguistics, discourse analysis, and conversation analysis cannot adequately address in isolation. These questions relate to the multimodal resources involved in the suspension of ordinary turn-taking, constructed dialog, story progression, recipient turn-taking, and climax-projection design.
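As a minimal sketch of the kind of cross-layer extraction XPath makes possible, the example below queries a tiny invented transcript for tokens with a given part-of-speech tag and for utterances containing a gaze annotation. The markup and tag labels are hypothetical, and Python's standard-library `xml.etree.ElementTree` is used in place of a full XPath/XQuery engine (it supports only a subset of XPath):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in corpus; the markup and tag labels are hypothetical.
corpus = ET.fromstring("""
<transcript>
  <u who="SP01"><w pos="RR">so</w><w pos="VVZ">goes</w></u>
  <u who="SP02"><w pos="UH">mm</w></u>
  <u who="SP01"><w pos="VVZ">says</w><gaze target="SP02"/></u>
</transcript>
""")

# All tokens carrying a given POS tag, across the whole corpus.
hits = corpus.findall(".//w[@pos='VVZ']")

# All utterances that contain a gaze annotation.
gaze_utts = corpus.findall(".//u[gaze]")

print(len(hits), len(gaze_utts))
```

In a real XQuery setting the same predicates could be combined with aggregation and grouping, which is what makes queries over "data of unlimited size and complexity" feasible.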
DFG Programme Research Grants
International Connection United Kingdom
Cooperation Partners Professor Dr. Peter Auer; Dr. Matt Gee
 
 
