Multimodal Constructional Space (GRK 2839 Project 2)

Third-Party Funds Group - Subproject


Acronym: GRK 2839 Project 2

Start date: 01.10.2022

End date: 30.09.2027

Website: https://www.cxg.phil.fau.eu/about-the-rtg/about-the-rtg-projects/project-2/


Overall project details

Overall project

GRK 2839: Die Konstruktionsgrammatische Galaxis, Oct. 1, 2022 - Sept. 30, 2027

Project details

Scientific Abstract

In face-to-face communication, speakers draw on multiple modalities to produce and interpret messages. This complex operation involves not only the verbal exchange of linguistic forms but also facial expressions, gestures, and prosody. In fact, a number of studies have shown that many gestures and linguistic forms systematically co-occur (Cienki, 2015; Ningelgen & Auer, 2017; Ziem, 2017; Zima, 2017b). On a cognitive, usage-based model, language learners and users keep track of usage events, and knowledge of language is continually shaped and re-shaped with each instance of use (Bybee, 2010). One of the challenges for linguistic theory is therefore to account for these multimodal phenomena.

The main focus of this project is on modeling multimodality within a Construction Grammar (CxG) framework (Goldberg, 1995, 2006, 2019). Over the past few years, several proposals have addressed these types of phenomena (Cienki, 2017; Herbst, 2020; Hoffmann, 2017; Mittelberg, 2017; Schoonjans, 2017; Turner, 2018, 2020a, 2020b; Uhrig, 2021; Ziem, 2017; Zima, 2017; Zima & Bergs, 2017). Still, various theoretical and practical questions remain open, among them the theoretical status of multimodal constructions, whether the constructicon itself is multimodal, and whether a distinct Multimodal Construction Grammar is needed.

Specifically, this project investigates three kinds of constructions in which multimodal phenomena are observed and uses them as case studies for understanding and modeling multimodality within a CxG framework: (i) cases in which a linguistic form systematically and frequently co-occurs with a gesture, as in “I came this close to 🤏 winning the lottery”; (ii) gestures that seemingly take on a syntactic role in the utterance; and (iii) gestures that show no association with particular linguistic forms but combine with them freely, such as air quotes (Uhrig, 2020). All three phenomena will be examined in corpus-based studies. The data will be extracted from the NewsScape English Corpus, a large repository of audio-visual data, and analyzed with tools such as CQPweb, the Red Hen Rapid Annotator, ELAN, and Praat (Uhrig, 2021).
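To make the corpus-based procedure more concrete, the following is a minimal sketch, not drawn from the project itself, of how co-occurrence between a verbal pattern and a gesture could be counted once both have been annotated on separate tiers (e.g. ELAN-style spans exported as start/end times in milliseconds). The tier contents, labels, and overlap threshold are hypothetical.

    from typing import List, Tuple

    Span = Tuple[int, int, str]  # (start_ms, end_ms, label), e.g. exported from an ELAN tier

    def overlaps(a: Span, b: Span, min_overlap_ms: int = 0) -> bool:
        """True if two annotation spans overlap in time by more than the threshold."""
        start, end = max(a[0], b[0]), min(a[1], b[1])
        return (end - start) > min_overlap_ms

    def count_cooccurrence(verbal: List[Span], gestures: List[Span]) -> Tuple[int, int]:
        """Return (tokens with an overlapping gesture, tokens without one)."""
        with_gesture = sum(1 for v in verbal if any(overlaps(v, g) for g in gestures))
        return with_gesture, len(verbal) - with_gesture

    # Hypothetical annotations for one video: tokens of "this close to V-ing" on a
    # verbal tier, and pinching-hand gestures on a gesture tier.
    verbal_tier = [(12300, 13100, "this close to winning"),
                   (95400, 96050, "this close to quitting")]
    gesture_tier = [(12350, 12900, "pinch"), (40000, 41000, "beat")]

    print(count_cooccurrence(verbal_tier, gesture_tier))  # -> (1, 1)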

By the end of this project, we aim to propose ways of delineating and modeling multimodal constructions in a CxG framework; to determine what kinds of information should be included when describing constructions, which is not a trivial matter; and to apply data science methods to multimodal communication research, that is, to identify and extract gestures from multimodal corpora and to analyze them with statistical methods.
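As a purely illustrative sketch of the kind of statistical analysis mentioned above, the code below tests whether a gesture co-occurs with a target construction more often than with comparable control contexts, using a Fisher's exact test (assuming SciPy is available); all counts are invented.

    from scipy.stats import fisher_exact

    # Rows: target construction vs. control contexts; columns: gesture present vs. absent.
    # All counts are hypothetical and serve only to illustrate the analysis.
    contingency = [[87, 13],
                   [210, 690]]

    odds_ratio, p_value = fisher_exact(contingency, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")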

Involved:

Contributing FAU Organisations:

Funding Source: Deutsche Forschungsgemeinschaft (DFG), Research Training Group (GRK) programme