
Prosody and gestures to modelling multimodal interaction: constructing an Italian pilot corpus / Luca Lo Re. - In: IJCOL. - ISSN 2499-4553. - Electronic. - 7 (2021), pp. 33-44. [DOI: 10.4000/ijcol.819]

Prosody and gestures to modelling multimodal interaction: constructing an Italian pilot corpus

Luca Lo Re
2021

Abstract

Modeling dialogue implies detecting natural interaction. A pragmatic approach allows us to treat the linguistic act as composed of several different features interacting with each other. The data collected for this project comprise three genres of communication: monological, dialogical, and conversational. The project aims to identify and analyze the pragmatic value of multimodal communication by spotting the linguistic actions that carry illocution values. We adopt a pragmatic approach to the study of multimodal interaction, combining L-AcT annotation (Cresti 2000) with the gesture architecture designed by Kendon (Kendon 2004). The annotation system is designed to distinguish speech units (utterances, intonation units, and illocution types) (Hart, Collier, and Cohen 2006; Cresti 2005; Moneglia and Raso 2014) from gestural units (Gesture Unit, Gesture Phrase, Gesture Phase). Keeping the Gesture Unit as a macro-unit superordinate to the other gestural units, for quantitative purposes only, we match gesture units with speech units. These units work together to form the communicative intention of the speaker, which can be recognized through the Illocution Type. This annotation system leads to an understanding of how speakers realize multimodal linguistic actions and how the different modalities work together.
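To make the two-tier design concrete, the following minimal sketch (in Python) illustrates one possible way to represent the speech and gesture tiers and to match them by temporal overlap, mirroring the gesture-speech matching described above. It is a hypothetical illustration only, not the project's actual tooling; all class and function names are assumptions introduced here.

    # Hypothetical sketch: L-AcT speech units and Kendon-style gestural units
    # as time-aligned intervals, paired by temporal overlap.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class SpeechUnit:           # L-AcT tier
        start: float            # seconds
        end: float
        text: str
        illocution_type: str    # e.g. "assertion", "question"

    @dataclass
    class GestureUnit:          # Kendon tier
        start: float
        end: float
        level: str              # "Gesture Unit" | "Gesture Phrase" | "Gesture Phase"

    def match_units(speech: List[SpeechUnit],
                    gestures: List[GestureUnit]) -> List[Tuple[SpeechUnit, List[GestureUnit]]]:
        """Pair each speech unit with the gestural units that overlap it in time."""
        return [(s, [g for g in gestures if g.start < s.end and g.end > s.start])
                for s in speech]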
File in this product: ijcol-819_LoRe.pdf - Publisher's PDF (Version of record), Open Access license, 414.02 kB, Adobe PDF.
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1403084