Conference poster, 2022

Transcribing and comparing sign languages and co-speech gestures: the contribution of the Typannot typographic transcription system to the study of epistemicity

Transcription and comparisons between sign languages and co-speech gestures: the Typannot font

Transcription and comparisons between sign languages and co-speech gestures; Typannot and epistemicity

Abstract

To date, comparisons between sign languages (SL) and co-speech gestures (CSG) still face major challenges. One of them is the difficulty of choosing, among all the existing transcription systems (TranSys), a single TranSys that allows researchers to take into account the specific characteristics of both SL and CSG. Even though researchers in these two fields describe the same "gesturing body", they have developed their own representation models (each with its own form and scope) to code the features they are interested in. All these systems adopt a perceptual point of view, i.e. a visuo-spatial representation of gesture, and are therefore subject to differing insights (e.g., in the division of space {1}). This can lead to a fragmentation of descriptive modalities, limiting the identification of formal and functional characteristics common to SL and CSG. In doing so, researchers currently overlook a unifying perspective from which both SL and CSG can be described: the structure of the body.

To fill this gap, we have developed Typannot {2}, a typographic TranSys that aims to represent gestures from an articulatory standpoint, using a descriptive approach rooted in the body: the kinesiological approach {3}. Designed for SL but suitable for CSG as well, Typannot is a formal TranSys that captures the articulatory characteristics of each segment of the body (fingers, hand, forearm, arm; shoulder, torso and hip; neck, head, mouth and eyes). It describes how these segments can be organized from the body's articulatory perspective: the positions and transformations of their degrees of freedom. In Typannot, each articulator (upper limb, bust, mouth and eye segments) is associated with a different OpenType font; every character (in the Unicode sense) of a Typannot font carries exactly one piece of information (e.g., the degree of extension or abduction) needed to describe all the features of a segment. An advanced system of typographic ligatures lets the user see a single "holistic" glyph containing every feature, thus ensuring searchability while keeping the transcription readable.

Another challenge faced by SL and CSG TranSys is the time required to transcribe gestural phenomena, which drastically restricts the size of multimodal corpora and therefore the possibility of generalization. Here again, Typannot addresses this issue by facilitating and automating the work. On the one hand, the input interface has been designed to help transcribers produce transcriptions easily; on the other hand, the articulatory model underlying this TranSys can readily be associated with Motion Capture (MoCap) data. This correspondence makes it possible to envisage a (semi-)automatic transcription of body structure, drastically reducing transcription time.

Typannot has been used within the ANR-LexiKHuM project (2021-2025) {4} to transcribe the body postures present in DEGELS1 {5}, a multimodal and parallel corpus consisting of conversations in LSF (≃ 30 min) and in spoken French (≃ 30 min). One of the objectives of LexiKHuM is to identify the epistemic markers common to SL and CSG, in particular those that are not brachial. Thanks to Typannot, an articulatory description of the upper trunk and of facial expressions could be carried out in a coherent manner across the SL and CSG data.
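To make the character-per-feature encoding and ligature mechanism described above more concrete, the following minimal sketch in Python mimics the idea. The codepoints and feature names are hypothetical placeholders, not Typannot's actual inventory; the holistic ligature itself is applied by the OpenType font at rendering time, not by code.

    # Hypothetical Private Use Area codepoints, one per articulatory feature.
    # (Placeholder values; not the actual Typannot character set.)
    FEATURES = {
        "flexion":   "\uE001",   # degree of flexion of a segment
        "extension": "\uE002",   # degree of extension
        "abduction": "\uE003",   # degree of abduction
    }

    def encode_segment(*feature_names: str) -> str:
        """Concatenate one character per feature; the font's OpenType
        ligature (applied at display time, not here) merges the whole
        sequence into a single 'holistic' glyph."""
        return "".join(FEATURES[name] for name in feature_names)

    # Two transcribed tokens for, say, a forearm posture.
    token_a = encode_segment("flexion", "abduction")
    token_b = encode_segment("extension")

    # Because every feature remains a distinct character in the text
    # stream, transcriptions stay searchable even though they render
    # as single composite glyphs.
    corpus = [token_a, token_b]
    hits = [t for t in corpus if FEATURES["abduction"] in t]
    print(len(hits))  # -> 1

This separation between what is stored (one character per feature) and what is displayed (one ligated glyph) is what lets the same text be both machine-queryable and humanly readable.

As an illustration of how pose-estimation output could be mapped onto articulatory features, here is a second sketch that derives a coarse elbow-flexion category from COCO-style keypoints such as those produced by AlphaPose. The keypoint indices follow the common COCO 17-point convention, and the angle bins are illustrative assumptions, not Typannot's actual feature values.

    import math

    # COCO-convention indices into a flat [x, y, score, ...] keypoint list
    # (assumed output format; 6 = right shoulder, 8 = right elbow,
    # 10 = right wrist).
    R_SHOULDER, R_ELBOW, R_WRIST = 6, 8, 10

    def keypoint(kps, idx):
        """Return the (x, y) pair of keypoint `idx` from a flat list."""
        return kps[3 * idx], kps[3 * idx + 1]

    def elbow_flexion(kps):
        """Angle (degrees) between upper arm and forearm at the elbow."""
        sx, sy = keypoint(kps, R_SHOULDER)
        ex, ey = keypoint(kps, R_ELBOW)
        wx, wy = keypoint(kps, R_WRIST)
        a = (sx - ex, sy - ey)          # vector elbow -> shoulder
        b = (wx - ex, wy - ey)          # vector elbow -> wrist
        dot = a[0] * b[0] + a[1] * b[1]
        return math.degrees(math.acos(dot / (math.hypot(*a) * math.hypot(*b))))

    def to_category(angle):
        """Bin a raw angle into a coarse articulatory label
        (illustrative thresholds only)."""
        if angle < 60:
            return "strongly flexed"
        if angle < 120:
            return "flexed"
        return "extended"

    # Example: one frame's keypoints (x, y, score) for a single person.
    frame = [0.0] * 51
    frame[3 * R_SHOULDER:3 * R_SHOULDER + 3] = [100, 100, 0.9]
    frame[3 * R_ELBOW:3 * R_ELBOW + 3] = [100, 150, 0.9]
    frame[3 * R_WRIST:3 * R_WRIST + 3] = [140, 160, 0.9]
    print(to_category(elbow_flexion(frame)))  # -> "flexed" (about 104 deg)

A pipeline of this kind, run frame by frame, is one plausible way a pose estimator's output could be binned into discrete articulatory values and then emitted as the corresponding transcription characters.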
The corpus was transcribed using the ELAN software {6}; annotation was carried out both manually and with the sensor-free MoCap software AlphaPose {7}, which allows a semi-automatic annotation in Typannot characters. In this talk, we will present the kinesiological approach and the Typannot typographic font system; some transcribed data and preliminary results of LexiKHuM will also be presented, with a focus on the time required for manual versus semi-automatic transcription.

This presentation is dedicated to the memory of Dominique Boutet (1966-2020), who was co-author of our proposal in 2020 and co-coordinator of the Typannot and LexiKHuM projects.

Bibliography

{1} McNeill, D. (1992). Hand and mind: what gestures reveal about thought. Chicago: University of Chicago Press.
{2} Danet C., Boutet D., Doan P., Bianchini C.S., Contesse A., Chevrefils L., Rébulard M., Thomas C. & Dauphin J.-F. (2021). Transcribing sign languages with Typannot: a typographic system which retains and displays layers of information. Grapholinguistics and its Applications, 5(2), 1009–1037. https://doi.org/10.36824/2020-graf-dane
{3} Boutet D. (2018). Pour une approche kinésiologique de la gestualité : synthèse. Habilitation à diriger des recherches, Université de Rouen-Normandie.
{4} Catteau F., Morgenstern A. & Bianchini C.S. (2022). From gestures to kinesthetic modality: how to express epistemicity in a haptic device for human-machine interaction. 9th Intl. Conf. of the Intl. Soc. for Gesture Studies [ISGS 2022].
{5} Braffort A. & Boutora L. (2012). DEGELS1: a comparable corpus of French Sign Language and co-speech gestures. Proc. 8th Intl. Conf. on Language Resources and Evaluation [LREC 2012].
{6} Wittenburg P., Brugman H., Russel A., Klassmann A. & Sloetjes H. (2006). ELAN: a professional framework for multimodality research. Proc. 5th Intl. Conf. on Language Resources and Evaluation [LREC 2006].
{7} Fang H.-S., Xie S., Tai Y.-W. & Lu C. (2018). RMPE: Regional Multi-Person Pose Estimation. arXiv:1612.00137 [cs].
No file deposited

Dates and versions

hal-02394497 , version 1 (04-12-2019)

Identifiers

  • HAL Id: hal-02394497, version 1

Cite

Claudia S. Bianchini, Claire Danet, Léa Chevrefils, Fanny Catteau, Chloé Thomas, et al. Transcribing and comparing sign languages and co-speech gestures: the contribution of the Typannot typographic transcription system to the study of epistemicity. Sign CAFE 2: Second international workshop on cognitive and functional explorations in sign language linguistics, Oct 2022, Ragusa, Italy. ⟨hal-02394497⟩
