Formes et représentations en linguistique et littérature
Conference paper, 2019

Toward a typeface for the transcription of facial actions in sign languages


Abstract

Non-manual actions, and more specifically facial actions (FA), can be found in all Sign Languages (SL). These actions involve all the different parts of the face and can have varied and intricate linguistic relations with manual signs. Unlike in vocal languages, FA in SL convey more than simple expressions of feelings and emotions. Yet non-manual parameters remain among the least studied formal features in SL research. Over the past 30 years, some studies have begun to examine the meanings and linguistic values of non-manual signs and their relations with manual signs (Crasborn et al. 2008; Crasborn & Bank 2014); more recently, SL corpora have been analysed, segmented, and transcribed to support the study of FA (Vogt-Svendsen 2008; Bergman et al. 2008; Sutton-Spence & Day 2008).

Moreover, to fill the lack of a dedicated annotation system for FA, a few manual annotation systems have integrated facial glyphs, notably HamNoSys (Prillwitz et al. 1989) and SignWriting (Sutton 1995). On the one hand, HamNoSys was developed to describe all existing SLs at a phonetic level; it allows a formal, linear, highly detailed, and searchable description of manual parameters. For non-manual parameters, HamNoSys allows the hands to be replaced by other articulators: non-manual parameters can be written as "eyes" or "mouth" and described with the same symbols developed for the hands (Hanke 2004). Unfortunately, only a limited number of manual symbols can be translated into FA, and the annotation system remains incomplete. On the other hand, SignWriting describes SL with iconic symbols placed in a 2D space representing the signer's body. Facial expressions are divided into mouth, eyes, nose, eyebrows, etc., and are drawn inside a circular "head", much like emoticons. SignWriting offers a detailed description of the postures and actions of non-manual parameters, but it is not compatible with the annotation software most commonly used by SL linguists (e.g., ELAN).
Typannot, an interdisciplinary project led by linguists, designers, and developers that aims to build a complete transcription system covering every SL parameter (handshape, localisation, movement, FA), has developed a different methodology. As mentioned earlier, FA carry various linguistic values (mouthings, adverbial mouth gestures, semantically empty, enacting, whole face) and also convey prosody and emotional meaning. In this regard, they can be more variable and signer-dependent than manual parameters. To offer the best possible annotation tool, Typannot's approach has been to define the facial parameters and all their tangible configurations. The goal is to devise the most efficient and simple, yet complete and universal, formula for describing all possible FA. This formula is based on a three-dimensional grid: every configuration of a facial articulator can be described by its position on the X, Y, and Z axes. As a result, all FA can be described and encoded using a restricted list of 39 qualifiers. Based on this model, and to streamline the annotation process, a set of generic glyphs has been developed: each qualifier has its own symbolic "generic" glyph. This methodical decomposition of all facial components enables a precise and accurate transcription of a complex FA using only a few glyphs. The formula and its generic glyphs have gone through a series of tests and revisions. Recently, an 18m20s FA corpus of two deaf signers was recorded with two different cameras: the first, an RGB HQ camera, captures a high-quality image, while the second, an infrared Kinect, captures depth. The latter was linked to Brekel Proface 2 (Leong et al. 2015), a 3D animation software that enables automatic recognition of FA. This corpus has been fully annotated using Typannot generic glyphs. These annotations have validated the general structure of the Typannot FA formula and identified some minor corrections to be made.
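As a rough illustration of how a qualifier-based formula of this kind could be represented in annotation software, here is a minimal Python sketch. The articulator names, axis labels, qualifier values, and the `describe` serialisation below are hypothetical illustrations for this sketch only, not Typannot's actual identifiers, qualifier list, or glyph encoding.

```python
# Minimal sketch of a qualifier-based facial-action (FA) descriptor.
# All names and values here are illustrative assumptions, not the
# project's real inventory of 39 qualifiers.

from dataclasses import dataclass


@dataclass(frozen=True)
class FacialQualifier:
    articulator: str   # e.g. "eyebrows", "eyelids", "cheeks", "mouth"
    axis: str          # position axis: "X", "Y" or "Z"
    value: str         # one label from a closed list of qualifiers


def describe(*qualifiers: FacialQualifier) -> str:
    """Serialise a facial action as a linear, searchable string,
    one articulator.axis=value term per qualifier."""
    return ";".join(f"{q.articulator}.{q.axis}={q.value}" for q in qualifiers)


# A raised-eyebrows + puffed-cheeks action needs only a few qualifiers:
fa = describe(
    FacialQualifier("eyebrows", "Y", "raised"),
    FacialQualifier("cheeks", "Z", "puffed"),
)
print(fa)  # eyebrows.Y=raised;cheeks.Z=puffed
```

Because each qualifier is an independent, closed-vocabulary term, a linear serialisation like this stays machine-searchable in the same way the abstract describes for HamNoSys manual descriptions.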
For instance, it has been shown that the description of the air used to puff out or suck in the cheeks is too restrictive, while the description of the opening and closing of the eyelids is unnecessarily precise. Once those changes are implemented, our next task will be to develop a morphological glyphic system that combines the different generic glyphs used for each facial parameter into one unique morphological glyph. This means that, for any given FA, all the information contained in the Typannot descriptive formula will be held within a single legible glyph. Some early research has already begun on this topic, but it needs further development before a statement can be made on its typographic structure. Once this system is completed, it will be released with its own virtual keyboard (Typannot Keyboard, currently in development for handshapes) to ease transcription and improve annotation workflows.

Bibliography:
- Chételat-Pelé, E. (2010). Les Gestes Non Manuels en Langue des Signes Française ; annotation, analyse et formalisation : application aux mouvements des sourcils et aux clignements des yeux. PhD thesis, Université de Provence - Aix-Marseille I.
- Crasborn, O., van der Kooij, E., Waters, D., Woll, B., & Mesch, J. (2008). Frequency distribution and spreading behavior of different types of mouth actions in three sign languages. Sign Language & Linguistics, 11(1), 45-67.
- Crasborn, O. A., & Bank, R. (2014). An annotation scheme for the linguistic study of mouth actions in sign languages. http://repository.ubn.ru.nl/handle/2066/132960
- Fontana, S. (2008). Mouth actions as gesture in sign language. Gesture, 8(1), 104-123.
- Hanke, T. (2004). HamNoSys - Representing sign language data in language resources and language processing contexts. In Workshop on the Representation and Processing of Sign Languages, Fourth International Conference on Language Resources and Evaluation (pp. 1-6).
- Leong, C. W., Chen, L., Feng, G., Lee, C. M., & Mulholland, M. (2015). Utilizing depth sensors for analyzing multimodal presentations: hardware, software and toolkits. In Proceedings of the 2015 ACM International Conference on Multimodal Interaction (pp. 547-556). ACM.
- Prillwitz, S., Leven, R., Zienert, H., Hanke, T., & Henning, J. (1989). Hamburg Notation System for Sign Languages: An Introductory Guide. Hamburg: Signum Press.
- Sandler, W. (2009). Symbiotic symbolization by hand and mouth in sign language. Semiotica, 2009(174), 241-275. http://doi.org/10.1515/semi.2009.035
- Sutton, V. (1995). Lessons in SignWriting: Textbook. La Jolla, CA: DAC.
- Sutton-Spence, R., & Boyes-Braem, P. (2001). The Hands Are the Head of the Mouth: The Mouth as Articulator in Sign Languages. Hamburg: Signum Press.

Domains

Linguistics
Main file
4C C049 2019-poster SNM2 Graz.pdf (2.86 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02342442, version 1 (16-05-2020)

Identifiers

  • HAL Id: hal-02342442, version 1

Cite

Adrien Contesse, Chloé Thomas, Claudia S. Bianchini, Claire Danet, Patrick Doan, et al.. Toward a typeface for the transcription of facial actions in sign languages. Workshop "SignNonManuals 2", May 2019, Graz, Austria. ⟨hal-02342442⟩