Research Report 2017



Sign Language and Mouth Gesture Recognition

Institute: E-2
Project lead: Rolf-Rainer Grigat
Deputy project lead: Maren Brumm
Staff: Maren Brumm
Duration: 01.10.2016 – 30.09.2019
Funding: Universität Hamburg
URL: https://www.tuhh.de/alt/bvs/research/current-research/computer-vision.html#c78497

In sign languages there are two different kinds of mouth patterns. Mouthings are derived from a spoken language, and their detection is therefore very similar to lip reading. Mouth gestures, on the other hand, have formed within the sign languages themselves and bear no relation to spoken languages [1].

In this project we aim at the automatic detection and classification of mouth gestures in German Sign Language (DGS), in order to facilitate research on the usage of mouth gestures and to advance the open problem of reliable automatic sign language recognition.

To this end, we are developing a deep neural network that can be trained to learn the spatiotemporal features of mouth gestures.
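
As an illustration of what such a spatiotemporal model might look like, the following is a minimal sketch of a 3D convolutional network in PyTorch that classifies short mouth-region clips. The layer sizes, clip dimensions, and number of gesture classes are placeholder assumptions for illustration only; the report does not describe the actual architecture used in the project.

    # Illustrative sketch only: a small 3D CNN for mouth gesture classification.
    # Number of classes, clip length, and layer sizes are assumed placeholders.
    import torch
    import torch.nn as nn

    class MouthGestureNet(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            # 3D convolutions learn joint spatial (lip shape) and temporal (movement) features.
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool3d(kernel_size=2),      # halve temporal and spatial resolution
                nn.Conv3d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),          # global pooling over time and space
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):
            # x: (batch, channels=1, frames, height, width), e.g. grayscale mouth crops
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # Example: classify a batch of two 16-frame 64x64 grayscale clips.
    clips = torch.randn(2, 1, 16, 64, 64)
    logits = MouthGestureNet()(clips)
    print(logits.shape)  # torch.Size([2, 10])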

[1] Keller, J. (2001). Multimodal Representation and the Linguistic Status of Mouthings in German Sign Language (DGS). In: The Hands are the Head of the Mouth: The Mouth as Articulator in Sign Languages (pp. 191–230). Hamburg: Signum Verlag.

Keywords

  • Neural Networks
  • Automatic Sign Language Recognition
  • Deep Learning
  • Machine Learning
  • Mouth Gestures