Please use this identifier to cite or link to this item:
Title: A multimodal LIBRAS-UFOP Brazilian sign language dataset of minimal pairs using a Microsoft Kinect sensor.
Authors: Ramírez Cerna, Lourdes
Escobedo Cárdenas, Edwin Jonathan
Miranda, Dayse Garcia
Gomes, David Menotti
Cámara Chávez, Guillermo
Keywords: Sign language recognition
Dynamic images
RGB-D data
Issue Date: 2021
Citation: RAMÍREZ CERNA, L. et al. A multimodal LIBRAS-UFOP Brazilian sign language dataset of minimal pairs using a Microsoft Kinect sensor. Expert Systems With Applications, v. 167, article 114179, 2021. Available at: <>. Accessed: 25 Aug. 2021.
Abstract: Sign language recognition has made significant advances in recent years. Many researchers are interested in developing applications that simplify the daily life of deaf people and integrate them into the hearing society. The use of the Kinect sensor (developed by Microsoft) for sign language recognition is steadily increasing. However, few publicly available RGB-D and skeleton joint datasets provide complete information for dynamic signs captured by a Kinect sensor; most of them lack effective and accurate labeling or are stored in a single data format. Given the limitations of existing datasets, this article presents a challenging public dataset, named LIBRAS-UFOP. The dataset is based on the concept of minimal pairs, which follows specific categorization criteria; the signs are correctly labeled and validated by a sign language expert; and the dataset provides complete RGB-D and skeleton data. It consists of 56 different signs with high similarity, grouped into four categories. In addition, a baseline method is presented that generates dynamic images from each modality and feeds them to two-stream CNN architectures. Finally, we propose an experimental protocol for conducting evaluations on the proposed dataset. Due to the high similarity between signs, the baseline method achieves a recognition rate of 74.25% on the proposed dataset. This result highlights how challenging the dataset is for sign language recognition and leaves room for future research to improve the recognition rate.
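The baseline described in the abstract compresses each video modality into a single "dynamic image" before feeding it to a CNN. The abstract does not give the exact formulation, but a common way to build a dynamic image is approximate rank pooling, which weights frame t of a T-frame clip by α_t = 2t − T − 1. A minimal sketch under that assumption (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a clip (T, H, W, C) into one image via approximate rank pooling."""
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    alpha = 2 * t - T - 1              # weight of each frame: alpha_t = 2t - T - 1
    # Weighted sum over the time axis -> single (H, W, C) image
    di = np.tensordot(alpha, frames.astype(np.float64), axes=(0, 0))
    # Rescale to [0, 255] so the result can be treated as a regular image
    di -= di.min()
    if di.max() > 0:
        di = 255.0 * di / di.max()
    return di.astype(np.uint8)

# Toy example: 8 random RGB frames of size 4x4
video = np.random.rand(8, 4, 4, 3)
img = dynamic_image(video)
print(img.shape)  # (4, 4, 3)
```

In a two-stream setup, one such image would be computed per modality (e.g. RGB and depth) and each stream's CNN would consume its own dynamic image.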
ISSN: 0957-4174
Appears in Collections:DELET - Artigos publicados em periódicos

Files in This Item:
File: Restricted Access, 3.23 MB, Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.