Please use this identifier to cite or link to this item: http://www.repositorio.ufop.br/jspui/handle/123456789/14751
Title: A multimodal LIBRAS-UFOP Brazilian sign language dataset of minimal pairs using a Microsoft Kinect sensor.
Author(s): Ramírez Cerna, Lourdes
Escobedo Cárdenas, Edwin Jonathan
Miranda, Dayse Garcia
Gomes, David Menotti
Cámara Chávez, Guillermo
Keywords: Sign language recognition
Dynamic images
RGB-D data
Issue Date: 2021
Citation: RAMÍREZ CERNA, L. et al. A multimodal LIBRAS-UFOP Brazilian sign language dataset of minimal pairs using a Microsoft Kinect sensor. Expert Systems With Applications, v. 167, article 114179, 2021. Available at: <https://www.sciencedirect.com/science/article/abs/pii/S0957417420309143>. Accessed: 25 Aug. 2021.
Abstract: Sign language recognition has made significant advances in recent years. Many researchers are interested in encouraging the development of applications that simplify the daily life of deaf people and integrate them into the hearing society. The use of the Kinect sensor (developed by Microsoft) for sign language recognition is steadily increasing. However, few publicly available RGB-D and skeleton joint datasets provide complete information for dynamic signs captured by the Kinect sensor; most of them lack effective and accurate labeling or are stored in a single data format. Given the limitations of existing datasets, this article presents a challenging public dataset, named LIBRAS-UFOP. The dataset is based on the concept of minimal pairs and follows specific categorization criteria; the signs are labeled correctly and validated by a sign language expert; and the dataset provides complete RGB-D and skeleton data. It consists of 56 different signs with high similarity, grouped into four categories. In addition, a baseline method is presented that generates dynamic images from each data modality, which serve as input to two-stream CNN architectures. Finally, we propose an experimental protocol for conducting evaluations on the proposed dataset. Due to the high similarity between signs, the experimental results using the baseline method report a recognition rate of 74.25% on the proposed dataset. This result highlights how challenging this dataset is for sign language recognition and leaves room for future research to improve the recognition rate.
URI: http://www.repositorio.ufop.br/jspui/handle/123456789/14751
Link to the article: https://www.sciencedirect.com/science/article/abs/pii/S0957417420309143
DOI: https://doi.org/10.1016/j.eswa.2020.114179
ISSN: 0957-4174
Appears in Collections: DELET - Articles published in journals

Files associated with this item:
File  Description  Size  Format
ARTIGO_MultimodalLibrasUFOP.pdf  Restricted Access  3.23 MB  Adobe PDF


Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.