A shape-aware retargeting approach to transfer human motion and appearance in monocular videos.

dc.contributor.author: Gomes, Thiago Luange
dc.contributor.author: Martins, Renato José
dc.contributor.author: Ferreira, João Pedro Moreira
dc.contributor.author: Azevedo, Rafael Augusto Vieira de
dc.contributor.author: Torres, Guilherme Alvarenga
dc.contributor.author: Nascimento, Erickson Rangel do
dc.date.accessioned: 2022-09-15T21:20:57Z
dc.date.available: 2022-09-15T21:20:57Z
dc.date.issued: 2021
dc.description.abstract: Transferring human motion and appearance between videos of human actors remains one of the key challenges in computer vision. Despite the advances of recent image-to-image translation approaches, there are several transfer contexts where most end-to-end learning-based retargeting methods still perform poorly. Transferring human appearance from one actor to another is only ensured when a strict setup is complied with, one generally built around the specificities of each method's training regime. In this work, we propose a shape-aware approach based on a hybrid image-based rendering technique that exhibits competitive visual retargeting quality compared to state-of-the-art neural rendering approaches. The formulation leverages the user's body shape in the retargeting while considering physical constraints of the motion in both 3D and the 2D image domain. We also present a new video retargeting benchmark dataset composed of different videos with annotated human motions to evaluate the task of synthesizing videos of people, which can serve as a common base for tracking progress in the field. The dataset and its evaluation protocols are designed to evaluate retargeting methods under more general and challenging conditions. Our method is validated in several experiments comprising publicly available videos of actors with different shapes, motion types, and camera setups. The dataset and retargeting code are publicly available to the community at: https://www.verlab.dcc.ufmg.br/retargeting-motion.
dc.identifier.citation: GOMES, T. L. et al. A shape-aware retargeting approach to transfer human motion and appearance in monocular videos. International Journal of Computer Vision, v. 129, p. 2057-2075, 2021. Available at: <https://link.springer.com/article/10.1007/s11263-021-01471-x>. Accessed: 29 Apr. 2022.
dc.identifier.doi: https://doi.org/10.1007/s11263-021-01471-x
dc.identifier.issn: 1573-1405
dc.identifier.uri: http://www.repositorio.ufop.br/jspui/handle/123456789/15317
dc.identifier.uri2: https://link.springer.com/article/10.1007/s11263-021-01471-x
dc.language.iso: en_US
dc.rights: restricted
dc.subject: Motion retargeting
dc.subject: Human image synthesis
dc.subject: Video-to-video translation
dc.subject: Image manipulation
dc.title: A shape-aware retargeting approach to transfer human motion and appearance in monocular videos.
dc.type: Journal article
Files

Original Bundle
Name: ARTIGO_ShapeAwareRetargenting.pdf
Size: 3.81 MB
Format: Adobe Portable Document Format

License Bundle
Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed upon to submission