Abstract

Automatic human facial recognition has been an active research topic with various potential applications. In this paper, we propose effective multi-task deep learning frameworks that can jointly learn representations for three tasks: smile detection, emotion recognition, and gender classification. In addition, our frameworks can be learned from multiple sources of data with different kinds of task-specific class labels. Extensive experiments show that our frameworks achieve superior accuracy over recent state-of-the-art methods in all three tasks on popular benchmarks. We also show that joint learning helps tasks with less data benefit considerably from other tasks with richer data.
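The record does not include the paper's exact architectures, so the following is only a minimal illustrative sketch of the general idea described in the abstract: a shared feature extractor with one head per task, trained on batches drawn from multiple sources, where a task's loss term is skipped for samples whose source does not provide that task's label. All layer sizes, class counts, and names (MultiTaskFaceNet, multi_source_loss) are hypothetical, not taken from the paper.

import torch
import torch.nn as nn

class MultiTaskFaceNet(nn.Module):
    """Shared convolutional trunk with one classification head per task."""
    def __init__(self, num_emotions=7):
        super().__init__()
        # Small stand-in backbone; the paper uses deeper CNNs.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.smile_head = nn.Linear(64, 2)          # smile vs. non-smile
        self.emotion_head = nn.Linear(64, num_emotions)
        self.gender_head = nn.Linear(64, 2)         # male vs. female

    def forward(self, x):
        feats = self.backbone(x)                    # shared representation
        return self.smile_head(feats), self.emotion_head(feats), self.gender_head(feats)

def multi_source_loss(outputs, labels):
    """Sum cross-entropy over tasks, ignoring samples without that task's label.

    `labels` maps task name -> tensor of class indices, with -1 marking
    samples whose source does not annotate that task.
    """
    ce = nn.CrossEntropyLoss(ignore_index=-1)
    smile_out, emotion_out, gender_out = outputs
    loss = 0.0
    for out, y in ((smile_out, labels["smile"]),
                   (emotion_out, labels["emotion"]),
                   (gender_out, labels["gender"])):
        if (y != -1).any():                         # at least one labeled sample
            loss = loss + ce(out, y)
    return loss

if __name__ == "__main__":
    model = MultiTaskFaceNet()
    x = torch.randn(4, 1, 48, 48)                   # e.g. grayscale 48x48 face crops
    labels = {
        "smile":   torch.tensor([1, 0, -1, -1]),    # only the first source labels smiles
        "emotion": torch.tensor([-1, -1, 3, 5]),
        "gender":  torch.tensor([0, 1, 1, 0]),
    }
    loss = multi_source_loss(model(x), labels)
    loss.backward()

In this setup, gradients from every labeled task flow into the shared trunk, which is how a task with scarce labels can benefit from sources that are rich in the other tasks' labels.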

Alternate abstract:

An original deep neural network method is developed for three simultaneous tasks: recognition of smiles, emotions, and gender.

Details

Title
Effective Deep Multi-source Multi-task Learning Frameworks for Smile Detection, Emotion Recognition and Gender Classification
Author
Sang, Dinh Viet; Cuong, Le Tran Bao
Pages
345-356
Publication year
2018
Publication date
Sep 2018
Publisher
Slovenian Society Informatika / Slovensko društvo Informatika
ISSN
0350-5596
e-ISSN
1854-3871
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2133766966
Copyright
© 2018. This work is published under https://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.