Abstract

This paper proposes a multi-kernel multi-view canonical correlations (M2CCs) framework for subspace learning. In the proposed framework, the input data of each original view are mapped into multiple higher-dimensional feature spaces by multiple nonlinear mappings determined by different kernels. This enables M2CC to discover multiple kinds of useful information of each original view in the feature spaces. Within the framework, we further provide a specific multi-view feature learning method based on the direct summation kernel strategy, together with its regularized version. Experimental results on visual recognition tasks demonstrate the effectiveness and robustness of the proposed method.
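The abstract describes combining several kernels per view (a direct summation kernel) and then relating the views with a regularized kernel CCA. The sketch below illustrates that pipeline in Python with NumPy only; the kernel choices (linear plus RBF kernels), bandwidth values, and the regularization scheme are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def center_kernel(K):
    # Double-center a kernel matrix: K_c = H K H with H = I - (1/n) 11^T.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) kernel on the rows of X.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def summed_kernel(X, gammas=(0.5, 2.0)):
    # Direct-summation strategy (assumed form): add a linear kernel and
    # several RBF kernels so one combined kernel carries multiple
    # notions of similarity for the same view.
    K = X @ X.T  # linear kernel
    for g in gammas:
        K = K + rbf_kernel(X, g)
    return center_kernel(K)

def multi_kernel_cca(X, Y, reg=1e-3):
    # Regularized kernel CCA on the summed kernels of two views.
    # Solves the standard KCCA eigenproblem and returns the leading
    # canonical correlation between the views.
    Kx, Ky = summed_kernel(X), summed_kernel(Y)
    n = Kx.shape[0]
    Rx = Kx @ Kx + reg * np.eye(n)  # regularized within-view terms
    Ry = Ky @ Ky + reg * np.eye(n)
    M = np.linalg.solve(Rx, Kx @ Ky) @ np.linalg.solve(Ry, Ky @ Kx)
    # Top eigenvalue of M is the squared leading canonical correlation.
    eigvals = np.linalg.eigvals(M)
    return float(np.sqrt(np.max(eigvals.real)))
```

For two strongly related views (e.g. `Y` a noisy linear transform of `X`), `multi_kernel_cca(X, Y)` returns a correlation close to 1; the `reg` term keeps the kernel Gram matrices well conditioned, which is the role of the "regularized version" mentioned in the abstract.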

Details

Title
Learning multi-kernel multi-view canonical correlations for image recognition
Author
Yuan, Yun-Hao 1; Li, Yun 2; Liu, Jianjun 3; Li, Chao-Feng 3; Shen, Xiao-Bo 4; Zhang, Guoqing 5; Sun, Quan-Sen 5

1 Yangzhou University, Department of Computer Science, College of Information Engineering, Yangzhou, China (GRID:grid.268415.c); Jiangnan University, Department of Computer Science, Wuxi, China (GRID:grid.258151.a) (ISNI:0000000107081323)
2 Yangzhou University, Department of Computer Science, College of Information Engineering, Yangzhou, China (GRID:grid.268415.c)
3 Jiangnan University, Department of Computer Science, Wuxi, China (GRID:grid.258151.a) (ISNI:0000000107081323)
4 Nanjing University of Science and Technology, School of Computer Science, Nanjing, China (GRID:grid.410579.e) (ISNI:0000000091169901); The University of Queensland, School of Information Technology and Electrical Engineering, Brisbane QLD, Australia (GRID:grid.1003.2) (ISNI:0000000093207537)
5 Nanjing University of Science and Technology, School of Computer Science, Nanjing, China (GRID:grid.410579.e) (ISNI:0000000091169901)
Pages
153-162
Publication year
2016
Publication date
Jun 2016
Publisher
Springer Nature B.V.
ISSN
2096-0433
e-ISSN
2096-0662
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2407020284
Copyright
© The Author(s) 2016. This work is published under https://creativecommons.org/licenses/by/4.0 (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.