Abstract

Multi-view clustering (MVC), which aims to explore the underlying structure of data by leveraging the heterogeneous information of different views, has attracted increasing attention. Multi-view clustering algorithms based on different theories have been proposed and applied in various domains. However, most existing MVC algorithms are shallow models that learn the structure of multi-view data by mapping it directly to a low-dimensional representation space, ignoring the nonlinear structure hidden in each view; this weakens clustering performance to a certain extent. In this paper, we propose a deep multi-view clustering algorithm based on multiple auto-encoders, termed MVC-MAE, to cluster multi-view data. MVC-MAE adopts auto-encoders to capture the nonlinear structure information of each view in a layer-wise manner and incorporates the local invariance within each view as well as the consistent and complementary information between any two views. Besides, we integrate representation learning and clustering into a unified framework, such that the two tasks can be jointly optimized. Extensive experiments on six real-world datasets demonstrate the promising performance of our algorithm compared with 15 baseline algorithms in terms of two evaluation metrics.
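To make the architecture described above more concrete, the following is a minimal sketch, assuming PyTorch, of a "one auto-encoder per view" setup with joint reconstruction and clustering objectives. The layer sizes, the fusion by simple concatenation, and the DEC-style Student-t/KL clustering loss are illustrative assumptions, not the paper's exact MVC-MAE formulation (which additionally models local invariance within views and consistency/complementarity between views).

```python
# Illustrative sketch only: one fully connected auto-encoder per view, latent
# codes fused by concatenation, and a DEC-style soft-assignment clustering loss.
# These design choices are assumptions for exposition, not the authors' method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewAutoEncoder(nn.Module):
    """Fully connected auto-encoder for a single view."""

    def __init__(self, in_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


class MultiViewClusterer(nn.Module):
    """One auto-encoder per view plus learnable cluster centroids."""

    def __init__(self, view_dims, n_clusters, latent_dim=32):
        super().__init__()
        self.aes = nn.ModuleList(ViewAutoEncoder(d, latent_dim) for d in view_dims)
        # Centroids live in the concatenated latent space of all views.
        self.centroids = nn.Parameter(torch.randn(n_clusters, latent_dim * len(view_dims)))

    def forward(self, views):
        zs, recon_loss = [], 0.0
        for ae, x in zip(self.aes, views):
            z, x_hat = ae(x)
            zs.append(z)
            recon_loss = recon_loss + F.mse_loss(x_hat, x)  # per-view reconstruction
        z_all = torch.cat(zs, dim=1)  # fuse views by concatenation
        # Student-t soft assignments between fused samples and centroids (as in DEC).
        dist = torch.cdist(z_all, self.centroids) ** 2
        q = (1.0 + dist).pow(-1)
        q = q / q.sum(dim=1, keepdim=True)
        return z_all, q, recon_loss


def clustering_loss(q):
    """KL(P || Q) against the sharpened target distribution P."""
    p = (q ** 2) / q.sum(dim=0)
    p = p / p.sum(dim=1, keepdim=True)
    return F.kl_div(q.log(), p, reduction="batchmean")
```

Training in such a sketch would minimize recon_loss plus a weighted clustering_loss(q) in a single backward pass, which mirrors the abstract's point about optimizing representation learning and clustering jointly rather than in two separate stages.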

Details

Title
Deep Multiple Auto-Encoder-Based Multi-view Clustering
Author
Du, Guowang 1; Zhou, Lihua 1; Yang, Yudi 1; Lü, Kevin 2; Wang, Lizhen 1

1 Yunnan University, School of Information Science and Engineering, Kunming, P.R. China (GRID:grid.440773.3) (ISNI:0000 0000 9342 2456)
2 Brunel University, Uxbridge, UK (GRID:grid.7728.a) (ISNI:0000 0001 0724 6933)
Pages
323-338
Publication year
2021
Publication date
Sep 2021
Publisher
Springer Nature B.V.
e-ISSN
2364-1541
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2556557459
Copyright
© The Author(s) 2021. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.