Abstract

This paper concentrates on a parallel acceleration method for optimizing Gaussian process hyper-parameters by maximum likelihood estimation. Optimizing the hyper-parameters requires repeated inversions of the kernel matrix, and the computational burden grows rapidly as the kernel matrix scale increases. To improve computational efficiency, we introduce a decomposing and iterative (DI) algorithm, which divides the large-scale kernel matrix into four blocks and computes the matrix inverse in a fixed number of iterations. Because the computations on the sub-matrix blocks are mutually independent, they are well suited to execution on a graphics processing unit; on this basis, the parallel decomposing and iterative (DIP) algorithm is introduced. Experiments on an inverted pendulum and a ball-plate system confirm the effectiveness of the DI and DIP algorithms. The simulation results suggest that the proposed algorithms are promising for real engineering applications. This paper thus provides a practical and feasible approach to accelerating the optimization of hyper-parameters by maximum likelihood estimation.
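For context, the objective behind the repeated inversions is the standard Gaussian process log marginal likelihood, log p(y | X, theta) = -(1/2) y^T K^{-1} y - (1/2) log det K - (n/2) log(2*pi), which needs the inverse of the n x n kernel matrix K at every step of the hyper-parameter search. The DI iteration itself is not reproduced in this record, so the following is only a minimal sketch of the underlying four-block idea, assuming NumPy and a direct Schur-complement recombination; the function name block_inverse, the partition index m, and the test kernel are illustrative, not the authors' implementation.

import numpy as np

def block_inverse(K, m):
    """Invert a symmetric positive-definite kernel matrix K by splitting
    it into four blocks at index m (a sketch of the block idea only)."""
    A = K[:m, :m]   # top-left block
    B = K[:m, m:]   # top-right block
    C = K[m:, :m]   # bottom-left block (equals B.T when K is symmetric)
    D = K[m:, m:]   # bottom-right block

    A_inv = np.linalg.inv(A)        # inverse of the small leading block
    S = D - C @ A_inv @ B           # Schur complement of A in K
    S_inv = np.linalg.inv(S)

    # The four blocks of K^{-1} depend only on A_inv and S_inv, so their
    # assembly steps are independent of one another; this independence is
    # the property a GPU-parallel variant like DIP can exploit.
    top_left = A_inv + A_inv @ B @ S_inv @ C @ A_inv
    top_right = -A_inv @ B @ S_inv
    bottom_left = -S_inv @ C @ A_inv
    return np.block([[top_left, top_right],
                     [bottom_left, S_inv]])

# Quick numerical check on a random RBF kernel matrix with jitter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-0.5 * sq) + 1e-6 * np.eye(200)
K_inv = block_inverse(K, 100)
print(np.allclose(K_inv @ K, np.eye(200), atol=1e-6))  # expect True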

Details

Title
Optimization of kernel learning algorithm based on parallel architecture
Author
Lu, Li 1; Chen, Xin 1

1 China University of Geosciences, School of Automation, Wuhan, China (GRID:grid.503241.1) (ISNI:0000 0004 1760 9015); Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan, China (GRID:grid.503241.1)
Pages
1881-1907
Publication year
2020
Publication date
Aug 2020
Publisher
Springer Nature B.V.
ISSN
0010-485X
e-ISSN
1436-5057
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2289870984
Copyright
© Springer-Verlag GmbH Austria, part of Springer Nature 2019.