Abstract

We consider a general regularised interpolation problem for learning a parameter vector from data. The well-known representer theorem states that under certain conditions on the regulariser there exists a solution in the linear span of the data points. This result is at the core of kernel methods in machine learning, as it makes the problem computationally tractable. Necessary and sufficient conditions for differentiable regularisers on Hilbert spaces to admit a representer theorem have previously been proved. We extend those results to nondifferentiable regularisers on uniformly convex and uniformly smooth Banach spaces. This gives a (more) complete answer to the question of when there is a representer theorem. We then note that for regularised interpolation the solution is in fact determined by the function space alone, independently of the regulariser, which makes the extension to Banach spaces even more valuable.
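
To make the setting concrete, the following is a minimal sketch of the classical regularised interpolation problem the abstract refers to; the notation (Hilbert space \(\mathcal{H}\), regulariser \(\Omega\), data \((x_i, y_i)\)) is illustrative and not taken from the paper itself.

\[
\min_{w \in \mathcal{H}} \; \Omega(w)
\quad \text{subject to} \quad
\langle w, x_i \rangle_{\mathcal{H}} = y_i, \qquad i = 1, \dots, n.
\]

A representer theorem then asserts that, under suitable conditions on \(\Omega\), some solution lies in the linear span of the data points,

\[
w^{*} = \sum_{i=1}^{n} c_i \, x_i, \qquad c_i \in \mathbb{R},
\]

so the (possibly infinite-dimensional) problem reduces to finding the \(n\) coefficients \(c_1, \dots, c_n\).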

Details

Title
When is there a representer theorem?
Author
Schlegel, Kevin
Affiliation
Mathematical Institute, University of Oxford, Oxford, UK
Pages
401-415
Publication year
2019
Publication date
Jun 2019
Publisher
Springer Nature B.V.
ISSN
0925-5001
e-ISSN
1573-2916
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2205097595
Copyright
© 2019 Springer Nature B.V., Journal of Global Optimization. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).