Mach Learn (2013) 90:161–189
DOI 10.1007/s10994-012-5310-y
Liu Yang · Steve Hanneke · Jaime Carbonell
Received: 29 November 2010 / Revised: 3 March 2012 / Accepted: 18 June 2012 / Published online: 7 July 2012
© The Author(s) 2012
Abstract We explore a transfer learning setting, in which a finite sequence of target concepts are sampled independently with an unknown distribution from a known family. We study the total number of labeled examples required to learn all targets to an arbitrary specified expected accuracy, focusing on the asymptotics in the number of tasks and the desired accuracy. Our primary interest is formally understanding the fundamental benefits of transfer learning, compared to learning each target independently from the others. Our approach to the transfer problem is general, in the sense that it can be used with a variety of learning protocols. As a particularly interesting application, we study in detail the benefits of transfer for self-verifying active learning; in this setting, we find that the number of labeled examples required for learning with transfer is often significantly smaller than that required for learning each target independently.
Keywords Transfer learning · Multi-task learning · Active learning · Statistical learning theory · Bayesian learning · Sample complexity
A theory of transfer learning with applications to active learning

Editor: Tong Zhang.

L. Yang (✉)
Machine Learning Department, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
e-mail: [email protected]

S. Hanneke
Department of Statistics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
e-mail: [email protected]

J. Carbonell
Language Technologies Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
e-mail: [email protected]

1 Introduction

Transfer learning reuses knowledge from past related tasks to ease the process of learning to perform a new task. The goal of transfer learning is to leverage previous learning and experience to more efficiently learn novel, but related, concepts, compared to what would be possible without this prior experience. The utility of transfer learning is typically measured by a reduction in the number of training examples required to achieve a target performance on a sequence of related learning problems, compared to the number required for unrelated problems: i.e., reduced sample complexity. In many real-life scenarios, just a few training examples of a...