Abstract
Deep learning is effective at large scales due in part to advances in self-supervised learning, a paradigm encompassing a broad class of training algorithms that learn useful, informative representations end-to-end without explicit data labels. Despite this success, many widespread deep learning models remain limited in their ability to generalize to new tasks and data distributions, and often require large amounts of labeled data to achieve good performance. In this thesis, we explore ways to use regularization to improve deep learning models and prevent collapse in their hidden representations, an undesirable scenario in which a network learns trivial or uninformative representations. Throughout the included works, we adopt the lens of energy-based models as the learning framework in which we apply our regularization techniques. In the first part of the thesis, we focus on sparse coding, a classic self-supervised algorithm for extracting image representations. We extend the original sparse coding algorithm to incorporate a non-linear decoder, then evaluate it on tasks including image classification in the low-data regime. In the second part, we focus on building world models through self-supervised video representation learning, using joint-embedding predictive architectures as an alternative to generative predictive models. Our study suggests that this approach yields more information-rich video representations. Finally, we present research on improving video representations through variance and covariance regularization in the setting of supervised transfer learning. We hope these findings spur new research into using regularization to prevent collapse in both current and next-generation deep learning architectures.
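To illustrate the kind of regularization the abstract refers to, the sketch below shows a variance and covariance penalty on a batch of embeddings in the style of the commonly used VICReg formulation. This is a minimal NumPy illustration, not the thesis's actual method: the function name, the hinge-at-gamma variance term, and the parameter values are all our assumptions.

```python
import numpy as np

def variance_covariance_penalty(z, gamma=1.0, eps=1e-4):
    """Collapse-prevention penalty on a batch of embeddings z of shape (n, d).

    Variance term: a hinge loss pushing each embedding dimension's standard
    deviation above gamma, so no dimension collapses to a constant
    (a trivial, uninformative representation).
    Covariance term: penalizes off-diagonal entries of the covariance
    matrix, decorrelating dimensions so they carry distinct information.
    """
    n, d = z.shape
    z_centered = z - z.mean(axis=0)
    # Variance term: mean over dimensions of max(0, gamma - std_j).
    std = np.sqrt(z_centered.var(axis=0) + eps)
    var_loss = np.mean(np.maximum(0.0, gamma - std))
    # Covariance term: sum of squared off-diagonal covariance entries,
    # scaled by the embedding dimension.
    cov = (z_centered.T @ z_centered) / (n - 1)
    off_diag = cov - np.diag(np.diag(cov))
    cov_loss = (off_diag ** 2).sum() / d
    return var_loss, cov_loss
```

A fully collapsed batch (all embeddings identical) incurs a variance penalty near gamma per dimension, while a well-spread, decorrelated batch incurs a penalty near zero; in training, these terms would be added to the main objective to keep the representations informative.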