Abstract

Multi-Task Learning (MTL) is a powerful technique that has gained popularity thanks to its performance improvements over traditional Single-Task Learning (STL). However, MTL is challenging in practice: the number of possible task groupings grows exponentially, and choosing a good one is difficult because some groupings degrade performance through negative interference between tasks. As a result, existing solutions suffer from severe scalability issues that limit their practical application. In this paper, we propose a new data-driven method that addresses these challenges, providing a scalable and modular solution for grouping classification tasks. It builds on re-purposed data-driven features, Data Maps, which capture the training dynamics of each classification task during MTL training. Through a theoretical comparison with other techniques, we show that our approach offers superior scalability. Our experiments demonstrate improved performance and verify the method's effectiveness, even on an unprecedented number of tasks (up to 100 tasks on CIFAR100). Being the first to operate at this scale, we compare the resulting groupings against the superclass structure of CIFAR100 and find close agreement. Finally, we provide a modular implementation (https://github.com/ammarSherif/STG-MTL) for easier integration and testing, with examples from multiple datasets and tasks.
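To make the "Data Maps" features concrete: in the dataset-cartography formulation the abstract builds on, each example's map is summarized by its confidence (the mean probability assigned to the gold label across training epochs) and its variability (the standard deviation of that probability). The sketch below is an illustration of that computation, not the paper's actual implementation; the array layout and function name are assumptions.

```python
import numpy as np

def data_map_features(gold_probs: np.ndarray):
    """Compute per-example data-map features from training dynamics.

    gold_probs: array of shape (num_epochs, num_examples), where entry
    [e, i] is the model's probability for example i's gold label at
    epoch e. Returns (confidence, variability), each of shape
    (num_examples,).
    """
    confidence = gold_probs.mean(axis=0)   # mean gold-label probability
    variability = gold_probs.std(axis=0)   # std across epochs
    return confidence, variability

# Toy run: 3 epochs, 2 examples.
probs = np.array([[0.9, 0.2],
                  [0.7, 0.4],
                  [0.8, 0.6]])
conf, var = data_map_features(probs)
print(conf)  # [0.8 0.4]
```

A task-level feature vector can then be formed from these per-example statistics, so that tasks with similar training dynamics can be clustered into groups.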

Details

Title
STG-MTL: scalable task grouping for multi-task learning using data maps
Author
Ammar Sherif 1; Abubakar Abid 2; Mustafa Elattar 1; Mohamed ElHelw 1

1 Nile University, El Sheikh Zayed, Egypt
2 Hugging Face, 20 Jay Street, Brooklyn, NY 11201, United States of America
First page
025068
Publication year
2024
Publication date
Jun 2024
Publisher
IOP Publishing
e-ISSN
2632-2153
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3068452770
Copyright
© 2024 The Author(s). Published by IOP Publishing Ltd. This work is published under http://creativecommons.org/licenses/by/4.0 (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.