Abstract

Choosing a suitable deep learning architecture for multimodal data fusion is a challenging task, as it requires the effective integration and processing of diverse data types, each with distinct structures and characteristics. In this paper, we introduce MixMAS, a novel framework for sampling-based mixer architecture search tailored to multimodal learning. Our approach automatically selects the optimal MLP-based architecture for a given multimodal machine learning (MML) task. Specifically, MixMAS utilizes a sampling-based micro-benchmarking strategy to explore various combinations of modality-specific encoders, fusion functions, and fusion networks, systematically identifying the architecture that performs best on the task's evaluation metrics.
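The abstract's search procedure can be illustrated with a minimal sketch: define a discrete search space over modality-specific encoders, fusion functions, and fusion networks; sample candidate configurations; micro-benchmark each one briefly; and keep the best scorer. All component names and the scoring function below are hypothetical stand-ins, not taken from the paper, and the benchmark is a deterministic pseudo-score so the sketch runs without any ML framework.

```python
import hashlib
import itertools
import random

# Hypothetical search space (illustrative names, not from the paper).
SEARCH_SPACE = {
    "text_encoder": ["mlp_mixer_small", "mlp_mixer_base"],
    "image_encoder": ["mlp_mixer_small", "mlp_mixer_base"],
    "fusion_function": ["concat", "sum", "mean"],
    "fusion_network": ["mlp_2layer", "mlp_4layer"],
}

def micro_benchmark(config):
    """Stand-in for briefly training a sampled architecture on a data
    subset and returning its validation score. Here: a deterministic
    pseudo-score derived from a hash of the configuration."""
    key = repr(sorted(config.items())).encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") / 2**32

def sample_configs(space, n_samples, seed=0):
    """Randomly sample up to n_samples distinct configurations."""
    rng = random.Random(seed)
    all_configs = [dict(zip(space, combo))
                   for combo in itertools.product(*space.values())]
    return rng.sample(all_configs, min(n_samples, len(all_configs)))

def search(space, n_samples=8, seed=0):
    """Sampling-based search: micro-benchmark each sampled
    configuration and return the best-scoring one."""
    candidates = sample_configs(space, n_samples, seed)
    return max(candidates, key=micro_benchmark)

best = search(SEARCH_SPACE)
print(best)
```

In a real system the micro-benchmark would be a short training run on a subset of the data, and the winning configuration would then be trained to convergence; the sampling step keeps the search tractable compared with exhaustively benchmarking every combination.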

Details

Identifier / keyword
1009240
Title
MixMAS: A Framework for Sampling-Based Mixer Architecture Search for Multimodal Fusion and Learning
Publication title
arXiv.org; Ithaca
Publication year
2024
Publication date
Dec 24, 2024
Section
Computer Science
Publisher
Cornell University Library, arXiv.org
Source
arXiv.org
Place of publication
Ithaca
Country of publication
United States
University/institution
Cornell University Library, arXiv.org
e-ISSN
2331-8422
Source type
Working Paper
Language of publication
English
Document type
Working Paper
Publication history
Online publication date
2024-12-25
Milestone dates
2024-12-24 (Submission v1)
First posting date
25 Dec 2024
ProQuest document ID
3149106835
Document URL
https://www.proquest.com/working-papers/mixmas-framework-sampling-based-mixer/docview/3149106835/se-2?accountid=208611
Copyright
© 2024. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2024-12-26
Database
ProQuest One Academic