
Abstract

With the advance of social networks, the emergence of fake news has become a major threat to information security, privacy, and trustworthiness. Fake news can leverage multimedia content to fabricate evidence or mislead readers, causing substantial harm to machine learning and network systems. In this work, we explore the task of multimodal fake news detection. The major challenge of fake news detection stems from fusing the abundant information across modalities. To overcome the limitations of current models, we tackle the challenge of learning correlations between modalities in news and propose a mutual attention neural network (MANN) that can learn the relationship between each pair of modalities. Our model consists of four components: a multimodal feature extractor, mutual attention fusion, a fake news detector, and an irrelevant event discriminator. The performance of the proposed architecture is evaluated on the Weibo dataset, and the results indicate that the MANN model outperforms the state of the art.
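The abstract does not give the exact formulation of the mutual attention fusion component, but the general idea of cross-modal attention can be sketched as follows. This is a minimal illustration, assuming a standard scaled dot-product attention in which text features attend over image features and vice versa before the two attended representations are fused; all function names, shapes, and the mean-pooling fusion step are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mutual_attention_fusion(text, image):
    """Sketch of cross-modal (mutual) attention between two modalities.

    text:  (n_words, d)  word-level text features
    image: (n_regions, d) region-level image features
    Returns a fused vector of length 2*d.
    """
    d = text.shape[-1]
    # Text queries attend over image keys/values.
    t2i = softmax(text @ image.T / np.sqrt(d)) @ image
    # Image queries attend over text keys/values.
    i2t = softmax(image @ text.T / np.sqrt(d)) @ text
    # Assumed fusion: mean-pool each attended sequence, then concatenate.
    return np.concatenate([t2i.mean(axis=0), i2t.mean(axis=0)])

rng = np.random.default_rng(0)
text_feats = rng.standard_normal((12, 64))   # e.g. 12 word embeddings
image_feats = rng.standard_normal((49, 64))  # e.g. 7x7 grid of region features
fused = mutual_attention_fusion(text_feats, image_feats)
print(fused.shape)  # (128,)
```

In the full architecture described by the abstract, a fused representation like this would be passed to the fake news detector, while the irrelevant event discriminator would encourage event-invariant features.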

Details

Title
A mutual attention based multimodal fusion for fake news detection on social network
Author
Guo, Ying 1 

 North China University of Technology, Department of Computer Science, Beijing, People’s Republic of China (GRID:grid.440852.f) (ISNI:0000 0004 1789 9542) 
Pages
15311-15320
Publication year
2023
Publication date
Jun 2023
Publisher
Springer Nature B.V.
ISSN
0924-669X
e-ISSN
1573-7497
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2821148224
Copyright
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.