Abstract
Detecting informative tweets is very important to government and non-government organizations during a disaster. Most existing work focuses on either text or images alone to identify informative tweets; only a few studies use multi-modal information, i.e., both image and text. However, existing multi-modal methods perform poorly on informative tweets, risking the loss of useful information at critical times. Hence, we propose a novel approach to identifying multi-modal informative tweets during a disaster. The proposed method uses the pre-trained RoBERTa and VGG-16 models to extract text and image features, respectively, and combines the outputs of these two models using a multiplicative fusion technique. Experiments are conducted on diverse disaster datasets: Hurricane Maria, Hurricane Harvey, the California wildfires, the Iraq-Iran earthquake, Hurricane Irma, and the Mexico earthquake. Experimental results demonstrate that the proposed method outperforms existing baseline methods across several evaluation metrics.
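To illustrate the fusion step, here is a minimal PyTorch sketch of multiplicative fusion of text and image features. The dimensions are assumptions, not the authors' exact configuration: 768 for RoBERTa's pooled output, 4096 for VGG-16's fc7 feature, and a hypothetical 512-dim shared space; the class name and projection layers are illustrative.

```python
import torch
import torch.nn as nn

class MultiplicativeFusionClassifier(nn.Module):
    """Fuses text and image features by element-wise multiplication.

    Dimensions are assumptions: 768 (RoBERTa pooled output),
    4096 (VGG-16 fc7), 512 (shared fusion space).
    """
    def __init__(self, text_dim=768, image_dim=4096, fused_dim=512, num_classes=2):
        super().__init__()
        # Project both modalities into a common space so they can be multiplied.
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.image_proj = nn.Linear(image_dim, fused_dim)
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, text_feat, image_feat):
        t = torch.relu(self.text_proj(text_feat))
        v = torch.relu(self.image_proj(image_feat))
        fused = t * v  # multiplicative (element-wise) fusion
        return self.classifier(fused)

# Stand-in features; in practice these would come from the pre-trained
# RoBERTa and VGG-16 backbones applied to the tweet text and image.
text_feat = torch.randn(8, 768)
image_feat = torch.randn(8, 4096)
logits = MultiplicativeFusionClassifier()(text_feat, image_feat)
print(logits.shape)  # torch.Size([8, 2])
```

Element-wise multiplication forces the classifier to rely on features that are jointly active in both modalities, which is one common rationale for multiplicative over additive (concatenation) fusion.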
Details
1 Centific, Hyderabad, India
2 National Institute of Technology, Tiruchirappalli, India (GRID:grid.419653.c) (ISNI:0000 0004 0635 4862)
3 Woosong University, Endicott College of International Studies, Daejeon, South Korea (GRID:grid.457406.4) (ISNI:0000 0004 0590 5343); Jio Platforms Limited, Hyderabad, India (GRID:grid.457406.4)





