Abstract
The role of social media in crisis response and recovery is becoming increasingly prominent due to the rapid progression of information and communication technologies. This study introduces an approach for extracting valuable information from the enormous volume of user-generated content on social media, focusing specifically on tweets that can significantly aid emergency response and recovery efforts. Identifying informative tweets allows emergency personnel to gain a more comprehensive understanding of crisis situations, thereby facilitating the deployment of more effective recovery strategies. Previous studies have largely focused on either the textual content or the accompanying visual elements of tweets. However, evidence suggests a complementary relationship between text and visuals, offering an opportunity for synergistic insights. In response, a novel deep learning framework is proposed that concurrently analyses the textual and visual components of user-generated tweets. The central architecture integrates established methodologies: RoBERTa for text analysis, a Vision Transformer for image understanding, a Bi-LSTM for sequence processing, and an attention mechanism for context awareness. The innovation of this approach lies in its emphasis on multimodal fusion, introducing rank fusion techniques to combine the strengths of textual and visual inputs. The proposed methodology is tested extensively across seven diverse datasets representing natural disasters such as wildfires, hurricanes, earthquakes, and floods. The experimental results demonstrate superior performance compared to several existing methods, with accuracy ranging from 94% to 98%. These findings underscore the efficacy of the proposed deep learning classifier in leveraging interactions across multiple modalities. In summary, this study contributes to disaster management by promoting a comprehensive approach that exploits the potential of multimodal data, thereby enhancing decision-making in emergency scenarios.
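To make the described architecture concrete, the following is a minimal illustrative sketch of a text-plus-image classifier combining RoBERTa, a Vision Transformer, a Bi-LSTM, and attention. It assumes Hugging Face transformers checkpoints ("roberta-base", "google/vit-base-patch16-224"), mean pooling, and arbitrary hidden sizes; the class name, layer dimensions, attention placement, and pooling choice are assumptions, not the paper's exact configuration, and the rank fusion step is not reproduced here since the abstract does not specify it.

```python
# Illustrative sketch only; the paper's exact configuration may differ.
import torch
import torch.nn as nn
from transformers import RobertaModel, ViTModel

class MultimodalTweetClassifier(nn.Module):
    def __init__(self, hidden=256, num_classes=2):
        super().__init__()
        # Pretrained encoders for each modality (768-d token/patch embeddings).
        self.text_encoder = RobertaModel.from_pretrained("roberta-base")
        self.image_encoder = ViTModel.from_pretrained("google/vit-base-patch16-224")
        # Bi-LSTM over the RoBERTa token sequence (assumed placement).
        self.bilstm = nn.LSTM(768, hidden, batch_first=True, bidirectional=True)
        # Attention lets text tokens attend to image patch embeddings.
        self.img_proj = nn.Linear(768, 2 * hidden)
        self.cross_attn = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                                batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask, pixel_values):
        # Encode tweet text and attached image independently.
        txt = self.text_encoder(input_ids=input_ids,
                                attention_mask=attention_mask).last_hidden_state
        img = self.image_encoder(pixel_values=pixel_values).last_hidden_state
        # Sequence processing over text, then cross-modal attention fusion.
        seq, _ = self.bilstm(txt)                      # (B, T, 2*hidden)
        img_k = self.img_proj(img)                     # (B, P, 2*hidden)
        fused, _ = self.cross_attn(seq, img_k, img_k)  # text attends to image
        pooled = fused.mean(dim=1)                     # mean pooling (assumption)
        return self.classifier(pooled)                 # informative vs. not
```

In a setup like this, each tweet's tokenized text and preprocessed image pass through their respective encoders, and the attention layer supplies the cross-modal context awareness before classification; a rank fusion scheme, as described in the abstract, would additionally combine per-modality predictions rather than relying on a single fused head.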