Abstract
Sarcasm is the acerbic use of words to mock someone or something, usually in a satirical way; it wields ridicule harshly, often crudely and contemptuously, for destructive purposes. Extracting the actual sentiment of a sentence in a code-mixed language is difficult because sufficient cues for sarcasm are rarely available. In this work, we propose a model that stacks Bidirectional Encoder Representations from Transformers (BERT) with Long Short-Term Memory (LSTM), referred to as BERT-LSTM. A pre-trained BERT model creates embeddings for the code-mixed dataset, and a single-layer LSTM network uses these embedding vectors to classify each sentence as sarcastic or non-sarcastic. Experiments show that the proposed BERT-LSTM model detects sarcastic sentences more effectively than other models on the code-mixed dataset, improving the F1-score by up to 6%.
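The architecture described above (pre-trained BERT embeddings fed into a single-layer LSTM with a binary sarcastic/non-sarcastic head) can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation; the hidden size (128) is an assumption, and a random tensor stands in for the actual BERT encoder output (dimension 768 for BERT-base).

```python
import torch
import torch.nn as nn

class BertLSTM(nn.Module):
    """Single-layer LSTM over BERT token embeddings, with a
    binary sarcastic/non-sarcastic classification head.
    Hidden size 128 is illustrative, not from the paper."""

    def __init__(self, bert_dim=768, lstm_hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(bert_dim, lstm_hidden,
                            num_layers=1, batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, 2)

    def forward(self, bert_embeddings):
        # bert_embeddings: (batch, seq_len, 768), produced by a
        # pre-trained BERT encoder applied to the code-mixed text
        _, (h_n, _) = self.lstm(bert_embeddings)
        # h_n[-1]: final hidden state of the (single) LSTM layer
        return self.classifier(h_n[-1])  # (batch, 2) logits

# Stand-in for BERT output: batch of 4 sentences, 16 tokens each
dummy_embeddings = torch.randn(4, 16, 768)
logits = BertLSTM()(dummy_embeddings)
print(logits.shape)  # torch.Size([4, 2])
```

In practice the embeddings would come from a pre-trained BERT model (e.g. via the Hugging Face `transformers` library), and the logits would be trained with cross-entropy against sarcastic/non-sarcastic labels.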
1 National Institute of Technology Patna, Department of Computer Science and Engineering, Patna, India (GRID:grid.444650.7) (ISNI:0000 0004 1772 7273)





