eKNOW 2024, The Sixteenth International Conference on Information, Process, and Knowledge Management


Intermediate-Task Transfer Learning: Leveraging Sarcasm Detection for Stance Detection

Authors:
Gibson Nkhata
Susan Gauch

Keywords: Stance detection; sarcasm detection; transfer learning; BERT; RoBERTa.

Abstract:
Stance Detection (SD) on social media has emerged as a prominent research area with social, business, and political applications, attracting growing attention within Natural Language Processing (NLP). The subtlety, nuance, and complexity of crowdsourced texts from online platforms make it difficult for SD algorithms to accurately discern an author's stance. In particular, sarcastic and figurative language drastically degrades the performance of SD models. This paper addresses that challenge by employing sarcasm detection as an intermediate task in transfer learning tailored for SD. The proposed methodology fine-tunes BERT and RoBERTa and sequentially concatenates convolutional, bidirectional LSTM, and dense layers. Rigorous experiments on publicly available benchmark datasets evaluate the transfer-learning framework, and its performance is compared against various state-of-the-art (SOTA) baselines for SD, providing empirical evidence of its effectiveness. Notably, our model outperforms the best SOTA models by average F1-score margins of 0.038 and 0.053 on the SemEval 2016 Task 6A dataset (SemEval) and the Multi-Perspective Consumer Health Query dataset (MPCHI), respectively, even before sarcasm-detection pre-training. Integrating sarcasm knowledge into the model proves instrumental in reducing misclassifications of sarcastic texts in SD: our model correctly predicts 85% of the texts previously misclassified by the model without sarcasm-detection pre-training, further raising its average F1-score. Our experiments also reveal that the success of the transfer-learning framework depends on the correlation of lexical attributes between the intermediate task (sarcasm detection) and the target task (SD).
This study represents the first exploration of sarcasm detection as an intermediate transfer-learning task for SD, and it simultaneously exploits the concatenation of BERT or RoBERTa with other deep-learning techniques, establishing the proposed approach as a foundational baseline for future research in this domain.
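The abstract describes stacking convolutional, bidirectional LSTM, and dense layers on top of a fine-tuned BERT or RoBERTa encoder. A minimal sketch of such a head is shown below; the layer sizes, class count, and the random tensor standing in for encoder hidden states are illustrative assumptions, not the paper's actual hyperparameters:

```python
import torch
import torch.nn as nn

class ConvBiLSTMHead(nn.Module):
    """Hypothetical classification head: Conv -> BiLSTM -> Dense,
    applied to BERT/RoBERTa token-level hidden states."""

    def __init__(self, hidden=768, conv_channels=128, lstm_hidden=64, num_classes=3):
        super().__init__()
        # 1-D convolution over the token dimension of the encoder output
        self.conv = nn.Conv1d(hidden, conv_channels, kernel_size=3, padding=1)
        # Bidirectional LSTM over the convolved token features
        self.lstm = nn.LSTM(conv_channels, lstm_hidden,
                            batch_first=True, bidirectional=True)
        # Dense layer mapping the final BiLSTM state to stance classes
        self.fc = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, encoder_states):
        # encoder_states: (batch, seq_len, hidden), e.g., the encoder's last layer
        x = self.conv(encoder_states.transpose(1, 2)).relu()  # (B, C, T)
        x, _ = self.lstm(x.transpose(1, 2))                   # (B, T, 2*H)
        return self.fc(x[:, -1, :])                           # (B, num_classes)

head = ConvBiLSTMHead()
# Random tensor as a stand-in for real BERT/RoBERTa hidden states
logits = head(torch.randn(2, 16, 768))
```

In the intermediate-task setting, a head like this would first be trained with the encoder on sarcasm detection, then fine-tuned on the target SD datasets.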

Pages: 7 to 14

Copyright: Copyright (c) IARIA, 2024

Publication date: May 26, 2024

Published in: conference

ISSN: 2308-4375

ISBN: 978-1-68558-165-7

Location: Barcelona, Spain

Dates: from May 26, 2024 to May 30, 2024