Sarcasm detection with BERT
Overview
published in
publication date
- September 2021
start page
- 13
end page
- 25
issue
- 67
Digital Object Identifier (DOI)
full text
International Standard Serial Number (ISSN)
- 1135-5948
Electronic International Standard Serial Number (EISSN)
- 1989-7553
abstract
- Sarcasm is often used to humorously criticize something or to hurt someone's feelings. Even humans often have difficulty recognizing sarcastic comments, since a sarcastic speaker says the opposite of what they really mean. Automatic sarcasm detection in textual data is therefore one of the most challenging tasks in Natural Language Processing (NLP), and it has become a relevant research area due to its importance for improving sentiment analysis. In this work, we explore several deep learning models, such as Bidirectional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT), to address the task of sarcasm detection. While most research has been conducted on social media data, we evaluate our models on a news headlines dataset. To the best of our knowledge, this is the first study that applies BERT to detect sarcasm in texts that do not come from social media. Experimental results show that the BERT-based approach outperforms the state of the art on this type of dataset.
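- The sketch below illustrates the general idea described in the abstract: fine-tuning a pretrained BERT model as a binary classifier over news headlines. It is not the authors' implementation; the checkpoint name (bert-base-uncased), the toy headlines, the labels, and the hyperparameters are illustrative assumptions, and it uses the Hugging Face Transformers API rather than the paper's exact setup.

```python
# Minimal sketch (not the paper's code): fine-tune a pretrained BERT model
# for binary sarcasm classification on news headlines.
# Checkpoint, toy data, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = not sarcastic, 1 = sarcastic
)

# Toy headline/label pairs standing in for a real news-headlines corpus.
headlines = [
    "scientists discover evidence of water on mars",
    "area man heroically finishes to-do list by throwing it away",
]
labels = torch.tensor([0, 1])

# Tokenize headlines into padded input IDs and attention masks.
encodings = tokenizer(
    headlines, padding=True, truncation=True, max_length=64, return_tensors="pt"
)

# Standard fine-tuning loop: cross-entropy loss over the two classes.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    optimizer.zero_grad()
    outputs = model(**encodings, labels=labels)
    outputs.loss.backward()
    optimizer.step()

# Inference: predicted class per headline.
model.eval()
with torch.no_grad():
    preds = model(**encodings).logits.argmax(dim=-1)
print(preds.tolist())
```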
Classification
subjects
- Computer Science
keywords
- sarcasm detection; deep learning; BiLSTM; BERT