The Impact of Pretrained Language Models on Negation and Speculation Detection in Cross-Lingual Medical Text: Comparative Study
Overview
published in
- JMIR Medical Informatics
publication date
- December 2020
start page
- 1
end page
- 21
issue
- 12
volume
- 8
Digital Object Identifier (DOI)
International Standard Serial Number (ISSN)
- 2291-9694
abstract
- Negation and speculation are critical elements in natural language processing (NLP)-related tasks, such as information extraction, as these phenomena change the truth value of a proposition. In the clinical narrative, which is informal, these linguistic facts are used extensively with the objective of indicating hypotheses, impressions, or negative findings. Previous state-of-the-art approaches addressed negation and speculation detection tasks using rule-based methods, but in the last few years, models based on machine learning and deep learning exploiting morphological, syntactic, and semantic features represented as sparse and dense vectors have emerged. However, although such named entity recognition (NER) methods employ a broad set of features, they are limited to existing pretrained models for a specific domain or language...
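  For illustration only (not taken from the article): the abstract frames negation and speculation detection as a token-level tagging task over a pretrained multilingual encoder. The minimal Python sketch below shows that framing with the Hugging Face Transformers library; the checkpoint, the BIO label set, and the example sentence are assumptions rather than the authors' setup, and the classification head is untrained here, so its predictions are meaningless until fine-tuned on annotated clinical text.

      # Illustrative sketch: negation/speculation cue detection as BIO token classification.
      # Checkpoint and label set are assumptions; the head is randomly initialized (untrained).
      import torch
      from transformers import AutoTokenizer, AutoModelForTokenClassification

      LABELS = ["O", "B-NEG", "I-NEG", "B-SPEC", "I-SPEC"]  # assumed BIO tag scheme

      tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
      model = AutoModelForTokenClassification.from_pretrained(
          "bert-base-multilingual-cased", num_labels=len(LABELS)
      )

      sentence = "No evidence of pneumonia; findings may suggest early infection."
      inputs = tokenizer(sentence, return_tensors="pt")

      with torch.no_grad():
          logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

      # Assign each subword token the highest-scoring tag.
      pred_ids = logits.argmax(dim=-1)[0].tolist()
      tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
      for token, label_id in zip(tokens, pred_ids):
          print(f"{token}\t{LABELS[label_id]}")

  In practice the same structure would be fine-tuned on a corpus annotated with negation and speculation cues and scopes before the predicted tags carry any meaning.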
Classification
keywords
- clinical text; contextual information; deep learning; long short-term memory; natural language processing