
Abstract

Adversarial attacks are actions that aim to mislead models by introducing subtle, often imperceptible changes to a model's input. Resilience against this kind of risk is essential for all task-specific Natural Language Processing (NLP) models. The current state-of-the-art approach to Named Entity Recognition (NER), one of the core NLP tasks, relies on transformer-based models, whereas earlier solutions were based on Conditional Random Fields (CRFs). This research investigates and compares the robustness of transformer-based and CRF-based NER models against adversarial attacks. By subjecting these models to carefully crafted perturbations, we seek to understand how well they withstand attempts to manipulate their input and compromise their performance. This comparative analysis provides valuable insights into the strengths and weaknesses of each architecture, shedding light on the most effective strategies for enhancing the security and reliability of NER systems.
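
To make the abstract's notion of a subtle, often imperceptible perturbation concrete, below is a minimal sketch of one generic character-level attack on NER input: an adjacent-character swap inside tokens. This is an illustrative assumption, not the perturbation method evaluated in the paper; all function names are hypothetical.

import random

def perturb_token(token: str, rng: random.Random) -> str:
    # Swap two adjacent interior characters: a typo-style edit that a
    # human reader barely notices but that changes the token string.
    if len(token) < 4:
        return token
    i = rng.randrange(1, len(token) - 2)
    chars = list(token)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def perturb_sentence(sentence: str, rate: float = 0.3, seed: int = 0) -> str:
    # Perturb a fraction of whitespace-separated tokens, leaving the
    # sentence readable while altering its surface form.
    rng = random.Random(seed)
    return " ".join(
        perturb_token(t, rng) if rng.random() < rate else t
        for t in sentence.split()
    )

if __name__ == "__main__":
    original = "Barack Obama visited Warsaw in 2016 ."
    perturbed = perturb_sentence(original, rate=0.5)
    print(original)
    print(perturbed)
    # The perturbed sentence is still readable to a human, yet an NER
    # tagger may no longer label the PERSON or LOC spans correctly.

Feeding both the original and the perturbed sentence to a NER model and comparing the predicted entity spans is the basic pattern behind the kind of robustness comparison the abstract describes.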

Details

Title
Resilience of Named Entity Recognition models against adversarial attacks
Volume
71
Issue
3
Pages
1–6
Number of pages
7
Publication year
2025
Publication date
2025
Publisher
Polish Academy of Sciences
Place of publication
Warsaw
Country of publication
Poland
ISSN
2081-8491
e-ISSN
2300-1933
Source type
Scholarly Journal
Language of publication
English
Document type
Journal Article
Publication history
Online publication date
2025-07-11
First posting date
11 Jul 2025
ProQuest document ID
3232456790
Document URL
https://www.proquest.com/scholarly-journals/resilience-named-entity-recognition-models/docview/3232456790/se-2?accountid=208611
Copyright
© 2025. This work is licensed under https://creativecommons.org/licenses/by-sa/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2025-10-06
Database
ProQuest One Academic