Adversarial Attack on NLP
Last modified: 2023-10-05
Adversarial examples cause NLP models to make incorrect predictions: small, meaning-preserving perturbations to the input text flip the model's output.
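As a toy illustration of the idea (not TextAttack itself), a greedy word-substitution attack can be sketched against a made-up keyword classifier. The synonym table, classifier, and `attack` helper below are all hypothetical; real attacks like TextFooler instead score candidate swaps by how much they reduce the victim model's confidence.

```python
# Toy sketch: greedy synonym-swap attack on a hypothetical keyword
# classifier. Everything here is illustrative, not a real NLP model.

SYNONYMS = {
    "love": ["like"],
    "great": ["fine"],
    "terrible": ["poor"],
}

POSITIVE_WORDS = {"love", "great", "excellent"}

def predict(text):
    """Toy classifier: 'positive' if any positive keyword appears."""
    return "positive" if any(
        w in POSITIVE_WORDS for w in text.lower().split()
    ) else "negative"

def attack(text):
    """Greedily swap words for synonyms until the prediction flips."""
    original = predict(text)
    words = text.split()
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w.lower(), []):
            words = words[:i] + [syn] + words[i + 1:]  # accept the swap
            if predict(" ".join(words)) != original:
                return " ".join(words)  # adversarial example found
            break
    return None  # attack failed

print(attack("I love this great movie"))
```

Swapping "love" alone does not flip the toy label, so the attack keeps going until "great" is also replaced; the returned sentence reads naturally but is classified differently.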
Automation
Using TextAttack
TextAttack is a Python framework for adversarial attacks, data augmentation, and model training in NLP.
# TextFooler
textattack attack --model bert-base-uncased-mr --recipe textfooler --num-examples 100
# DeepWordBug
textattack attack --model distilbert-base-uncased-cola --recipe deepwordbug --num-examples 100
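Unlike TextFooler's word-level substitutions, DeepWordBug works at the character level, applying small edits (swaps, deletions, insertions) to the words it scores as most important. A self-contained sketch of that kind of character-level perturbation, using word length as a crude stand-in for DeepWordBug's importance scoring (the helpers here are illustrative, not TextAttack's implementation):

```python
import random

def swap_adjacent(word, rng):
    """Swap two adjacent characters, e.g. 'movie' -> 'mvoie'."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def perturb(text, n_words=2, seed=0):
    """Character-level perturbation of the n_words longest words
    (a toy proxy for DeepWordBug's word-importance ranking)."""
    rng = random.Random(seed)
    words = text.split()
    targets = sorted(range(len(words)), key=lambda i: -len(words[i]))[:n_words]
    for i in targets:
        words[i] = swap_adjacent(words[i], rng)
    return " ".join(words)

print(perturb("this movie was absolutely wonderful"))
```

Such typo-like edits tend to push tokens out of a subword model's vocabulary while staying readable to humans, which is why character-level recipes remain effective against classifiers like the CoLA model attacked above.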