DAMAGE: Detecting Adversarially Modified AI Generated Text
AI humanizers are a new class of online software tools that paraphrase and
rewrite AI-generated text so that it can evade AI detection software. We study
19 AI humanizer and paraphrasing tools and qualitatively assess their effects,
including how faithfully they preserve the meaning of the original text. We
show that many existing AI detectors fail to detect humanized text. We then
demonstrate a robust model that can detect humanized AI text while maintaining
a low false positive rate, using a data-centric augmentation approach. Finally,
we attack our own detector by training a fine-tuned model optimized against the
detector's predictions, and show that the detector's cross-humanizer
generalization is sufficient to remain robust to this attack.
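To make the data-centric augmentation idea from the abstract concrete, the sketch below labels both raw AI-generated text and its humanizer-paraphrased variants as "AI" before fitting a classifier, so the detector is exposed to humanized examples during training. This is a minimal illustration only: the corpora, the `humanizers` callables, and the TF-IDF plus logistic-regression classifier are placeholder assumptions, not the paper's actual data, tools, or detector architecture.

```python
# Minimal sketch of data-centric augmentation for a humanizer-robust detector.
# Placeholder corpora and humanizer callables stand in for the paper's data
# and tools; a TF-IDF + logistic-regression pipeline stands in for its model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["I wrote this essay myself about my summer trip."]   # placeholder
ai_texts = ["Here is a concise summary of the requested topic."]    # placeholder
humanizers = []  # hypothetical list of callables: str -> paraphrased str

# Augmentation step: every humanized variant keeps the "AI" label (1),
# so paraphrasing does not flip the training signal.
augmented_ai = list(ai_texts)
for humanize in humanizers:
    augmented_ai.extend(humanize(t) for t in ai_texts)

X = human_texts + augmented_ai
y = [0] * len(human_texts) + [1] * len(augmented_ai)

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(X, y)

# Score a new document: probability that it is (possibly humanized) AI text.
print(detector.predict_proba(["Some document to check."])[:, 1])
```

In the setting the abstract describes, the same recipe would presumably be applied with many humanizers and a stronger model, with some humanizers held out of training to measure cross-humanizer generalization.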
| Reference Key | spero2025damage |
|---|---|
| Authors | Elyas Masrour; Bradley Emi; Max Spero |
| Journal | arXiv |
| Year | 2025 |
| DOI | Not found |
| URL | |
| Keywords | |

Use this key to autocite the article in SciMatic Manuscript Manager or Thesis Manager.