Sure! Here’s a Short and Concise Title for Your Paper: “Contamination in Generated Text Detection Benchmarks”
Dingfelder P, Rieß C (2026)
Publication Type: Conference contribution
Publication year: 2026
Publisher: Springer Science and Business Media Deutschland GmbH
Book Volume: 16244 LNCS
Pages Range: 250-262
Conference Proceedings Title: Lecture Notes in Computer Science
Event location: Be'er Sheva, ISR
ISBN: 9783032107589
DOI: 10.1007/978-3-032-10759-6_16
Large language models are increasingly used for many applications. To prevent illicit use, it is desirable to be able to detect AI-generated text. Training and evaluation of such detectors critically depend on suitable benchmark datasets. Several groups have taken on the tedious work of collecting, curating, and publishing large and diverse datasets for this task. However, it remains an open challenge to ensure high quality in all relevant aspects of such a dataset. For example, the DetectRL benchmark exhibits relatively simple patterns of AI generation in 98.5% of the Claude-LLM data. These patterns include introductory phrases such as “Sure! Here is the academic article abstract:”, or instances where the LLM rejects the prompted task. In this work, we demonstrate that detectors trained on such data rely on these patterns as shortcuts, which facilitates spoofing attacks on the trained detectors. We consequently reprocessed the DetectRL dataset with several cleansing operations. Experiments show that this data cleansing makes direct attacks more difficult. The reprocessed dataset is publicly available.
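The abstract describes cleansing operations that strip introductory boilerplate (e.g. “Sure! Here is the academic article abstract:”) and discard samples where the LLM refuses the task. The following is a minimal sketch of what such a step could look like; the regex, the refusal marker list, and the `cleanse` function are illustrative assumptions, not the authors' actual pipeline.

```python
import re

# Assumed, illustrative patterns; the paper does not specify the exact rules
# used to produce the reprocessed DetectRL dataset.
PREFIX_PATTERN = re.compile(
    r"^\s*(sure|certainly|of course)[!,.]?\s+here('s| is)\b[^:]*:\s*",
    re.IGNORECASE,
)
REFUSAL_MARKERS = (
    "i cannot",
    "i can't",
    "i'm sorry, but",
    "as an ai language model",
)


def cleanse(samples):
    """Strip boilerplate openings and drop refusal-style responses."""
    cleaned = []
    for text in samples:
        stripped = PREFIX_PATTERN.sub("", text).lstrip()
        lowered = stripped.lower()
        # Discard samples where the model appears to decline the prompted task.
        if any(marker in lowered[:200] for marker in REFUSAL_MARKERS):
            continue
        cleaned.append(stripped)
    return cleaned


if __name__ == "__main__":
    demo = [
        "Sure! Here is the academic article abstract: The study examines ...",
        "I'm sorry, but I can't help with that request.",
        "The study examines detector robustness under paraphrasing attacks.",
    ]
    print(cleanse(demo))
```

A filter like this removes the surface-level shortcuts the abstract warns about, so a detector trained on the cleansed data must rely on properties of the generated text itself rather than on prompt-response boilerplate.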
APA:
Dingfelder, P., & Rieß, C. (2026). Sure! Here’s a Short and Concise Title for Your Paper: “Contamination in Generated Text Detection Benchmarks”. In Adi Akavia, Shlomi Dolev, Anna Lysyanskaya, Rami Puzis (Eds.), Lecture Notes in Computer Science (pp. 250-262). Be'er Sheva, ISR: Springer Science and Business Media Deutschland GmbH.
MLA:
Dingfelder, Philipp, and Christian Rieß. "Sure! Here’s a Short and Concise Title for Your Paper: “Contamination in Generated Text Detection Benchmarks”." Proceedings of the 9th International Symposium on Cyber Security, Cryptology, and Machine Learning, CSCML 2025, Be'er Sheva, ISR. Ed. Adi Akavia, Shlomi Dolev, Anna Lysyanskaya, Rami Puzis, Springer Science and Business Media Deutschland GmbH, 2026. 250-262.