You can use AI to sniff out AI!
GPTZero, the startup behind an artificial intelligence (AI) detector that checks for large language model (LLM)-generated content, has found that 50 peer-reviewed submissions to the International Conference on Learning Representations (ICLR) contain at least one obviously hallucinated citation, that is, a citation dreamed up by an AI. ICLR is the leading academic conference focused on the deep-learning branch of AI.
The three authors behind the investigation, all based in Toronto, ran their Hallucination Check tool on 300 papers submitted to the conference. According to the report, 50 of those submissions included at least one “obvious” hallucination. Each submission had been reviewed by three to five peer experts, “most of whom missed the fake citations.” Some of the fabricated citations named non-existent authors, some were attributed to the wrong journals, and some matched no real publication at all.
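The last category, citations that match no real publication, is something you can approximate yourself by querying a bibliographic database. The sketch below is a minimal illustration, not GPTZero’s actual method: it sends each cited title to the public Crossref search API and flags citations whose best match differs too much from the claimed title. The check_citation helper and the similarity threshold are hypothetical choices for this example; a real detector would also verify authors, venue, and year.

```python
import requests
from difflib import SequenceMatcher

CROSSREF = "https://api.crossref.org/works"

def check_citation(title: str, threshold: float = 0.85) -> bool:
    """Return True if some real publication closely matches `title`.

    A minimal sketch: queries Crossref's public search API and compares
    the claimed title against the top hits. The 0.85 threshold is an
    arbitrary illustrative choice, not a tuned value.
    """
    resp = requests.get(
        CROSSREF,
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    for item in items:
        for candidate in item.get("title", []):
            score = SequenceMatcher(
                None, title.lower(), candidate.lower()
            ).ratio()
            if score >= threshold:
                return True  # a plausible real match exists
    return False  # no close match: possibly hallucinated

if __name__ == "__main__":
    # A real paper should pass; an invented title (made up here
    # purely for demonstration) should not.
    print(check_citation("Attention Is All You Need"))
    print(check_citation("Deep Recursive Hallucination Networks for ICLR"))
```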
The report notes that, without intervention, the papers were rated highly enough that they “would almost certainly have been published.”
It’s worse than it may sound at first. One-sixth of the papers in this sample had citations invented by an AI, and citations are the foundation of the work a paper describes. The authors of those papers apparently didn’t do the background reading for their research; they just slapped on a list of invented work to make themselves look like serious scholars. They clearly aren’t.
The good news is that GPTZero got a legitimate citation out of it!