Mediahuis fellow Peter Vandermeersch admitted using ChatGPT, Perplexity, and NotebookLM to summarise reports without verifying AI-generated quotes, publishing dozens of fabricated attributions.
Peter Vandermeersch, a senior fellow at European publisher Mediahuis and former editor-in-chief of NRC, has been suspended from his fellowship role after admitting he used AI tools, including ChatGPT, Perplexity, and Google's NotebookLM, to summarise reports for his Substack newsletter without fact-checking the outputs. An investigation by NRC found 'dozens' of false quotes, and seven named individuals confirmed they never made the statements attributed to them. Vandermeersch publicly acknowledged the fabrications, saying the quotes were 'so good' they were 'irresistible': a textbook hallucination trap.
This is a high-profile case study in what happens when an AI summarisation pipeline has no verification layer. The core failure was treating LLMs like ChatGPT and NotebookLM as trusted extraction tools rather than drafting assistants: the models generated plausible-sounding quotes that never existed. If your product surfaces AI-generated text attributed to real people or sources, you carry the same liability surface.
If your app generates summaries or quotes from documents, test your pipeline this week: feed a known source into your LLM and compare every attributed quote against the original. Measure your hallucination rate before a user measures it for you, publicly.
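A minimal sketch of such a check in Python, assuming attributed quotes appear inside straight or curly double quotation marks and that a verbatim match against the normalised source is an acceptable first pass (the function names here are illustrative, not from any particular pipeline):

```python
import re
import unicodedata


def normalise(text: str) -> str:
    """Lower-case, normalise Unicode, straighten smart quotes, collapse whitespace."""
    text = unicodedata.normalize("NFKD", text)
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    return re.sub(r"\s+", " ", text).strip().lower()


def extract_quotes(summary: str) -> list[str]:
    """Pull out anything the summary presents inside double quotation marks."""
    raw = re.findall(r'"([^"]+)"', summary) + re.findall(r"\u201c([^\u201d]+)\u201d", summary)
    return [q for q in raw if len(q.split()) >= 4]  # skip trivial fragments


def hallucination_rate(summary: str, source: str) -> float:
    """Fraction of attributed quotes that do not appear verbatim in the source."""
    src = normalise(source)
    quotes = extract_quotes(summary)
    if not quotes:
        return 0.0
    fabricated = [q for q in quotes if normalise(q) not in src]
    for q in fabricated:
        print(f'NOT IN SOURCE: "{q}"')
    return len(fabricated) / len(quotes)
```

Exact substring matching will flag legitimate paraphrases and lightly edited quotes as fabrications, so treat the resulting rate as an upper bound; a fuzzy comparison (for example, difflib.SequenceMatcher) can loosen it once the strict version is running.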
Paste a news article into ChatGPT with this prompt: 'Summarise this article and include 3 direct quotes from named individuals.' Then check every quote against the original text. Count how many are fabricated or distorted. You'll have a visible hallucination rate in under 3 minutes.
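The same three-minute test can be scripted so it runs on every model or prompt change. Here is a sketch using the OpenAI Python SDK, reusing the hallucination_rate helper from the sketch above; the model name is an arbitrary choice, and the client assumes an OPENAI_API_KEY in the environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def run_quote_test(article: str, model: str = "gpt-4o-mini") -> float:
    """Ask the model for a summary with quotes, then score it against the article."""
    prompt = (
        "Summarise this article and include 3 direct quotes "
        "from named individuals.\n\n" + article
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    summary = resp.choices[0].message.content or ""
    return hallucination_rate(summary, article)
```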