We forget the GIGO principle at our peril, even as it wreaks havoc on the world. Garbage In, Garbage Out is a principle in computing that states that flawed input data will lead to flawed output. In logic, it's the difference between a valid argument and a sound one. An argument can be structurally valid, but if its premises are false, the argument is unsound and its conclusion can't be trusted.
If that sounds confusing, simply consider this: All pigs can fly. Porky is a pig. Therefore Porky can fly. The argument is valid, in that the conclusion flows correctly from the premises. But we all know that pigs cannot fly: the first premise is flawed, leading to a flawed conclusion. GIGO.
Similarly, a computer model or algorithm can be as sophisticated as its makers like to boast, but if it's fed bad, biased or incomplete data, the resulting output will be bad. We see the consequences of this problem all the time: in climate models, for instance, and in economic modelling. Crime statistics that ignore critical information (ethnicity, say) also lead to badly skewed conclusions.
AI is only making it worse. Internet searches are rapidly becoming dominated by AI. Wikipedia is already reporting that AI bots outnumber human visitors on its site. AIs, it must be remembered, that were often trained on Wikipedia data in the first place. What may have been minor errors or falsehoods (remember the prankster who listed himself on Wikipedia as the inventor of the Golden Gaytime ice-cream) are rapidly amplified by repetition.
Even where the input data isn't garbage, AI's ability to interpret it correctly is notoriously bad. Recently, as part of research for a writing project, I asked ChatGPT to compile a map of land use in 13th-century Europe. Imagine my surprise to find Augsburg somewhere near Florence, Bruges in the English Midlands, and London slightly west of Dublin.
When it comes to reporting the news, AI is even worse than it is at geography.
Leading AI assistants misrepresent news content in nearly half their responses, according to new research published on Wednesday by the European Broadcasting Union (EBU) and the BBC.
When it’s so bad at reporting that even the BBC can see it, it’s bad.
The international research studied 3,000 responses to questions about the news from leading artificial intelligence assistants – software applications that use AI to understand natural language commands to complete tasks for a user.
It assessed AI assistants, including ChatGPT, Copilot, Gemini and Perplexity, in 14 languages for accuracy, sourcing and the ability to distinguish opinion from fact.
Overall, 45 per cent of the AI responses studied contained at least one significant issue, with 81 per cent having some form of problem, the research showed.
So, it’s doing better than the legacy media? Is this just professional jealousy we’re seeing?
Some seven per cent of all online news consumers and 15 per cent of those aged under 25 use AI assistants to get their news, according to the Reuters Institute’s Digital News Report 2025.
Yes, I think it just might be professional jealousy. Feeling threatened, are we, legacy media?
Other leading technology firms recognise the problem of so-called hallucinations, where AI models generate incorrect or misleading information.
YouTuber Dave Cullen ran into this phenomenon when he asked AI to write a précis of a particular novel. It started out OK, but soon began to hallucinate, inventing characters and plot twists that weren't there.
At least AI developers are willing to admit that their chatbots are hallucinating. The legacy media just call that ‘narrative affirmation’.
A third of AI assistants’ responses showed serious sourcing errors such as missing, misleading or incorrect attribution, according to the study.
Some 72 per cent of responses by Gemini, Google’s AI assistant, had significant sourcing issues, compared to below 25 per cent for all other assistants, it said.
Issues of accuracy were found in 20 per cent of responses from all AI assistants studied, including outdated information, it said.
Examples cited by the study included Gemini incorrectly stating changes to a law on disposable vapes and ChatGPT reporting Pope Francis as the current Pope several months after his death.
But even AI couldn’t come up with something as hilariously un-self-aware as this:
“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation,” EBU Media Director Jean Philip De Tender said in a statement.
Is someone going to tell him?