AI Chatbots Found to Frequently Misreport News, Study Finds

A new study has revealed that popular AI tools such as ChatGPT, Gemini, Copilot, and Perplexity remain far from reliable when it comes to accurately reporting the news. The research, conducted by the European Broadcasting Union (EBU) and the BBC, found that nearly half of the AI-generated responses contained significant issues, raising serious concerns about the current state of AI-generated information.

The study analyzed over 2,700 responses from these AI assistants, collected between late May and early June 2025. Journalists from 22 public media organizations across 18 countries participated, posing the same set of questions to each AI tool in 14 different languages. The goal was to test the accuracy, sourcing, and contextual reliability of the information provided.

The results were troubling: 45% of the AI responses had at least one major flaw. The most common issue was sourcing errors, found in 31% of responses. These included citing incorrect or unverifiable sources, or presenting information that was not actually supported by the source cited.

The second most frequent problem was factual inaccuracy, found in 20% of responses. These were often basic errors that should have been easy to avoid with accurate data. One glaring example: ChatGPT reportedly referred to Pope Francis as still alive and serving as Pope, months after his death.

Another 14% of responses lacked adequate context, leading to misleading or incomplete answers. Omissions like these can make it difficult for users to fully understand a topic, especially in sensitive or complex news situations.

Among the AI models tested, Google's Gemini performed the worst, with 76% of its responses containing serious sourcing problems. All platforms made basic mistakes, however, pointing to a problem that is widespread rather than confined to a single tool. In one case, Perplexity falsely claimed that surrogacy is illegal in Czechia, a clear factual error that better oversight could easily have caught.

Despite the seriousness of the findings, none of the companies behind the AI tools, including OpenAI, Google, Microsoft, and Perplexity, provided an immediate response to the report.

In the foreword to the study, Jean Philip De Tender, Deputy Director General of the EBU, and Pete Archer, Head of AI at the BBC, urged technology companies to step up their efforts. "They have not prioritised this issue and must do so now," the two wrote. They also emphasized the importance of transparency, calling on AI companies to publish regular accuracy reports by language and region so users can better understand the limitations and risks of the tools they rely on.

As AI becomes more deeply embedded in everyday information delivery, particularly around news, the findings underscore the need for improved accuracy, better sourcing practices, and stronger oversight. For now, the message is clear: don't take AI-generated news at face value. Always double-check the facts.

TECHNOLOGY

Shekh Md Hamid

10/22/2025