
AI Chatbots Providing Misinformation Ahead of EU Elections

A recent study has revealed concerning inaccuracies in information provided by some of Europe’s most popular AI chatbots regarding the upcoming European Elections. Conducted by Democracy Reporting International, a non-profit organization based in Berlin, the study scrutinized the responses of four widely used AI chatbots: Google’s Gemini, OpenAI’s ChatGPT 3.5 and 4.0, and Microsoft’s Copilot.

Between March 11 and 14, researchers posed 400 election-related questions in 10 different languages to these chatbots, aiming to assess their ability to provide reliable and accurate information. Despite being tuned to avoid partisan responses, none of the four chatbots were able to consistently deliver trustworthy answers.

“We were not that surprised,” said Michael Meyer-Resende, the executive director of Democracy Reporting International. “When you ask [AI chatbots] something for which they didn’t have a lot of material and for which you don’t find a lot of information on the Internet, they just invent something.”


The study identified various shortcomings, including the dissemination of false information and the provision of broken or irrelevant links in responses. Additionally, chatbots often struggled with questions related to voter registration and voting processes, demonstrating a tendency to “hallucinate” or manufacture information when unsure.

Google’s Gemini, in particular, exhibited the poorest performance, providing the most misleading or false information and the highest number of refusals to answer queries. In response, Google has implemented further restrictions on its large language model (LLM) and encouraged users to rely on Google Search for accurate election information.

“We think it’s better for them to refuse to answer than to give false answers,” Resende commented, emphasizing the importance of ensuring the reliability of information provided by AI chatbots.

Looking ahead, Democracy Reporting International plans to retest Google’s Gemini to evaluate whether the new restrictions address the issues identified in the study.

In light of these findings, there are calls for increased transparency and accountability from the companies behind the chatbots, as well as from regulatory bodies such as the European Commission. The EU’s Digital Services Act (DSA), which the Commission enforces, requires large online platforms to carry out risk assessments aimed at curbing the spread of misinformation, particularly during electoral processes. However, concerns remain regarding the adequacy of these assessments and the transparency of their findings.

Staff Report
