How UF is combating AI misinformation in elections
As the 2024 election season ramps up, voters are looking for reliable information about candidates to help them decide who to support on election day.
Unfortunately, that reliability is threatened by a wave of disinformation that is bolstered by increasingly sophisticated artificial intelligence tools.
“In 2024, we should be most concerned about the role of generative AI, which is an emerging technology that can create video, text, or images of people saying things they didn’t say and doing things they didn’t do,” said Janet Coats, managing director of the University of Florida’s Consortium on Trust in Media and Technology and an expert in addressing AI misinformation in the news media.
These generative AI videos, also known as deepfakes, could convincingly show a politician saying offensive things or endorsing policies their constituents dislike. Conversely, the very existence of deepfakes gives public officials cover to deny genuine misconduct: they can always claim that an authentic video showing their misdeeds was a deepfake.
UF is fighting back against this threat. “Our researchers are looking at ways to use AI tools to help identify what generative AI is doing,” Coats said. “So, using the tools themselves to detect fake news, false information, and disinformation.”
The College of Journalism and Communications, which houses the Consortium on Trust in Media and Technology, is also teaching media literacy to help students protect themselves against AI misinformation and deepfakes. By teaching the principles of lateral reading, which encourages news consumers to consult multiple sources to verify questionable information, the college is training a new generation of journalists and non-journalists alike to distinguish trustworthy information from fake news.
“You have to tell yourself that this is something that really matters to me and I’m going to check it out,” Coats said.