AI is transforming gambling, but what are the ethical risks? A UF researcher explores
As players and spectators prepare for the 2025 World Series of Poker, which begins May 27 in Las Vegas, a cultural conversation around AI and ethics in gambling is brewing.
With the gambling industry expected to exceed $876 billion worldwide by 2026, there is growing concern that unregulated AI systems could exploit vulnerable individuals and profit from them.
UF researcher Nasim Binesh, Ph.D., M.B.A., an assistant professor in the UF College of Health & Human Performance’s Department of Tourism, Hospitality & Event Management, is exploring this concern. She recently published a study in the International Journal of Hospitality & Tourism Administration identifying the risks and ethical challenges of using AI in gambling.
Few regulations govern AI use in gambling, in the U.S. or abroad. The U.S. Blueprint for an AI Bill of Rights and the European Union AI Act seek to govern AI use, but neither is industry-specific. More recently, in March, the International Gaming Standards Association’s Ethical AI Standards Committee, which is responsible for ensuring that AI is used fairly in gaming, announced that it would begin developing a best-practices framework to help gambling regulators better understand AI’s role in the industry.
“AI systems, which are designed to optimize profit, could identify and target players susceptible to addiction, pushing them deeper into harmful behaviors,” said Binesh, who co-authored the study with Kasra Ghaharian, Ph.D., the director of research for the University of Nevada, Las Vegas International Gaming Institute.
The gambling industry, which has expanded from land-based casinos and lotteries to online gambling and sports betting, has the potential to boost local economies and bolster tourism in places like Las Vegas and Macau, China. But the industry is also prone to misconduct and poses risks to people’s finances and mental health.
“The potential for AI to exacerbate gambling harms and exploit vulnerable individuals is a stark reality that demands immediate and informed action,” Binesh said. “The study’s call for the clear use of AI guidelines is not just a recommendation; it is imperative for the future of ethical gambling.”
The study suggests several such guidelines: employing independent auditors to assess AI systems for ethical compliance and flag potential problems, training AI developers on best practices for working with at-risk populations, ensuring that game decisions made by AI are easily understood by players, and informing players about how their data is collected and used.
Additionally, the study found that AI has the potential to protect players by detecting early signs of addiction and fraud, and even by identifying cheating.
“AI’s potential to enhance consumer protection by identifying at-risk behaviors and intervening appropriately is well acknowledged,” Binesh said. “Yet, without regulation, these technologies could be underused or misapplied, missing critical interventions and failing to mitigate gambling harms.”
Ongoing research and regulation are necessary as AI evolves and gambling gains traction globally. Binesh plans to expand her gambling research into using consumer data and AI to detect markers of harm, for example by analyzing social media use to uncover early signs of gambling risk.
“Ironically, the lack of AI regulation could stifle the very innovation it seeks to foster,” Binesh said. “Ethical controversies and backlash against these unregulated practices might lead to more restrictive policies and hinder AI advancement. These types of unregulated environments can also deter responsible innovators who are crucial for sustainable and ethical industry growth.”