A new study from the University of Zurich shows that the public is more concerned about immediate AI risks, such as social bias and misinformation, than about distant apocalyptic scenarios.
While futuristic scenarios in which artificial intelligence (AI) endangers humanity captivate the imagination, the immediate risks posed by AI technology are currently a greater concern for many. That is the conclusion of recent research from the University of Zurich, published in the Proceedings of the National Academy of Sciences.
In a series of three large online experiments involving more than 10,000 participants from the United States and the UK, the researchers found a consistent pattern: people are significantly more apprehensive about the present threats of AI, such as the reinforcement of social biases and the spread of misinformation, than about hypothetical future risks of AI dominating the human race.
Examining Public Perception
Participants in the study were shown different types of headlines: some depicting AI as a catastrophic future threat, others discussing its current dangers, and still others highlighting its potential benefits.
The researchers aimed to discern whether predictions of a dystopian AI future would distract from addressing its current problems.
“Our findings show that the respondents are much more worried about present risks posed by AI than about potential future catastrophes,” Fabrizio Gilardi, a professor in the Department of Political Science at the University of Zurich, said in a news release.
Distinguishing Immediate From Long-term Risks
The study’s results underscore people’s ability to distinguish between the tangible problems AI poses today and its theoretical long-term risks.
This insight is critical, as it suggests that discussions about future existential threats do not diminish public attentiveness to today’s pressing issues.
In fact, the research highlights how current AI-related challenges, such as systematic bias in algorithms and potential job losses, are issues of significant public concern.
Broader Implications for Public Discourse
This research is pivotal, addressing a critical gap in our understanding of public perceptions of AI. It challenges the fear that focusing on future, catastrophic scenarios might overshadow urgent, ongoing issues.
“Our study shows that the discussion about long-term risks is not automatically occurring at the expense of alertness to present problems,” added co-author Emma Hoes, a postdoctoral research fellow in the Department of Political Science at the University of Zurich.
The researchers call for a balanced discourse around AI that considers both immediate and future risks.
“The public discourse shouldn’t be ‘either-or.’ A concurrent understanding and appreciation of both the immediate and potential future challenges is needed,” Gilardi added.
Source: University of Zurich

