{"id":23020,"date":"2025-04-23T16:27:12","date_gmt":"2025-04-23T16:27:12","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=23020"},"modified":"2025-04-23T16:28:13","modified_gmt":"2025-04-23T16:28:13","slug":"current-ai-risks-more-alarming-than-future-threats-new-study","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/current-ai-risks-more-alarming-than-future-threats-new-study\/","title":{"rendered":"Current AI Risks More Alarming Than Future Threats"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">A new study from the University of Zurich shows that public concern is higher for immediate AI risks, like social prejudices and misinformation, than distant apocalyptic scenarios.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n\n\n<div class=\"wp-block-uagb-social-share uagb-social-share__outer-wrap uagb-social-share__layout-horizontal uagb-block-ee584a31\">\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-ec619ce7\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/www.facebook.com\/sharer.php?u=\" tabindex=\"0\" role=\"button\" aria-label=\"facebook\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg 
xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 512 512\"><path d=\"M504 256C504 119 393 8 256 8S8 119 8 256c0 123.8 90.69 226.4 209.3 245V327.7h-63V256h63v-54.64c0-62.15 37-96.48 93.67-96.48 27.14 0 55.52 4.84 55.52 4.84v61h-31.28c-30.8 0-40.41 19.12-40.41 38.73V256h68.78l-11 71.69h-57.78V501C413.3 482.4 504 379.8 504 256z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n\n\n\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-32d99934\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/twitter.com\/share?url=\" tabindex=\"0\" role=\"button\" aria-label=\"twitter\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 512 512\"><path d=\"M389.2 48h70.6L305.6 224.2 487 464H345L233.7 318.6 106.5 464H35.8L200.7 275.5 26.8 48H172.4L272.9 180.9 389.2 48zM364.4 421.8h39.1L151.1 88h-42L364.4 421.8z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n\n\n\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-1d136f14\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/www.linkedin.com\/shareArticle?url=\" tabindex=\"0\" role=\"button\" aria-label=\"linkedin\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 448 512\"><path d=\"M416 32H31.9C14.3 32 0 46.5 0 64.3v383.4C0 465.5 14.3 480 31.9 480H416c17.6 0 32-14.5 32-32.3V64.3c0-17.8-14.4-32.3-32-32.3zM135.4 416H69V202.2h66.5V416zm-33.2-243c-21.3 0-38.5-17.3-38.5-38.5S80.9 96 102.2 96c21.2 0 38.5 17.3 38.5 38.5 0 21.3-17.2 38.5-38.5 38.5zm282.1 243h-66.4V312c0-24.8-.5-56.7-34.5-56.7-34.6 0-39.9 27-39.9 54.9V416h-66.4V202.2h63.7v29.2h.9c8.9-16.8 30.6-34.5 62.9-34.5 67.2 0 79.7 44.3 79.7 101.9V416z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>While futuristic scenarios where artificial intelligence (AI) endangers humanity 
captivate imaginations, the immediate risks posed by AI technology are currently a greater concern for many. This is the conclusion of recent research from the University of Zurich, <a href=\"https:\/\/www.pnas.org\/doi\/10.1073\/pnas.2419055122\" target=\"_blank\" rel=\"noopener\" title=\"\">published<\/a> in the Proceedings of the National Academy of Sciences.<\/p>\n\n\n\n<p>In a series of three extensive online experiments involving over 10,000 participants from the United States and the UK, the research revealed a prevalent trend: people are significantly more apprehensive about the present threats of AI, such as amplifying social biases and spreading misinformation, than about hypothetical future risks of AI dominating the human race.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Examining Public Perception<\/h2>\n\n\n\n<p>The participants in the study were exposed to varying types of headlines \u2014 some depicting AI as a catastrophic future threat, others discussing its current dangers, and yet others highlighting its potential benefits.<\/p>\n\n\n\n<p>The researchers aimed to discern whether predictions of a dystopian AI future would distract from addressing its current problems.<\/p>\n\n\n\n<p>\u201cOur findings show that the respondents are much more worried about present risks posed by AI than about potential future catastrophes,\u201d Fabrizio Gilardi, a professor in the Department of Political Science at the University of Zurich, said in a news release.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Distinguishing Immediate From Long-term Risks<\/h2>\n\n\n\n<p>The study\u2019s results underscore the ability of people to distinguish between the tangible problems posed by AI today and the theoretical long-term risks. 
<\/p>\n\n\n\n<p>This insight is critical, as it suggests that discussions about future existential threats do not diminish public attentiveness to today\u2019s pressing issues.<\/p>\n\n\n\n<p>In fact, the research highlights how current AI-related challenges, such as systematic bias in algorithms and potential job losses, are issues of significant public concern.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Broader Implications for Public Discourse<\/h2>\n\n\n\n<p>This research is pivotal, addressing a critical gap in our understanding of public perceptions of AI. It challenges the fear that focusing on future, catastrophic scenarios might overshadow urgent, ongoing issues.<\/p>\n\n\n\n<p>\u201cOur study shows that the discussion about long-term risks is not automatically occurring at the expense of alertness to present problems,\u201d added co-author Emma Hoes, a postdoctoral research fellow in the Department of Political Science at the University of Zurich.<\/p>\n\n\n\n<p>The researchers call for a balanced discourse around AI that considers both immediate and future risks.<\/p>\n\n\n\n<p>\u201cThe public discourse shouldn\u2019t be \u2018either-or.\u2019 A concurrent understanding and appreciation of both the immediate and potential future challenges is needed,\u201d Gilardi added.<\/p>\n\n\n\n<div style=\"height:12px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source:<\/strong> <a href=\"https:\/\/www.news.uzh.ch\/en\/articles\/media\/2025\/fear-of-ki-risks.html\" target=\"_blank\" rel=\"noopener\" title=\"\">University of Zurich<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>While futuristic scenarios where artificial intelligence (AI) endangers humanity captivate imaginations, the immediate risks posed by AI technology are currently a greater concern for many. This is the conclusion of recent research from the University of Zurich, published in the Proceedings of the National Academy of Sciences. 
In a series of three extensive online [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[223],"class_list":["post-23020","post","type-post","status-publish","format-standard","hentry","category-ai","tag-university-of-zurich"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"While futuristic scenarios where artificial intelligence (AI) endangers humanity captivate imaginations, the immediate risks posed by AI technology are currently a greater concern for many. This is the conclusion of recent research from the University of Zurich, published in the Proceedings of the National Academy of Sciences. 
In a series of three extensive online&hellip;","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/23020","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=23020"}],"version-history":[{"count":11,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/23020\/revisions"}],"predecessor-version":[{"id":23085,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/23020\/revisions\/23085"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=23020"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=23020"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=23020"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}