{"id":28224,"date":"2025-08-11T15:10:17","date_gmt":"2025-08-11T15:10:17","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=28224"},"modified":"2025-08-11T15:10:19","modified_gmt":"2025-08-11T15:10:19","slug":"ai-chatbots-prone-to-spreading-medical-misinformation-new-study","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/ai-chatbots-prone-to-spreading-medical-misinformation-new-study\/","title":{"rendered":"AI Chatbots Prone to Spreading Medical Misinformation: New Study"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">A new study by Mount Sinai researchers finds that AI chatbots are prone to spreading medical misinformation, but simple prompts can reduce errors significantly, emphasizing the need for stronger safeguards in health care.<br><\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n\n\n<div class=\"wp-block-uagb-social-share uagb-social-share__outer-wrap uagb-social-share__layout-horizontal uagb-block-ee584a31\">\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-ec619ce7\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/www.facebook.com\/sharer.php?u=\" tabindex=\"0\" role=\"button\" aria-label=\"facebook\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 512 512\"><path d=\"M504 256C504 119 393 8 256 8S8 119 8 256c0 123.8 90.69 226.4 209.3 245V327.7h-63V256h63v-54.64c0-62.15 37-96.48 93.67-96.48 27.14 0 55.52 4.84 55.52 4.84v61h-31.28c-30.8 0-40.41 19.12-40.41 38.73V256h68.78l-11 71.69h-57.78V501C413.3 482.4 504 379.8 504 256z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n\n\n\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-32d99934\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/twitter.com\/share?url=\" tabindex=\"0\" role=\"button\" aria-label=\"twitter\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 512 512\"><path d=\"M389.2 48h70.6L305.6 224.2 487 464H345L233.7 318.6 106.5 464H35.8L200.7 275.5 26.8 48H172.4L272.9 180.9 389.2 48zM364.4 421.8h39.1L151.1 88h-42L364.4 421.8z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n\n\n\n<div class=\"wp-block-uagb-social-share-child uagb-ss-repeater uagb-ss__wrapper uagb-block-1d136f14\"><span class=\"uagb-ss__link\" data-href=\"https:\/\/www.linkedin.com\/shareArticle?url=\" tabindex=\"0\" role=\"button\" aria-label=\"linkedin\"><span class=\"uagb-ss__source-wrap\"><span class=\"uagb-ss__source-icon\"><svg xmlns=\"https:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 448 512\"><path d=\"M416 32H31.9C14.3 32 0 46.5 0 64.3v383.4C0 465.5 14.3 480 31.9 480H416c17.6 0 32-14.5 32-32.3V64.3c0-17.8-14.4-32.3-32-32.3zM135.4 416H69V202.2h66.5V416zm-33.2-243c-21.3 0-38.5-17.3-38.5-38.5S80.9 96 102.2 96c21.2 0 38.5 17.3 38.5 38.5 0 21.3-17.2 38.5-38.5 38.5zm282.1 243h-66.4V312c0-24.8-.5-56.7-34.5-56.7-34.6 0-39.9 27-39.9 54.9V416h-66.4V202.2h63.7v29.2h.9c8.9-16.8 30.6-34.5 62.9-34.5 67.2 0 79.7 44.3 79.7 101.9V416z\"><\/path><\/svg><\/span><\/span><\/span><\/div>\n<\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>Researchers at the Icahn School of Medicine at Mount Sinai have discovered that AI chatbots, commonly used in health care, are highly susceptible to spreading false medical information. This revelation underscores the urgent need for stronger safeguards to ensure these tools deliver accurate advice.&nbsp;<\/p>\n\n\n\n<p>Their  findings, <a href=\"https:\/\/www.nature.com\/articles\/s43856-025-01021-3.epdf\" target=\"_blank\" rel=\"noopener\" title=\"\">published<\/a> in the journal Communications Medicine, suggest that implementing a simple built-in warning prompt can significantly mitigate this risk.&nbsp;<\/p>\n\n\n\n<p>Lead author Mahmud Omar, an independent consultant with the research team,&nbsp;explains the vulnerability observed.<\/p>\n\n\n\n<p>&#8220;What we saw across the board is that AI chatbots can be easily misled by false medical details, whether those errors are intentional or accidental,&#8221; he said in a news release. &#8220;They not only repeated the misinformation but often expanded on it, offering confident explanations for non-existent conditions. The encouraging part is that a simple, one-line warning added to the prompt cut those hallucinations dramatically, showing that small safeguards can make a big difference.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Experiment Details<\/h2>\n\n\n\n<p>The research team created fictional patient scenarios incorporating fabricated medical terms, such as made-up diseases or symptoms. <\/p>\n\n\n\n<p>These scenarios were submitted to leading AI models. Initially, no additional guidance was provided, resulting in chatbots confidently producing erroneous information. However, when a one-line caution was added, reminding the AI that the information might be inaccurate, the rate of errors dropped significantly.<\/p>\n\n\n\n<p>\u201cOur goal was to see whether a chatbot would run with false information if it was slipped into a medical question, and the answer is yes,\u201d added co-corresponding senior author Eyal Klang, chief of generative AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. <\/p>\n\n\n\n<p>&#8220;Even a single made-up term could trigger a detailed, decisive response based entirely on fiction. But we also found that the simple, well-timed safety reminder built into the prompt made an important difference, cutting those errors nearly in half,&#8221; he added. &#8220;That tells us these tools can be made safer, but only if we take prompt design and built-in safeguards seriously.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Future Implications<\/h2>\n\n\n\n<p>The research team plans to apply this approach to real, anonymized patient records and test advanced safety prompts and retrieval tools. They believe their &#8220;fake-term&#8221; method could be a powerful tool for hospitals, tech developers and regulators to stress-test AI systems before clinical applications.<\/p>\n\n\n\n<p>\u201cOur study shines a light on a blind spot in how current AI tools handle misinformation, especially in health care,\u201d co-corresponding senior author&nbsp;Girish N. Nadkarni, the chair of the\u202fWindreich Department of Artificial Intelligence and Human Health, said in the news release. <\/p>\n\n\n\n<p>\u201cIt underscores a critical vulnerability in how today\u2019s AI systems deal with misinformation in health settings. A single misleading phrase can prompt a confident yet entirely wrong answer. The solution isn\u2019t to abandon AI in medicine, but to engineer tools that can spot dubious input, respond with caution, and ensure human oversight remains central. We\u2019re not there yet, but with deliberate safety measures, it\u2019s an achievable goal,\u201d added Nadkarni, who is also the director of the\u202fHasso Plattner Institute for Digital Health, and Irene and Dr. Arthur M. Fishberg&nbsp;Professor&nbsp;of Medicine at the Icahn School of Medicine at Mount Sinai and the chief AI officer for the Mount Sinai Health System.<\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source:<\/strong> <a href=\"https:\/\/www.mountsinai.org\/about\/newsroom\/2025\/ai-chatbots-can-run-with-medical-misinformation-study-finds-highlighting-the-need-for-stronger-safeguards\" target=\"_blank\" rel=\"noopener\" title=\"\">Mount Sinai School of Medicine<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers at the Icahn School of Medicine at Mount Sinai have discovered that AI chatbots, commonly used in health care, are highly susceptible to spreading false medical information. This revelation underscores the urgent need for stronger safeguards to ensure these tools deliver accurate advice.&nbsp; Their findings, published in the journal Communications Medicine, suggest that implementing [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[172],"class_list":["post-28224","post","type-post","status-publish","format-standard","hentry","category-ai","tag-icahn-school-of-medicine-at-mount-sinai"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"Researchers at the Icahn School of Medicine at Mount Sinai have discovered that AI chatbots, commonly used in health care, are highly susceptible to spreading false medical information. This revelation underscores the urgent need for stronger safeguards to ensure these tools deliver accurate advice.&nbsp; Their findings, published in the journal Communications Medicine, suggest that implementing&hellip;","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/28224","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=28224"}],"version-history":[{"count":11,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/28224\/revisions"}],"predecessor-version":[{"id":28235,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/28224\/revisions\/28235"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=28224"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=28224"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=28224"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}