{"id":34172,"date":"2026-02-10T20:16:23","date_gmt":"2026-02-10T20:16:23","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=34172"},"modified":"2026-02-10T20:16:26","modified_gmt":"2026-02-10T20:16:26","slug":"chatbot-bias-can-sway-what-you-buy-uc-san-diego-study-finds","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/chatbot-bias-can-sway-what-you-buy-uc-san-diego-study-finds\/","title":{"rendered":"Chatbot Bias Can Sway What You Buy, UC San Diego Study Finds"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">Chatbots that summarize product reviews can quietly shift how people feel about what they read \u2014 and what they buy. A new UC San Diego study shows just how powerful that influence can be, and why it matters far beyond shopping.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div><\/div>\n<\/div><\/div>\n\n\n\n<p>A short, 
friendly chatbot summary of a product review might feel harmless. But new research from the University of California San Diego suggests it can significantly change what people decide to do next.<\/p>\n\n\n\n<p>In an experiment with online product reviews, customers were 32 percentage points more likely to say they would buy a product after reading a chatbot-generated summary than after reading the original human-written review. The study found that large language models, or LLMs, often add a subtle but powerful positive spin that nudges people toward a purchase.<\/p>\n\n\n\n<p>The work is among the first to show that cognitive biases introduced by LLMs can have real-world consequences for users\u2019 decisions, and one of the first to quantify that impact, according to the research team.<\/p>\n\n\n\n<p>To see how this plays out in practice, the researchers focused on a common use case: AI tools that summarize long user reviews for products like headsets, headlamps and radios. They examined how often LLMs changed the overall sentiment of reviews and how those changes affected human readers.<\/p>\n\n\n\n<p>They found that LLM-generated summaries shifted the sentiment of the original reviews in 26.5% of cases. In other words, more than a quarter of the time, the summary did not just shorten the review \u2014 it changed its tone, for example from more negative or mixed to more positive.<\/p>\n\n\n\n<p>The team then recruited 70 participants and randomly assigned them to read either the original reviews or the AI-generated summaries. When people read the chatbot summaries, they said they would buy the products in 84% of cases. When they read the original reviews, that number dropped to 52%.<\/p>\n\n\n\n<p>The size of the effect surprised the team, according to first author Abeer Alessa, who conducted the work as a master\u2019s student in computer science at UC San Diego. 
<\/p>\n\n\n\n<p>\u201cWe did not expect how big the impact of the summaries would be,\u201d Alessa said in a news release. \u201cOur tests were set in a low-stakes scenario. But in a high-stakes setting, the impact could be much more extreme.\u201d<\/p>\n\n\n\n<p>The study helps explain how this bias creeps in. LLMs tend to lean heavily on the beginning of the text they summarize and may gloss over important details or caveats that appear later. That can flatten nuance and make reviews sound more uniformly positive or negative than they really are.<\/p>\n\n\n\n<p>The researchers also probed another well-known weakness of LLMs: hallucinations, or confident-sounding statements that are not supported by the underlying data. In tests involving questions about news items that could be easily fact-checked, the models hallucinated 60% of the time when the answers were not part of the original training data used in the study.<\/p>\n\n\n\n<p>The team described this as a serious limitation for any setting where accuracy matters. <\/p>\n\n\n\n<p>\u201cThis consistently low accuracy highlights a critical limitation: the persistent inability to reliably differentiate fact from fabrication,\u201d they wrote.<\/p>\n\n\n\n<p>That pattern is especially concerning as chatbots are increasingly used to summarize news, explain policies or answer questions about current events. In those contexts, a biased or fabricated detail is not just a shopping nudge \u2014 it could shape opinions about politics, health, education or public policy.<\/p>\n\n\n\n<p>To understand how widespread these issues are, the researchers tested a range of models: three small open-source systems (Phi-3-mini-4k-Instruct, Llama-3.2-3B-Instruct and Qwen3-4B-Instruct), a medium-sized model (Llama-3-8B-Instruct), a large open-source model (Gemma-3-27B-IT) and a closed-source model (GPT-3.5-turbo). 
Bias and hallucinations showed up across this spectrum, though not in identical ways.<\/p>\n\n\n\n<p>The team then tried to fix the problem. They evaluated 18 different mitigation methods designed to reduce bias and hallucinations or to keep summaries closer to the original content. Some approaches helped in certain situations or with specific models, but none worked reliably across the board. In some cases, a method that reduced one problem made the model less reliable in another way.<\/p>\n\n\n\n<p>\u201cThere is a difference between fixing bias and hallucinations at large and fixing these issues in specific scenarios and applications,\u201d added senior author Julian McAuley, a professor of computer science at the UC San Diego Jacobs School of Engineering.<\/p>\n\n\n\n<p>That distinction points toward a likely future in AI: rather than expecting one-size-fits-all solutions, developers and policymakers may need targeted safeguards for particular uses, such as e-commerce, education, media or government services.<\/p>\n\n\n\n<p>The UC San Diego team frames its work as an early but important step in that direction. <\/p>\n\n\n\n<p>\u201cOur paper represents a step toward careful analysis and mitigation of content alteration induced by LLMs to humans, and provides insight into its effects, aiming to reduce the risk of systemic bias in decision-making across media, education and public policy,\u201d the researchers wrote.<\/p>\n\n\n\n<p>More broadly, the findings highlight a growing responsibility for companies, institutions and everyday users. As LLMs become embedded in search engines, shopping platforms, learning tools and news apps, their invisible framing choices can quietly shape what people believe, buy and support.<\/p>\n\n\n\n<p>For students, educators and consumers, the takeaway is not to avoid AI entirely but to approach its outputs with healthy skepticism. 
A polished summary may be faster to read, but it is not always a faithful reflection of the original information \u2014 and, as this study shows, it can change your mind more than you realize.<\/p>\n\n\n\n<p>The <a href=\"https:\/\/aclanthology.org\/2025.ijcnlp-long.155.pdf\" target=\"_blank\" rel=\"noopener\" title=\"\">research<\/a>, titled \u201cQuantifying Cognitive Bias Induction in LLM-Generated Content,\u201d was presented at the International Joint Conference on Natural Language Processing &amp; Asia-Pacific Chapter of the Association for Computational Linguistics in December 2025.<\/p>\n\n\n\n<div style=\"height:12px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source: <\/strong><a href=\"https:\/\/today.ucsd.edu\/story\/how-much-does-chatbot-bias-influence-users-a-lot-it-turns-out\" target=\"_blank\" rel=\"noopener\" title=\"\">University of California San Diego<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Chatbots that summarize product reviews can quietly shift how people feel about what they read \u2014 and what they buy. 
A new UC San Diego study shows just how powerful that influence can be, and why it matters far beyond shopping.<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[206],"class_list":["post-34172","post","type-post","status-publish","format-standard","hentry","category-ai","tag-uc-san-diego"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"Chatbots that summarize product reviews can quietly shift how people feel about what they read \u2014 and what they buy. 
A new UC San Diego study shows just how powerful that influence can be, and why it matters far beyond shopping.","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/34172","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=34172"}],"version-history":[{"count":7,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/34172\/revisions"}],"predecessor-version":[{"id":34297,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/34172\/revisions\/34297"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=34172"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=34172"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=34172"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}