{"id":35843,"date":"2026-04-07T17:48:00","date_gmt":"2026-04-07T17:48:00","guid":{"rendered":"https:\/\/www.tun.com\/home\/?p=35843"},"modified":"2026-04-09T22:01:10","modified_gmt":"2026-04-09T22:01:10","slug":"students-trust-ai-chatbots-less-once-they-know-its-ai-study-finds","status":"publish","type":"post","link":"https:\/\/www.tun.com\/home\/students-trust-ai-chatbots-less-once-they-know-its-ai-study-finds\/","title":{"rendered":"Students Trust AI Chatbots Less Once They Know It\u2019s AI, Study Finds"},"content":{"rendered":"\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-uagb-blockquote uagb-block-e7eb3fc3 uagb-blockquote__skin-border uagb-blockquote__stack-img-none\"><blockquote class=\"uagb-blockquote\"><div class=\"uagb-blockquote__content\">A new University of Cincinnati study suggests nursing students often prefer AI chatbot answers to a professor\u2019s \u2014 until they realize the response came from a machine. 
The findings highlight how trust, not just accuracy, may shape the future of AI in higher education.<\/div><footer><div class=\"uagb-blockquote__author-wrap uagb-blockquote__author-at-left\"><\/div><\/footer><\/blockquote><\/div>\n\n\n\n<div class=\"wp-block-group is-content-justification-space-between is-nowrap is-layout-flex wp-container-core-group-is-layout-0dfbf163 wp-block-group-is-layout-flex\"><div style=\"font-size:16px;\" class=\"has-text-align-left wp-block-post-author\"><div class=\"wp-block-post-author__content\"><p class=\"wp-block-post-author__name\">The University Network<\/p><\/div><\/div>\n<\/div>\n<\/div><\/div>\n\n\n\n<p>Many nursing students may already be learning with help from artificial intelligence \u2014 they just might not trust it once they know it is there.<\/p>\n\n\n\n<p>In a small pilot study at the University of Cincinnati College of Nursing, doctoral students rated answers from a custom AI chatbot as more helpful and satisfying than responses from a professor or graduate assistant. 
But when they were asked to guess which answer came from the chatbot, they tended to assume the weakest response was the AI\u2019s.<\/p>\n\n\n\n<p>The pattern points to a growing challenge for colleges: even when AI tools perform well, student skepticism and trust could determine whether they are actually used.<\/p>\n\n\n\n<p>The study, led by Joshua Lambert, an associate professor and biostatistician in the College of Nursing, and <a href=\"https:\/\/journals.healio.com\/doi\/10.3928\/01484834-20260216-01\" target=\"_blank\" rel=\"noopener\" title=\"\">published<\/a> in the Journal of Nursing Education, explored whether a tailored education chatbot could support Doctor of Nursing Practice (DNP) students as they worked through statistics questions tied to their capstone projects.<\/p>\n\n\n\n<p>Seven DNP students submitted their own statistics questions and received three separate answers to each one: one from Lambert, one from a graduate assistant and one from a large language model\u2013based chatbot. The responses were mixed together so students did not know which was which.<\/p>\n\n\n\n<p>\u201cStudents first gave us their questions and then we gave them three responses back in a blinded and randomized fashion so students were unaware which response came from either the professor, graduate assistant or chatbot,\u201d Lambert said in a news release. \u201cThe students ranked each response in terms of helpfulness, overall satisfaction and guessed which of the three responses came from the chatbot.\u201d<\/p>\n\n\n\n<p>Students rated each answer on a five-point scale for helpfulness, overall satisfaction and likelihood that they would use that kind of response in the future. Only after scoring the answers did they try to identify the chatbot.<\/p>\n\n\n\n<p>Across the board, the AI performed well. 
<\/p>\n\n\n\n<p>\u201cThe students rated the chatbot\u2019s response the highest in terms of overall satisfaction and helpfulness,\u201d Lambert added.<\/p>\n\n\n\n<p>But when students were asked to guess which answer was generated by AI, a different story emerged. They often labeled the lowest-rated response as the chatbot\u2019s, even when it was not.<\/p>\n\n\n\n<p>\u201cStudents preferred the large language model (LLM) chatbot\u2019s responses when blinded yet demonstrated a bias against it when the source was suspected,\u201d added Lambert. \u201cThis bias is likely rooted in a lack of trust, and trust may influence AI adoption by both students and professors.\u201d<\/p>\n\n\n\n<p>That tension \u2014 between performance and perception \u2014 is at the heart of current debates over AI in higher education. Many students already use tools like ChatGPT to clarify concepts, draft outlines or check their understanding. Faculty and administrators, meanwhile, are wrestling with how to harness AI for tutoring and advising while guarding against misinformation, plagiarism and overreliance on automated help.<\/p>\n\n\n\n<p>Lambert\u2019s team designed the study as a randomized, blinded, within-subjects comparison, then used surveys to capture how students felt about each answer. <\/p>\n\n\n\n<p>Co-authors on the project include Robyn Stamm, an associate professor of clinical nursing; Shannon White, an assistant professor in the DNP program; and Melanie Kroger-Jarvis, an associate dean for graduate clinical learning programs, all from UC\u2019s College of Nursing. Bailey Martin, a postdoctoral research fellow at the University of Colorado Anschutz Medical Campus, also contributed.<\/p>\n\n\n\n<p>Because only seven students took part, the researchers stress that the findings are preliminary. 
Pilot studies like this are meant to test ideas and methods, not to provide definitive answers.<\/p>\n\n\n\n<p>They note that larger, multi-site studies with more students and richer data \u2014 including interviews and detailed surveys \u2014 will be needed to fully understand how AI chatbots might fit into nursing education and academic advising. Future work could look at how different kinds of questions, levels of AI transparency or training on how to use AI safely affect student trust and learning.<\/p>\n\n\n\n<p>The project grew out of a practical classroom problem. Statistics can be intimidating, especially for students who have been away from math-heavy coursework for years. Lambert said he was looking for ways to lower the stakes for asking questions.<\/p>\n\n\n\n<p>Students, like many learners, sometimes hesitate to approach a professor or even a teaching assistant if they fear their question will sound basic or expose a gap in understanding. That reluctance can slow learning and deepen anxiety in technical courses.<\/p>\n\n\n\n<p>An AI chatbot, by contrast, is always available and does not judge. Students can ask the same question multiple times, request simpler explanations or explore related topics without worrying about what an instructor might think.<\/p>\n\n\n\n<p>Education researchers have long argued that lowering the barrier to asking questions is critical for deep learning, especially in challenging subjects like statistics, pharmacology or pathophysiology. AI tools, if designed and supervised carefully, could offer one more way to create that safe space.<\/p>\n\n\n\n<p>At the same time, the UC study suggests that simply dropping a chatbot into a course is not enough. 
Students\u2019 beliefs about AI \u2014 whether they see it as unreliable, biased, impersonal or threatening \u2014 may override their own positive experiences with the tool\u2019s answers.<\/p>\n\n\n\n<p>That makes transparency and education about AI itself an important part of any rollout. Instructors may need to explain how a chatbot was built, what it can and cannot do, and how its answers are checked. They may also need to model healthy use of AI, showing students how to treat it as a supplement to, not a replacement for, human teaching and critical thinking.<\/p>\n\n\n\n<p>For nursing education in particular, the stakes are high. Nurses must interpret complex data, communicate clearly with patients and colleagues, and make decisions where lives are on the line. Any AI support system must be both accurate and trusted \u2014 and students must be trained to question and verify automated advice.<\/p>\n\n\n\n<p>Lambert\u2019s pilot study does not settle those questions, but it offers an early glimpse of how students actually respond when AI is quietly integrated into their coursework. When they do not know an answer came from a chatbot, they may embrace it. Once they suspect the source, their trust can evaporate.<\/p>\n\n\n\n<p>Understanding and addressing that bias, the researchers argue, will be key to deciding whether AI becomes a powerful ally in the classroom or remains a tool students are reluctant to fully accept.<\/p>\n\n\n\n<div style=\"height:9px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Source: <\/strong><a href=\"https:\/\/www.uc.edu\/news\/articles\/2026\/04\/uc-nursing-students-chatbot-advising-study.html\" target=\"_blank\" rel=\"noopener\" title=\"\">University of Cincinnati<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A new University of Cincinnati study suggests nursing students often prefer AI chatbot answers to a professor\u2019s \u2014 until they realize the response came from a machine. 
The findings highlight how trust, not just accuracy, may shape the future of AI in higher education.<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"single-no-separators","format":"standard","meta":{"_acf_changed":false,"_uag_custom_page_level_css":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[8],"tags":[280,465],"class_list":["post-35843","post","type-post","status-publish","format-standard","hentry","category-ai","tag-university-of-cincinnati","tag-university-of-colorado-anschutz-school-of-medicine"],"acf":[],"aioseo_notices":[],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false},"uagb_author_info":{"display_name":"The University Network","author_link":"https:\/\/www.tun.com\/home\/author\/funky_junkie\/"},"uagb_comment_info":0,"uagb_excerpt":"A new University of Cincinnati study suggests nursing students often prefer AI chatbot answers to a professor\u2019s \u2014 until they realize the response came from a machine. 
The findings highlight how trust, not just accuracy, may shape the future of AI in higher education.","_links":{"self":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/35843","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/comments?post=35843"}],"version-history":[{"count":6,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/35843\/revisions"}],"predecessor-version":[{"id":35992,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/posts\/35843\/revisions\/35992"}],"wp:attachment":[{"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/media?parent=35843"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/categories?post=35843"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tun.com\/home\/wp-json\/wp\/v2\/tags?post=35843"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}