How AI Chatbots and Interactive Apps Impact User Privacy

A Penn State study reveals that interactive apps and AI chatbots increase user engagement but reduce privacy vigilance, raising concerns about the security of user data.

Interactive mobile apps and artificial intelligence chatbots are perceived as more engaging and playful, which leads users to let their guard down and risk their privacy, according to new research from Penn State. The study explored how app interactivity affects users’ vigilance toward privacy risks during the sign-up process, ultimately influencing their attitudes and willingness to continue using the app.

Published in the journal Behaviour & Information Technology, the study reveals that higher interactivity fosters a sense of playfulness while simultaneously lowering privacy concerns. This is significant in an era where mobile apps and AI chatbots are often designed to be fun and engaging, suggesting a potential vulnerability in user data protection.

“I think, in general, there’s been an increase in the extent to which apps and AI tools pry into user data — ostensibly to better serve users and to personalize information for them,” senior author S. Shyam Sundar, an Evan Pugh University Professor and the James P. Jimirro Professor of Media Effects at Penn State, said in a news release. “In this study, we found that interactivity does not make users pause and think, as we would expect, but rather makes them feel more immersed in the playful aspect of the app and be less concerned about privacy. Companies could exploit this vulnerability to extract private information without users being totally aware of it.”

The researchers conducted an online experiment with 216 participants, who were asked to navigate the sign-up process for a simulated fitness app. The study varied the levels of two types of interactivity: “message interactivity” (ranging from simple Q&A to complex, connected conversations) and “modality interactivity” (such as clicking and zooming on images).
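To make the design concrete, here is a minimal sketch of how such a between-subjects factorial experiment could be set up. The level names, the number of levels, and the assignment scheme are assumptions for illustration only; the paper's actual conditions are not reproduced here.

```python
import itertools
import random

# Hypothetical reconstruction of the two manipulated factors. The study
# varied "message interactivity" and "modality interactivity"; these
# level labels are invented placeholders, not the paper's conditions.
MESSAGE_LEVELS = ["simple_qa", "threaded", "fully_contingent"]
MODALITY_LEVELS = ["static_images", "click_and_zoom"]

# Fully crossing the two factors yields the experimental conditions.
CONDITIONS = list(itertools.product(MESSAGE_LEVELS, MODALITY_LEVELS))

def assign_condition(participant_id: int, seed: int = 42) -> tuple[str, str]:
    """Randomly assign one participant to one condition (between-subjects)."""
    rng = random.Random(seed + participant_id)
    return rng.choice(CONDITIONS)

if __name__ == "__main__":
    # Example: assign the first three participants.
    for pid in range(3):
        print(pid, assign_condition(pid))
```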

The participants rated their experience on a seven-point scale based on statements like “I felt using the app is fun” and “I would be concerned that the information I submitted to the app could be misused.” The analysis showed that interactivity not only boosted perceived playfulness and engagement but also reduced privacy concerns.
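For readers unfamiliar with Likert-style measurement, the sketch below shows how seven-point item ratings are typically averaged into scale scores such as "playfulness" and "privacy concern." The item groupings and response values are invented for illustration; this is not the study's actual data or analysis.

```python
from statistics import mean

# Hypothetical responses to seven-point items like "I felt using the app
# is fun" (playfulness) and "I would be concerned that the information I
# submitted to the app could be misused" (privacy concern).
responses = {
    "playfulness": [6, 7, 5],
    "privacy_concern": [3, 2, 4],
}

def scale_score(items: list[int]) -> float:
    """Average 1-7 item ratings into a single scale score."""
    assert all(1 <= r <= 7 for r in items), "ratings must be on a 1-7 scale"
    return mean(items)

for scale, items in responses.items():
    print(f"{scale}: {scale_score(items):.2f}")
```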

“Nowadays, when users engage with AI agents, there’s a lot of back-and-forth conversation, and because the experience is so engaging, they forget that they need to be vigilant about the information they share with these systems,” added lead author Jiaqi Agnes Bao, an assistant professor of strategic communication at the University of South Dakota who completed the study as a doctoral candidate at Penn State. “We wanted to understand how to better design an interface to make sure users are aware of their information disclosure.”

The study suggests that while user awareness is crucial, app and AI developers can create designs that balance playfulness and privacy. Bao highlighted the potential of combining message and modality interactivity to encourage users to reflect on the information they share.

“We found that if both message interactivity and modality interactivity are designed to operate in tandem, it could cause users to pause and reflect,” Bao added. “So, when a user converses with an AI chatbot, a pop-up button asking the user to rate their experience or leave comments on how to improve their tailored responses can give users a pause to think about the kind of information they share with the system and help the company provide a better customized experience.”
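The design idea Bao describes can be sketched in a few lines: a chat loop that periodically interrupts the conversation with a reflection prompt. The chatbot backend, the `get_bot_reply` function, and the turn threshold below are hypothetical placeholders, not part of the study.

```python
# A minimal sketch, assuming a simple turn-based chat interface: every
# N turns, a "pop-up" style prompt asks the user to rate the experience,
# breaking the conversational flow and nudging reflection on disclosure.
REFLECTION_EVERY_N_TURNS = 5

def get_bot_reply(user_message: str) -> str:
    """Stand-in for a real chatbot call; swap in an actual API here."""
    return f"(bot reply to: {user_message!r})"

def chat_loop() -> None:
    turn = 0
    while True:
        user_message = input("You: ")
        if user_message.lower() in {"quit", "exit"}:
            break
        print("Bot:", get_bot_reply(user_message))
        turn += 1
        # Periodic modality-interactivity break, per the quoted suggestion.
        if turn % REFLECTION_EVERY_N_TURNS == 0:
            rating = input("Quick check: rate your experience 1-7, and "
                           "consider what personal details you've shared: ")
            print(f"Thanks! Recorded rating: {rating}")

if __name__ == "__main__":
    chat_loop()
```

The design choice here mirrors the quote: the interruption serves double duty, giving the company feedback while pulling the user out of the immersive conversational flow long enough to reconsider what they are sharing.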

Co-author Yongnam Jung, a doctoral candidate at Penn State, emphasized the broader responsibility of AI platforms in protecting user privacy.

“It’s not just about notifying users, but about helping them make informed choices, which is the responsible way for building trust between platforms and users,” Jung added.

This research builds on the team’s earlier work and underscores a critical trade-off: while interactivity enhances the user experience and highlights app benefits, it also diverts attention from potential privacy risks.

Sundar, who also serves as the director of Penn State’s Center for Socially Responsible Artificial Intelligence (CSRAI), pointed out that the study challenges the prevailing design assumption that conversation-based interactivity makes users more cognitively alert than modalities like clicking and swiping.

“In reality, conversation-based tools are turning out to be a playful exercise, and we’re seeing this reflected in the larger discourse on generative AI where there are all kinds of stories about people getting so drawn into conversations that they do things that seem illogical,” added Sundar. “They are following the advice of generative AI tools for very high-stakes decision making. In some ways, our study is a cautionary tale for this newer suite of generative AI tools. Perhaps inserting a pop-up or other modality interactivity tools in the middle of a conversation may stem the flow of this mesmerizing, playful interaction and jerk users into awareness now and then.”

Source: The Pennsylvania State University