A new University of Rochester study suggests echo chambers are not inevitable online. Small tweaks that add randomness to feeds could make people more open to differing views.
Scroll long enough on any social media app and your feed starts to feel eerily familiar. Posts echo your opinions, reinforce your politics and rarely push back on what you already believe.
That pattern is not just a quirk of the internet. It is the product of recommendation systems that learn what you like and then keep serving it up, over and over. But a new study from the University of Rochester suggests this echo chamber effect is not inevitable — and that a relatively simple design change could help loosen its grip.
The research, published in the journal IEEE Transactions on Affective Computing, argues that echo chambers are partly a choice built into how platforms are designed, not a fixed feature of online life. By adding a bit more unpredictability into what people see, the team found, social networks could make users more open to different perspectives.
The interdisciplinary group was led by Ehsan Hoque, a professor in the Department of Computer Science, and included collaborators from physics, political science and data science. Their central question: How rigid do people’s beliefs become after using social media, and can changing the way content is recommended make those beliefs more flexible?
To find out, the researchers created simulated social media channels and asked 163 participants to use them. The feeds were designed in two main ways. Some mimicked traditional social media, where algorithms heavily prioritize content similar to what a user has already engaged with. Others built in more variety, introducing posts and connections that users had not explicitly chosen.
The team then measured how participants responded to statements on topics such as climate change after spending time in these different environments. They were especially interested in belief rigidity — how strongly people clung to their initial views after repeated exposure to certain kinds of content.
In everyday platforms, personalization tends to follow a simple logic: if you pause on or like a post, the system shows you more of the same. That can quickly turn into a loop of familiar opinions and one-sided information.
The Rochester team explored what happens when that loop is disrupted by what they call randomness. In this context, randomness does not mean irrelevant or nonsensical content. It means loosening the tight link between past behavior and future recommendations so that users occasionally encounter viewpoints and connections outside their usual bubble.
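To make that idea concrete, here is a minimal sketch, written in Python, of what loosening that link might look like. It is not the study's actual system; the recommend_next function, its similarity argument and the epsilon parameter are illustrative assumptions, with epsilon standing in for how much randomness gets mixed into an otherwise engagement-driven feed.

```python
import random

def recommend_next(candidate_posts, user_profile, similarity, epsilon=0.2):
    """Pick the next post to show a user.

    With probability 1 - epsilon, behave like a typical engagement-driven
    recommender: return the candidate most similar to what the user has
    already interacted with. With probability epsilon, ignore that signal
    and pick a candidate at random, so the user occasionally encounters
    content from outside their usual bubble.

    `similarity(post, user_profile)` is assumed to return a numeric score;
    `epsilon` controls how much randomness is mixed in.
    """
    if random.random() < epsilon:
        # Exploration step: break the loop between past clicks and the
        # next recommendation.
        return random.choice(candidate_posts)
    # Exploitation step: the familiar "more of what you liked" behavior.
    return max(candidate_posts, key=lambda post: similarity(post, user_profile))
```

In recommender-system terms this is an exploration/exploitation trade-off, and the article's argument is that nudging the exploration side up, even modestly, weakens the feedback loop without abandoning personalization.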
The experiments confirmed that repeated exposure matters, according to first author Adiba Mahbub Proma, a computer science doctoral student.
“Across a series of experiments, we find that what people see online does influence their beliefs, often pulling them closer to the views they are repeatedly exposed to,” she said in a news release.
However, the story changed when the recommendation system was adjusted.
“But when algorithms incorporate more randomization, this feedback loop weakens. Users are exposed to a broader range of perspectives and become more open to differing views,” Proma added.
The researchers argue that current recommendation systems can steer people into echo chambers that make divisive or extreme content more appealing. The more a user interacts with a narrow slice of content, the more the system feeds it back, deepening polarization.
As an antidote, they recommend modest design changes that keep personalization but deliberately widen the range of what appears in a feed. That could mean periodically inserting posts from outside a user’s usual network, surfacing less popular but relevant viewpoints or highlighting connections that cut across ideological lines — all without taking away user control.
The findings arrive as governments, tech companies and the public wrestle with online misinformation, declining trust in institutions and sharp divides over elections and public health. While many proposals focus on content moderation or fact-checking, this work points to the underlying architecture of feeds as another powerful lever.
The researchers also see a role for individual users. Proma suggests that people think critically about how comfortable their feeds feel.
“If your feed feels too comfortable, that might be by design,” she said.
Rather than relying entirely on algorithms, she encourages users to take an active role in shaping what they see. Her advice is straightforward.
“Seek out voices that challenge you,” she said. “The most dangerous feeds are not the ones that upset us, but the ones that convince us we are always right.”
The study does not claim that randomness alone can fix polarization or misinformation. But it shows that even small shifts in how content is recommended can weaken the self-reinforcing loops that harden beliefs. That insight could guide future platform designs, regulatory discussions and digital literacy efforts.
Next steps could include testing similar design tweaks on real-world platforms, exploring how different communities respond to increased variety in their feeds and examining how these changes interact with other efforts to promote healthier online discourse.
For now, the work offers a hopeful message: echo chambers are not destiny. With thoughtful design and a bit more randomness, social media could become a place where beliefs evolve instead of merely echoing.
Source: University of Rochester

