New Study Reveals How Personalized Algorithms Impair Learning and Skew Reality

New research from The Ohio State University shows that personalized algorithms, prevalent on social media, can narrow users’ focus, reducing how much they learn and leaving them overconfident in incorrect information.

Personalized algorithms, which curate online content based on users’ previous choices on platforms like YouTube, may hinder learning and create distorted perceptions of reality, according to new research from The Ohio State University.

Published in the Journal of Experimental Psychology: General, the study found that when an algorithm controlled the information shown to participants learning about a new topic, their focus narrowed to a limited subset of available information. Consequently, these participants frequently answered questions inaccurately but exhibited high confidence in their incorrect answers.

The findings are concerning, according to Giwon Bahg, who led the study as part of his doctoral dissertation in psychology.

Most previous studies of personalized algorithms have examined how they shape opinions on familiar political or social issues.

“But our study shows that even when you know nothing about a topic, these algorithms can start building biases immediately and can lead to a distorted view of reality,” Bahg, who is now a postdoctoral scholar at Pennsylvania State University, said in a news release.

The findings suggest that many people may readily use the limited information provided by personalized algorithms to form broad, overarching conclusions, according to co-author Brandon Turner, a professor of psychology at Ohio State.

“People miss information when they follow an algorithm, but they think what they do know generalizes to other features and other parts of the environment that they’ve never experienced,” Turner said in the news release.

To illustrate, the study offers a hypothetical scenario: a person unfamiliar with movies from a specific country turns to a streaming service’s algorithmic recommendations. After the viewer’s initial pick of an action-thriller, the algorithm suggests more films of the same genre, and the viewer comes away with a limited, biased understanding of that country’s film industry and broader culture.
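That feedback loop is easy to sketch in code. Below is a minimal Python toy; the catalog, the genre tags, and the “more of the same” weighting rule are all illustrative assumptions, not the study’s design or any real service’s recommender:

```python
import random
from collections import Counter

# Hypothetical, perfectly balanced catalog: each entry is a film's
# genre tag. Genres, sizes, and the weighting rule are assumptions
# for illustration only.
CATALOG = (
    ["action-thriller"] * 20
    + ["drama"] * 20
    + ["comedy"] * 20
    + ["documentary"] * 20
    + ["romance"] * 20
)

def recommend(history, k=10):
    """Naive 'more of the same' recommender: weight every film by how
    often its genre already appears in the viewer's history (plus one,
    so unseen genres remain possible), then sample k suggestions."""
    counts = Counter(history)
    weights = [counts[genre] + 1 for genre in CATALOG]
    return random.choices(CATALOG, weights=weights, k=k)

random.seed(0)
history = ["action-thriller"]   # the viewer's single initial pick
for _ in range(10):             # ten rounds of browse-and-watch
    suggestions = recommend(history)
    # The viewer watches whatever genre dominates the suggestion list.
    history.append(Counter(suggestions).most_common(1)[0][0])

print(Counter(history))
# Typically the history skews heavily toward action-thrillers even
# though the catalog is balanced: the loop amplifies the first pick.
```

Nothing in the loop is adversarial; a single early choice plus frequency weighting is enough to crowd out most of the catalog.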

The researchers tested these effects in an experiment with 346 participants, who were asked to learn about categories of fictional, crystal-like aliens by sampling their features.

Participants were assigned to one of two conditions: in one, features were sampled at random from everything available; in the other, a personalization algorithm chose which features to prioritize.

The results showed that participants who relied on the algorithm sampled a narrower, more selective set of features and were overly confident in their limited, often incorrect, categorizations.
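For a concrete sense of how the two conditions diverge, here is a minimal, hypothetical Python simulation; the feature names, counts, and weighting rule are assumptions made for illustration, not the paper’s actual design:

```python
import random

# Twelve hypothetical features (facets) a learner could inspect on
# each crystal-like alien, and a fixed budget of looks.
FEATURES = [f"feature_{i}" for i in range(12)]
N_SAMPLES = 30

def random_condition(rng):
    """Control condition: every feature is equally likely to be shown."""
    return [rng.choice(FEATURES) for _ in range(N_SAMPLES)]

def personalized_condition(rng):
    """Algorithm condition: features the learner has already inspected
    are heavily prioritized, so early picks crowd out the rest."""
    seen = [rng.choice(FEATURES)]
    for _ in range(N_SAMPLES - 1):
        weights = [seen.count(f) + 0.1 for f in FEATURES]
        seen.append(rng.choices(FEATURES, weights=weights, k=1)[0])
    return seen

rng = random.Random(1)
print("random coverage:      ", len(set(random_condition(rng))), "of", len(FEATURES))
print("personalized coverage:", len(set(personalized_condition(rng))), "of", len(FEATURES))
# The personalized condition typically touches far fewer distinct
# features, mirroring the narrowed focus the study reports.
```

Under these toy settings the random condition usually covers nearly all twelve features, while the personalized condition settles on a handful, which is the pattern the researchers describe.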

“They were even more confident when they were actually incorrect about their choices than when they were correct, which is concerning because they had less knowledge,” added Bahg.

Turner pointed to the potential societal consequences, particularly for young learners.

“If you have a young kid genuinely trying to learn about the world, and they’re interacting with algorithms online that prioritize getting users to consume more content, what is going to happen?” Turner added. “Consuming similar content is often not aligned with learning. This can cause problems for users and ultimately for society.”

Source: The Ohio State University