AI-Generated Content Poses Triple Threat to Reddit Moderators

A new study led by Cornell University spotlights the challenges Reddit moderators face as AI-generated content proliferates, threatening the forum’s authenticity, social value and manageability.

As artificial intelligence continues to evolve, its growing presence on Reddit, a platform known for its human-centric content, is becoming a contentious issue. This disruption is analyzed in a new study led by Cornell University researchers. The study paints a complex picture of how AI-generated content is affecting the very essence of Reddit’s communities.

Lead author Travis Lloyd, a doctoral student in information science, described the concerns of moderators overseeing some of Reddit's most influential boards.

“They were concerned about it on three levels: decreasing content quality, disrupting social dynamics and being difficult to govern,” Lloyd said in a news release. “And to respond to this, they were enacting rules in their communities, which set norms, but they also then had to enforce those rules, which is challenging.”

Threefold Concerns

The study, which explores the complex implications AI content poses for online communities, is being presented at the ACM SIGCHI Conference on Computer-Supported Cooperative Work and Social Computing in Bergen, Norway, Oct. 18-22, where it earned an honorable mention for best paper.

The research highlights three primary concerns identified by moderators:

1. Content Quality: Moderators reported that AI-generated posts often fail to match the depth and substance of human-written content and are marred by errors in style and accuracy.

2. Social Dynamics: AI-generated posts disrupt the meaningful, human-to-human interactions that are the cornerstone of Reddit's community values. Moderators fear this could strain relationships and dilute human connection.

3. Governance Challenges: Enforcing community norms and rules against AI-generated content is an increasing burden for the volunteer moderators.

“It remains a huge question of how they will achieve that goal,” added senior author Mor Naaman, the Don and Mibs Follett professor of information science at Cornell Tech, the Jacobs Technion-Cornell Institute, and the Cornell Ann S. Bowers College of Computing and Information Science. “A lot of it will inevitably go to the moderators, who are in limited supply and are overburdened.”

In a bid to preserve Reddit’s authenticity, these volunteers are crafting and enforcing guidelines to counteract AI’s pervasive influence. However, the sustainability of this approach remains uncertain.

Broader Implications

The spread of AI across user-generated content platforms like Reddit underscores a pressing need for intervention from multiple stakeholders.

“Reddit, the research community, and other platforms need to tackle this challenge, or these online communities will fail under the pressure of AI,” Naaman added, highlighting the broader ramifications of this phenomenon.

The research began in 2023, a year after the launch of ChatGPT, at a pivotal moment when AI-generated content was starting to flood online forums.

The study drew on interviews with 15 moderators of subreddits with explicit rules on AI usage, managing communities ranging from just 10 members to more than 32 million.

Joseph Reagle, an associate professor of communication studies at Northeastern University, co-authored the study.

Source: Cornell University