A Cornell University study finds that AI writing assistants tend to homogenize language, threatening cultural uniqueness and particularly affecting writers in the Global South.
Artificial intelligence, often hailed for its potential to revolutionize various industries, may carry unintended consequences for linguistic diversity. A new study from Cornell University has revealed that AI-based writing assistants tend to generate generic language, making global users, especially those in the Global South, sound more like Americans.
The study highlighted a significant impact on cultural expression, revealing that while these assistants help users write faster, they also homogenize writing styles.
“This is one of the first studies, if not the first, to show that the use of AI in writing could lead to cultural stereotyping and language homogenization,” senior author Aditya Vashistha, an assistant professor of information science at the Cornell Ann S. Bowers College of Computing and Information Science and faculty lead of Cornell’s Global AI Initiative, said in a news release. “People start writing similarly to others, and that’s not what we want. One of the beautiful things about the world is the diversity that we have.”
The research was presented at the Association for Computing Machinery's Conference on Human Factors in Computing Systems on April 28 in Yokohama, Japan, by first author Dhruv Agarwal, a doctoral student in information science.
The study underscores a critical issue as AI tools like ChatGPT, developed primarily by U.S. tech companies, gain users worldwide, including across the Global South, home to 85% of the world's population.
The research team recruited 118 participants, split evenly between the United States and India, to write about cultural topics. Half of the participants used AI writing assistants that provided autocomplete suggestions, while the other half completed the task independently.
The results showed that Indian participants were more likely to accept the AI’s suggestions, with a 25% acceptance rate compared to 19% for American users. However, Indian users frequently modified these suggestions to better fit their context, reducing the overall productivity boost.
For instance, when tasked with writing about favorite foods or holidays, AI assistants suggested typically American choices like pizza and Christmas, even for Indian users. When Indian participants wrote about public figures, the AI often suggested Western celebrities over local icons. The resulting frustration and constant corrections underscored the tools' lack of cultural sensitivity.
“When Indian users use writing suggestions from an AI model, they start mimicking American writing styles to the point that they start describing their own festivals, their own food, their own cultural artifacts from a Western lens,” said Agarwal.
The researchers argue that this trend is a form of AI colonialism, where Western cultural norms are imposed on non-Western users, potentially altering not just how they write, but also how they think.
“These technologies obviously bring a lot of value into people’s lives,” Agarwal added, “but for that value to be equitable and for these products to do well in these markets, tech companies need to focus on cultural aspects, rather than just language aspects.”
This study is a clarion call for tech companies to account for cultural nuances when developing AI tools to ensure equitable benefits for users worldwide.
Source: Cornell University