New Study Unveils Similarities Between Human and AI Learning Mechanisms

A new study from Brown University reveals parallels between human and artificial intelligence learning processes, potentially revolutionizing AI development and enhancing our understanding of human cognition.

Researchers at Brown have uncovered striking similarities between how humans and artificial intelligence systems learn, providing fresh insights into human cognition and paving the way for more intuitive AI tools.

The research, led by postdoctoral research associate Jake Russin, demonstrates that humans and AI systems combine two distinct modes of learning in strikingly similar ways.

Published in the Proceedings of the National Academy of Sciences, the study focused on how flexible, in-context learning interacts with slower, incremental learning, and found that this interplay unfolds in similar ways in artificial neural networks and in the human brain.

“These results help explain why a human looks like a rule-based learner in some circumstances and an incremental learner in others,” Russin said in a news release. “They also suggest something about what the newest AI systems have in common with the human brain.”

Russin’s work is interdisciplinary, bridging machine learning and computational neuroscience. He holds a joint appointment in the laboratories of Michael Frank, a professor of cognitive and psychological sciences, and Ellie Pavlick, an associate professor of computer science.

Human learning typically occurs in one of two ways. For straightforward tasks, such as picking up the rules of a game, people rely on quick, in-context learning. More complex skills, such as playing a musical instrument, are acquired gradually through incremental learning.

While it was known that humans and AI could integrate both learning forms, the exact mechanism was unclear. Russin developed a theory that draws parallels between these learning types and the human brain’s working memory and long-term memory.

To test this theory, Russin employed “meta-learning,” a method where AI systems learn about learning itself. He discovered that AI’s ability to perform in-context learning improved after extensive incremental learning experience.
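The study itself trained neural networks, but the core idea of meta-learning, that experience across many tasks makes learning a new task faster, can be loosely illustrated with a toy statistical sketch. Everything below (the task distribution, the numbers, the shrinkage estimator) is invented for illustration and is not the study's actual method:

```python
# Hypothetical toy: "learning to learn" as estimating a shared prior
# across many tasks. This is a simplification for illustration only;
# the study in question used trained neural networks.
import random

random.seed(0)
TASK_MEAN, TASK_SPREAD, NOISE = 0.6, 0.1, 0.5  # invented parameters

def sample_task():
    # Each task has its own hidden quantity to estimate.
    return random.gauss(TASK_MEAN, TASK_SPREAD)

# Phase 1: incremental experience over many training tasks lets the
# learner estimate the mean shared across the whole task population.
train_obs = [random.gauss(sample_task(), NOISE) for _ in range(2000)]
learned_prior = sum(train_obs) / len(train_obs)

# Phase 2: on a brand-new task with a single noisy observation, the
# experienced learner shrinks its guess toward the learned prior;
# the naive learner just trusts the one observation it has.
def meta_estimate(x, prior, tau=TASK_SPREAD, sigma=NOISE):
    w = (1 / sigma**2) / (1 / sigma**2 + 1 / tau**2)
    return w * x + (1 - w) * prior

naive_err = meta_err = 0.0
for _ in range(500):
    mu = sample_task()                      # new, unseen task
    x = random.gauss(mu, NOISE)             # one in-context observation
    naive_err += (x - mu) ** 2
    meta_err += (meta_estimate(x, learned_prior) - mu) ** 2

print(meta_err < naive_err)  # prior experience makes one-shot guesses better
```

The point of the sketch is only the qualitative pattern the article describes: after accumulating incremental experience across many tasks, the learner's rapid, single-observation guesses on a new task become markedly more accurate.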

One pivotal experiment adapted from human studies involved testing the AI’s capacity for in-context learning by recombining familiar ideas to address new situations. After being exposed to 12,000 similar tasks, the AI system successfully identified new combinations it hadn’t encountered before, such as recognizing a “green giraffe.”
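What such a "compositional split" looks like can be sketched in a few lines. This toy is invented for illustration (the colors, animals, and the trivial "learner" are not from the study, which used neural networks): the learner is trained on every color-animal pairing except one, and can still recognize the held-out combination because it stores the parts rather than the whole pairs:

```python
# Hypothetical toy illustration of a compositional train/test split,
# in the spirit of the "green giraffe" test; all names are invented.
from itertools import product

colors = ["red", "blue", "green"]
animals = ["dog", "cat", "giraffe"]

# Train on every color-animal pair EXCEPT the held-out combination.
held_out = ("green", "giraffe")
train_pairs = [p for p in product(colors, animals) if p != held_out]

# A "compositional learner": it only ever stores the parts it has
# seen, so it can recombine familiar parts at test time.
seen_colors = {c for c, _ in train_pairs}
seen_animals = {a for _, a in train_pairs}

def recognizes(color: str, animal: str) -> bool:
    # A pair is recognizable if both parts are familiar, even when
    # this exact combination never appeared during training.
    return color in seen_colors and animal in seen_animals

print(recognizes(*held_out))            # True: novel pair, familiar parts
print(held_out in train_pairs)          # False: never seen in training
```

The trained networks in the study face the same structural challenge at far greater scale: succeeding on the held-out combination requires recombining known components rather than memorizing whole examples.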

The study found that both humans and AI improve their ability to learn quickly and flexibly after a period of incremental learning.

“At the first board game, it takes you a while to figure out how to play,” added Pavlick. “By the time you learn your hundredth board game, you can pick up the rules of play quickly, even if you’ve never seen that particular game before.”

The researchers also noted a trade-off between retention and flexibility: the harder an AI system worked to learn a task, the better it remembered how to perform it later. According to Frank, this mirrors human learning, where making errors helps solidify information in long-term memory.

Frank, who specializes in computational models to understand human learning, emphasized the broader implications.

“Our results hold reliably across multiple tasks and bring together disparate aspects of human learning that neuroscientists hadn’t grouped together until now,” he said.

The insights from this study are also critical for developing trustworthy AI tools, especially in sensitive fields like mental health.

“To have helpful and trustworthy AI assistants, human and AI cognition need to be aware of how each works and the extent that they are different and the same,” Pavlick added. “These findings are a great first step.”

Source: Brown University