A major international study finds that trust in AI depends less on technical performance and more on whether people feel connected, included and supported when using it. The results point to a future where human-centered design and governance are key to building trustworthy AI.
Trust in artificial intelligence is not just about how well the technology works. It is also about whether people feel connected, supported and included when they use it.
A new global study led by researchers at Tampere University in Finland finds that positive attitudes toward AI and a sense of relatedness when using technology are the strongest and most consistent predictors of trust in AI systems and the companies that build them.
The team took a socio-psychological approach, looking beyond code and algorithms to basic human needs. They examined how feelings of relatedness, autonomy and competence, along with attitudes toward AI and confidence in using it, shape people’s willingness to trust AI.
Across 12 countries on six continents, one pattern stood out: people who felt more positive about AI and more socially connected when using technology were more likely to trust both AI tools and AI-driven companies. In contrast, feeling technically skilled, in control or highly self-confident with AI mattered only in some national contexts.
First author Anica Cvetkovic, a doctoral researcher at Tampere University, emphasized how deeply AI is now woven into everyday life.
“As AI systems increasingly mediate how people work, communicate and access information, trust is no longer just about whether a technology functions correctly,” she said in a news release.
The study drew on survey data collected in 2024 from 11,259 participants, making it one of the most wide-ranging examinations to date of how trust in AI develops around the world. The findings, published in the journal Behaviour & Information Technology, highlight that trust is as much a social and psychological issue as a technical one.
Cvetkovic and her colleagues did not just ask about AI in the abstract. They also examined trust in major technology companies, including social media platforms that rely heavily on AI. This dual focus shows how closely public views of corporate behavior and AI technology are now intertwined.
When people think about AI, they are often also thinking about the companies that design and deploy it, from recommendation engines on social media to automated systems that shape what news, entertainment or services they see. If those interactions feel fair, inclusive and respectful, trust can grow. If they feel opaque, manipulative or disempowering, trust can erode quickly.
By including participants from regions with very different levels of digital infrastructure and distinct cultural norms, the research also sheds light on global inequalities in AI development and governance. People’s everyday experiences with technology — whether they feel empowered or excluded — appear to play a crucial role in shaping trust.
Atte Oksanen, a professor of social psychology at Tampere University and one of the lead researchers, stressed how high the stakes have become.
“Trust in artificial intelligence, and particularly in the companies developing these systems, is becoming increasingly important,” he said in the news release.
Oksanen pointed out that AI is now central to work, communication and access to key services, and that geopolitical shifts have raised strategic questions for regions like Europe.
“AI now influences how we work, communicate and access essential services. Recent changes in global politics have also underlined the need for Europe to develop strong and reliable alternatives of its own. Ensuring trustworthy and transparent development is therefore not only a technological priority, but also a strategic one for our societies,” he said.
The results support a human-centered and culturally sensitive approach to AI design and regulation. Simply making systems more accurate or training users to be more technically capable is not enough if people still feel disconnected, powerless or ignored when they interact with AI.
Instead, the researchers argue, designers and policymakers need to focus on how AI systems support people’s basic psychological needs: Do users feel respected and heard? Do they understand what the system is doing and why? Do they feel that AI is working with them, not just on them?
These questions are especially pressing for students and young professionals, who increasingly encounter AI in classrooms, job applications, health apps and social platforms. If those experiences feel inclusive and supportive, they can build confidence and trust. If not, they may deepen skepticism and resistance.
Oksanen underlined that trust must rest on more than speed and accuracy.
“If AI is to be accepted as part of everyday life and public institutions, trust must be built on more than efficiency. Understanding how people relate to AI – and to the companies that develop it – is essential for the legitimacy of AI-driven societies,” he said.
Looking ahead, the findings suggest several directions for action. For developers, that means involving diverse users early in the design process, explaining clearly how systems work and giving people meaningful choices about how AI affects them. For policymakers, it means crafting regulations that promote transparency, accountability and fairness while recognizing that cultural context matters.
For universities and educators, the study highlights the importance of teaching not just technical skills, but also critical understanding of how AI shapes social life — and how to advocate for systems that are inclusive and trustworthy.
As AI continues to spread into nearly every sector, this research offers a clear message: building trust is not only a matter of better technology. It is about recognizing and respecting the human beings on the other side of the screen.
Source: Tampere University