5 Essential Strategies to Build Trust in AI Technology

CU Boulder researchers unveil a framework for enhancing trust in AI technology, highlighting five critical strategies. These insights aim to foster greater adoption of AI systems.

As autonomous taxis roll out across the country, the University of Colorado Boulder is spearheading research to ensure artificial intelligence systems are trustworthy and widely accepted. Leading this initiative is Amir Behzadan, a professor in the Department of Civil, Environmental and Architectural Engineering, and a fellow in the Institute of Behavioral Science (IBS) at CU Boulder.

Behzadan and his team at the Connected Informatics and Built Environment Research (CIBER) Lab have created a framework for the development of AI tools that benefit people and society.

And in a new paper published in the journal AI and Ethics, Behzadan and his doctoral student Armita Dabiri relied on that framework to design a conceptual AI tool that integrates key elements of trustworthiness.

“As a human, when you make yourself vulnerable to potential harm, assuming others have positive intentions, you’re trusting them,” Behzadan said in a news release. “And now you can bring that concept from human-human relationships to human-technology relationships.”

Understanding Trust in AI

Behzadan explores how human trust can be built into AI systems used in everyday scenarios, including self-driving cars, smart home devices and public transportation apps. Trust is a pivotal factor in the adoption of and reliance on these technologies, he noted.

Historically, trust has been foundational to human cooperation and societal advancement, Behzadan observed. This trust-mistrust dynamic is evident today in our attitudes towards AI, particularly when its development appears opaque or driven by distant corporations or governments.

Five Strategies to Build Trust in AI

The researchers identified five key elements that contribute to making AI tools more trustworthy:

1. User-Centric Design

Trust in AI varies from person to person, shaped by individual experiences, cultural norms and familiarity with technology.

Behzadan emphasized that understanding the characteristics of the target users is crucial for designing trustworthy AI. A voice assistant that adapts its interface for older adults is one example.
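To make this concrete, here is a minimal sketch, not taken from the paper, of how a voice assistant might tune its settings to a user profile. The UserProfile fields, thresholds and setting names are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    age: int
    tech_familiarity: str  # "low", "medium" or "high" -- a hypothetical scale

def adapt_interface(profile: UserProfile) -> dict:
    """Return interface settings tuned to the user's characteristics."""
    settings = {"font_scale": 1.0, "speech_rate": 1.0, "confirm_actions": False}
    if profile.age >= 65:
        settings["font_scale"] = 1.4       # larger text for readability
        settings["speech_rate"] = 0.85     # slower spoken prompts
    if profile.tech_familiarity == "low":
        settings["confirm_actions"] = True # extra confirmations build confidence
    return settings

print(adapt_interface(UserProfile(age=72, tech_familiarity="low")))
```

In practice, profile-driven defaults like these would be paired with user testing and explicit user control over every setting.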

2. Reliability, Ethics and Transparency

A trustworthy AI system must function consistently, ensure user safety, protect privacy and operate transparently.

Behzadan pointed out that incorporating ethical standards and mitigating bias in a system's outputs are vital. Transparency about how data is used and how decisions are made increases user confidence.
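One minimal way to realize that kind of transparency, sketched here as an assumption rather than anything prescribed by the researchers, is to attach a plain-language record of inputs and reasoning to every automated decision. The field names and threshold below are hypothetical.

```python
import json
from datetime import datetime, timezone

def transparent_decision(features: dict, score: float, threshold: float = 0.5) -> dict:
    """Return a decision along with a record of how it was reached."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs_used": sorted(features),  # disclose which data informed the decision
        "model_score": round(score, 3),
        "threshold": threshold,
        "decision": "approved" if score >= threshold else "declined",
    }

# Hypothetical loan-style example; the feature names are made up for illustration.
print(json.dumps(
    transparent_decision({"income": 52000, "tenure_months": 18}, score=0.62),
    indent=2))
```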

3. Context Sensitivity

AI tools must be attuned to the specific context they serve.

In their study, Behzadan and Dabiri proposed an AI-based tool, “PreservAI,” to aid in the collaborative maintenance of historical buildings by considering diverse stakeholder inputs and priorities.
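The paper describes PreservAI at a conceptual level only. As one hedged illustration of "considering diverse stakeholder inputs and priorities," a tool like this could rank preservation options by a weighted sum of stakeholder ratings; the stakeholder groups, weights and options below are invented for the example.

```python
def rank_options(stakeholder_weights: dict, ratings: dict) -> dict:
    """Rank options by the weighted sum of each stakeholder group's ratings."""
    totals = {
        option: sum(stakeholder_weights[group] * score
                    for group, score in by_group.items())
        for option, by_group in ratings.items()
    }
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

weights = {"residents": 0.40, "preservationists": 0.35, "city": 0.25}  # assumed weights
ratings = {
    "restore_facade":    {"residents": 0.7, "preservationists": 0.9, "city": 0.6},
    "retrofit_interior": {"residents": 0.8, "preservationists": 0.5, "city": 0.7},
}
print(rank_options(weights, ratings))  # highest-scoring option first
```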

4. Ease of Use and Feedback Mechanisms

Usability is essential for trust. AI systems should be pleasant to interact with and give users clear channels to report issues and offer feedback.

“Even if you have the most trustworthy system, if you don’t let people interact with it, they are not going to trust it. If very few people have really tested it, you can’t expect an entire society to trust it and use it,” Behzadan explained.
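A feedback mechanism can be as simple as a structured log of user reports that developers triage. The sketch below is an assumption about what such a channel might look like, not a description of any system in the study.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FeedbackLog:
    """Collects user-reported issues so developers can review and act on them."""
    entries: List[dict] = field(default_factory=list)

    def report(self, user_id: str, category: str, message: str) -> int:
        ticket = {"id": len(self.entries) + 1, "user": user_id,
                  "category": category, "message": message, "status": "open"}
        self.entries.append(ticket)
        return ticket["id"]

    def open_issues(self, category: Optional[str] = None) -> List[dict]:
        return [t for t in self.entries
                if t["status"] == "open" and category in (None, t["category"])]

log = FeedbackLog()
log.report("rider_42", "safety", "The vehicle braked abruptly at a crosswalk.")
print(log.open_issues("safety"))
```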

5. Adaptability to Rebuild Trust

Trust in AI can fluctuate. For instance, if a self-driving taxi is involved in an accident, users may lose trust.

However, improving and adapting the technology can help rebuild confidence. Behzadan illustrated this with Microsoft’s chatbot “Zo,” which gained trust through improved design after the failure of its predecessor, “Tay.”
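The dynamic the researchers describe, where trust collapses quickly after a failure and recovers only gradually, can be illustrated with a toy update rule. This model and its constants are illustrative assumptions, not something proposed in the paper.

```python
def update_trust(trust: float, outcome: str) -> float:
    """Toy trust dynamics: a failure halves trust; each success
    recovers only a small fraction of the remaining gap."""
    if outcome == "failure":
        trust *= 0.5                   # one incident erodes trust sharply
    elif outcome == "success":
        trust += 0.05 * (1.0 - trust)  # recovery is slow and bounded at 1.0
    return min(max(trust, 0.0), 1.0)

trust = 0.8
for outcome in ["failure", "success", "success", "success"]:
    trust = update_trust(trust, outcome)
    print(f"after {outcome}: trust = {trust:.2f}")
```

The asymmetry is the point: under these assumed constants, one failure undoes far more trust than a handful of successes can restore, which is why adaptation and redesign matter so much after an incident.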

Moving Forward

While AI systems do entail risks, proper trust-building can foster their development and societal acceptance. When users are willing to engage and share data with AI, these systems become more accurate and useful.

“When people trust AI systems enough to share their data and engage with them meaningfully, those systems can improve significantly, becoming more accurate, fair, and useful,” Behzadan added. “Trust is not just a benefit to the technology; it is a pathway for people to gain more personalized and effective support from AI in return.”

Source: University of Colorado Boulder