New Blueprint Shows Governments How to Govern Responsibly With AI

As governments rush to adopt artificial intelligence, an international team of experts has drafted a practical blueprint to help public agencies use AI to improve services without sacrificing trust, accountability or democratic values.

As artificial intelligence moves deeper into government operations, the experts behind the work are urging public leaders to slow down, plan carefully and put people and public trust at the center of every AI decision.

Their new global policy brief, “Governing with AI: Four Actions to Build a Transformative and Resilient Public Administration in the Age of AI,” lays out a practical blueprint for how governments can use AI to improve services and policymaking without amplifying existing problems or eroding confidence in public institutions.

Around 70% of countries already report using AI to streamline internal government processes, and about a third are using it to help design and implement policy. Some are even exploring AI as a substitute for core government functions. At the same time, more than 80% of AI projects fail, underscoring how difficult it is to move from pilot projects to real-world impact.

The brief, led by Catherine Régis of IVADO and Université de Montréal and Florian Martin-Bariteau of the University of Ottawa, analyzes why AI projects in the public sector succeed or fall short and distills that experience into four concrete courses of action for policymakers.

Régis, director of social innovation and international policy at IVADO and professor of law at Université de Montréal, noted that governments need to balance urgency with caution as they adopt AI.

“Governments deciding to use AI should go slow and steady, while being ambitious from the start. This should not be seen as indecision, but rather as a mark of seriousness and responsibility,” she said in a news release.

The authors argue that the success of AI in government depends less on cutting-edge algorithms and more on the strength of institutions and governance around them: internal capacity, clear accountability, balanced relationships with private vendors and serious planning for resilience when things go wrong.

They recommend four main actions for public administrations:

First, redesign public services around real problems before deploying AI. Instead of starting with a tool and looking for a use, agencies should begin by mapping the actual needs of residents and public servants. The brief urges governments to involve public employees as co-designers, building on proven successes and scaling up what already works. That approach can help ensure AI tools are solving real issues rather than adding complexity or automating flawed processes.

Second, invest in institutional capacity. Many public agencies lack the in-house skills to evaluate AI systems, manage data responsibly or oversee complex technology contracts. The brief calls for training programs and cross-functional teams that bring together technologists, policy experts, legal specialists and front-line staff. Building this internal expertise is key to avoiding overreliance on vendors and to making informed decisions about when and how to use AI.

Third, rebalance power with the private sector. Governments often depend on large technology companies for AI tools and infrastructure, which can leave public agencies with limited bargaining power and little control over how systems are designed or updated. The experts recommend strategies such as collective procurement and collaboration across jurisdictions to create and share AI tools that meet public requirements. That can help ensure public values, not just commercial interests, shape the technology.

Fourth, anchor public-sector AI in a “trust stack” built on transparency, accountability, oversight and resilience. That means making AI systems and their purposes understandable to the public, setting up clear lines of responsibility when things go wrong, and planning for failures or unintended consequences in advance.

Martin-Bariteau, director of the AI + Society Initiative and associate professor of law at the University of Ottawa, emphasized the importance of starting from the ground up.

“Bottom-up, problem-driven planning is the only credible way to transform an administration with AI,” he said in the news release. “Without planning, transparency, accountability and oversight, AI in the public sector will only amplify current dysfunctions and feed distrust from public servants and populations.”

Canada offers a glimpse of both the promise and the complexity of this transition. The federal government recently used an AI platform to translate and summarize more than 11,000 submissions from a public consultation on its AI strategy, and it has proposed an ambitious rollout of AI across the public service. Experiences like this show how AI can help governments handle massive amounts of information, but they also highlight the need for clear rules, safeguards and public engagement.

The new brief is part of the Global Policy Briefs on AI initiative, a joint effort by IVADO, Canada’s leading AI research and knowledge mobilization consortium, and the AI + Society Initiative at the University of Ottawa. The initiative aims to give policymakers rigorous, actionable guidance on how to address major global challenges related to AI.

This is the second brief in the series, following an earlier one focused on protecting democracies in the age of AI. The latest document was developed in December 2025 during a week-long policy retreat that brought together AI experts from North America, South America, Africa, Europe and Asia, reflecting the global nature of the questions at stake.

As more governments experiment with AI to deliver services, manage data and shape policy, the stakes are high. Used well, AI could help public institutions become more responsive, efficient and equitable. Used poorly, it could deepen existing inequalities, obscure decision-making and weaken democratic accountability.

The authors are betting that careful planning, strong institutions and a focus on real-world problems can tip the balance toward the first outcome. Their message to policymakers is that governing with AI is not just a technical challenge, but a chance to rethink how public administrations serve people in a digital age.

Source: University of Ottawa