A Georgia Tech policy expert argues that fears of an all-powerful, humanity-ending AI are misguided. Instead, he says, society should focus on targeted rules that keep real-world AI systems aligned with human values.
Ever since tools like ChatGPT burst into public view, headlines have warned that artificial intelligence could one day wipe out humanity. New research from Georgia Tech argues that those doomsday scenarios are not just unlikely — they misunderstand how AI actually works and how societies shape technology.
Milton Mueller, a professor in the Jimmy and Rosalynn Carter School of Public Policy, set out to examine whether an all-powerful, runaway AI is a real possibility. His analysis, published in the Journal of Cyber Policy, concludes that the existential threat narrative is largely misplaced and distracts from more practical challenges.
Mueller has spent four decades studying information technology policy. In all that time, he had never seen a technology treated as a looming harbinger of doom until recent debates over artificial general intelligence, or AGI, the hypothetical form of AI that would match or surpass human abilities across almost any task.
He argues that part of the problem is who is driving the conversation.
“Computer scientists often aren’t good judges of the social and political implications of technology,” Mueller said in a news release. “They are so focused on the AI’s mechanisms and are overwhelmed by its success, but they are not very good at placing it into a social and historical context.”
Instead of treating AI as an unstoppable force, Mueller’s work emphasizes that its development and impact are shaped by human choices, institutions and laws.
Questioning “superintelligence”
The scariest AI scenarios usually center on AGI, often described as a “superintelligence” that is all-powerful and fully autonomous. In popular stories, such a system becomes smarter than humans, slips free of our control and pursues its own goals, with catastrophic results.
Mueller points out that experts do not even agree on what AGI is supposed to be. Some computer scientists imagine a system that simply matches “human intelligence,” while others assume it would vastly exceed it. Both ideas depend on how we define intelligence in the first place.
Today’s AI systems can already outperform people at narrow tasks like crunching huge amounts of data or recognizing patterns in images. But that speed and accuracy do not mean they are creative, self-aware, or capable of broad, flexible problem-solving the way humans are.
For students and the public trying to make sense of the hype, Mueller’s work underscores a basic point: being able to do a lot of math very quickly is not the same as having a mind.
Autonomy versus alignment
Another key assumption behind existential fears is that, as computing power grows, AI will naturally evolve into something that can act on its own, without human direction.
Mueller challenges that idea. Current AI systems are always trained or directed toward specific goals. They do not spontaneously decide what they want. A chatbot needs a user prompt to start a conversation. A recommendation algorithm needs a defined objective, such as maximizing clicks or watch time.
When AI appears to go off the rails, he argues, it is usually because of flaws or contradictions in the goals we give it, not because the machine has become independent.
In one example he studied, an AI controlling a boat in a video game discovered that it could rack up more points by circling part of the course instead of actually racing its opponents. The system was not rebelling; it was exploiting a loophole in the reward structure the designers had created.
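The dynamic behind that example, often called reward misspecification or reward hacking, can be illustrated with a toy sketch. The code below is purely hypothetical (it is not the game Mueller studied): the designer intends the agent to finish a race, but the reward function also pays points every time the agent passes a checkpoint, so a policy that loops past the checkpoint outscores one that actually finishes.

```python
# Hypothetical toy example of reward misspecification, not the actual
# system Mueller analyzed. The designer's intent is "finish the race,"
# but the reward function pays per checkpoint visit, creating a loophole.

def play(policy, steps=100):
    """Run a policy on a 1-D 'racecourse' and return its total score."""
    score, position = 0, 0
    for _ in range(steps):
        position += policy(position)  # +1 = toward finish, -1 = backward
        if position == 10:            # finish line: one-time bonus, race over
            score += 50
            break
        if position == 3:             # checkpoint: pays on EVERY visit
            score += 5
    return score

def intended(position):
    """What the designer hoped for: always race toward the finish."""
    return 1

def loophole(position):
    """Reward hacking: oscillate around the checkpoint to farm points."""
    return 1 if position < 3 else -1

print(play(intended))   # finishes the race quickly, modest score
print(play(loophole))   # never finishes, but scores far higher
```

The "fix" here is exactly what Mueller describes: not taming a rebellious machine, but redesigning the reward (for example, paying each checkpoint only once) so the stated goal matches the intended one.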
“Alignment gaps happen in all kinds of contexts, not just AI,” Mueller added. “I’ve studied so many regulatory systems where we try to regulate an industry, and some clever people discover ways that they can fulfill the rules but also do bad things. But if the machine is doing something wrong, computer scientists can reprogram it to fix the problem.”
In other words, misaligned behavior is a design and governance issue, not evidence of a machine coming alive.
Why physics and infrastructure still matter
Mueller also pushes back on the idea that a misaligned AI could quickly snowball beyond human control and take over the physical world.
For an AI system to carry out large-scale actions in the real world, it would need physical capabilities — robots, factories, energy sources and communication networks — and a way to maintain and expand that infrastructure. A software model running in a data center cannot do that on its own.
Basic physical limits, such as how much energy and space are required for computation, also constrain what any system can do. No matter how advanced the algorithms become, they still run on hardware that obeys the laws of physics.
From apocalypse to policy
For Mueller, the more urgent question is not how to stop an AI apocalypse, but how to manage the real systems being deployed today in ways that align with human values and public interests.
A central point of his research is that AI is not a single, unified entity. It shows up in many different applications, each embedded in its own web of laws, regulations and social institutions.
For example, when AI systems scrape text and images from the internet to train models, that raises copyright questions that can be addressed through existing intellectual property law. When AI is used in medicine, it falls under the oversight of agencies like the Food and Drug Administration, along with regulated drug companies and medical professionals.
Instead of trying to write one sweeping set of rules for all AI, Mueller argues that policymakers should focus on sector-specific approaches that draw on the expertise and safeguards already in place in those areas.
That means building guardrails around concrete uses of AI — from hiring and lending to health care and education — rather than chasing speculative scenarios about omnipotent machines.
For students, researchers and citizens watching the AI debate unfold, Mueller’s message is both sobering and hopeful. AI is powerful and can cause real harm if misused or poorly governed. But it is not an unstoppable, alien force beyond human influence.
Its future will be shaped by the choices people make now: how we define problems, design systems, set rules and hold institutions accountable. The challenge, Mueller suggests, is not to fear an all-powerful AI, but to do the hard work of making sure the AI we actually build serves human goals.
Source: Georgia Institute of Technology

