New Study Reveals How Your AI Queries Impact Carbon Emissions

A recent study by German researchers indicates that AI queries can vary drastically in their carbon emissions, depending on the complexity of the tasks. Discover the crucial balance between accuracy and sustainability.

In an era increasingly reliant on artificial intelligence, a new study by German researchers has revealed a surprising environmental cost: some AI prompts can lead to up to 50 times more CO₂ emissions than others.

Researchers from the Hochschule München University of Applied Sciences measured the carbon emissions of 14 different large language models (LLMs) using a standardized set of 1,000 questions.

Their findings, published in Frontiers in Communication, shed new light on the hidden ecological footprint of AI technologies.

“The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,” first author Maximilian Dauner, a researcher at Hochschule München, said in a news release. “We found that reasoning-enabled models produced up to 50 times more CO₂ emissions than concise response models.”

Understanding the Emissions

The study analyzed models ranging from 7 billion to 72 billion parameters, examining how parameter count influenced carbon emissions. Parameters determine how LLMs learn and process information.

The researchers found that reasoning-based models, which require additional “thinking” tokens to generate responses, greatly increased carbon emissions. On average, these models produced 543.5 tokens per question, whereas concise models required just 37.7 tokens. 
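As a rough sense of scale, the short Python sketch below compares those two reported averages. It is purely illustrative arithmetic based on the numbers above, not code or analysis from the study itself.

```python
# Back-of-envelope comparison of the average token counts reported in the study:
# 543.5 tokens per question for reasoning models vs. 37.7 for concise models.
# Illustrative arithmetic only; not the study's methodology.

reasoning_tokens_per_question = 543.5  # average, reasoning-enabled models
concise_tokens_per_question = 37.7     # average, concise-response models

token_ratio = reasoning_tokens_per_question / concise_tokens_per_question
print(f"Reasoning models generate roughly {token_ratio:.1f}x more tokens per question")
# -> roughly 14.4x more tokens on average
```

Note that this 14x average token gap is not the same as the headline "up to 50 times" emissions figure, which compares the extremes across the models tested.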

“Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,” added Dauner. “None of the models that kept emissions below 500 grams of CO₂ equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly.”

Impact on Different Subjects

The study also highlighted that the complexity of the subject matter affects emissions.

Questions on abstract algebra or philosophy, which necessitate lengthy reasoning processes, were found to generate up to six times more CO₂ emissions than simpler subjects like high school history.

Future Implications

The researchers hope their work will guide users toward making more eco-conscious decisions when using AI.

“Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,” Dauner added.

Choice of AI model also plays a significant role.

For instance, the study found that the DeepSeek R1 model (70 billion parameters) produces CO₂ emissions equivalent to a round-trip flight from London to New York when answering 600,000 questions.

In contrast, the Qwen 2.5 model (72 billion parameters) can answer nearly three times as many questions with the same carbon footprint.
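To translate those figures into per-question terms, here is a rough calculation that is not from the paper: it assumes a round-trip London–New York flight emits on the order of 1,000 kg of CO₂ per passenger, a commonly cited ballpark rather than a number given by the researchers.

```python
# Rough per-question estimate implied by the flight comparison.
# The flight emissions figure is an outside assumption, not from the study.

flight_emissions_kg = 1000.0       # assumed CO2 per passenger, round trip London-New York
questions_deepseek_r1 = 600_000    # questions DeepSeek R1 can answer for that budget (per the study)

grams_per_question = flight_emissions_kg * 1000 / questions_deepseek_r1
print(f"DeepSeek R1 (70B): ~{grams_per_question:.1f} g CO2 per question")  # ~1.7 g

# Qwen 2.5 (72B) reportedly answers nearly three times as many questions for
# the same footprint, i.e. roughly a third of the per-question emissions.
questions_qwen = 3 * questions_deepseek_r1
print(f"Qwen 2.5 (72B): ~{flight_emissions_kg * 1000 / questions_qwen:.1f} g CO2 per question")  # ~0.6 g
```

Under that assumed flight figure, the difference works out to roughly 1.7 grams versus 0.6 grams of CO₂ per answered question.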

However, the researchers acknowledged some limitations, noting that emission factors could vary based on the hardware and energy grids used, possibly affecting the generalizability of their results.

Conclusion

“If users know the exact CO₂ cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies,” Dauner concluded.

The study underscores a growing need for awareness about the environmental impacts of our digital habits and urges a balanced approach to leveraging AI’s powerful capabilities responsibly.

Source: Frontiers