What Does AI Cost Us Environmentally? Cities Could Demand Answers

12 10 2025 | 16:44 The Energy Mix

Climate-conscious consumers and governments alike want to embrace artificial intelligence, but its true environmental toll remains largely hidden.

We’ve seen the memes and headlines comparing ChatGPT usage to a typical Google search, and the recent Google disclosure that a median Gemini query consumes 0.24 watt-hours of electricity, “the equivalent of running a standard microwave for about one second.” But such figures offer only a limited glimpse into what tech companies know about the energy their tools demand. Despite the chatter, making clear sense of this landscape is a challenge for consumers and resource-constrained local governments aiming to make environmentally sound choices while embracing innovation.
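A rough back-of-the-envelope check, assuming a typical microwave draws somewhere around 900 to 1,000 watts: 0.24 watt-hours works out to about 860 joules, which a 900-watt microwave uses in roughly one second. The comparison holds up as arithmetic, but it describes a single query, not the aggregate demand of billions of them.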

Many cities are rightly excited to explore how AI can improve their operations—from using machine learning to support asset management in Montreal, to cracking down on short-term rental bylaw compliance in London, Ontario. But when AI’s physical infrastructure, like new data centres, hits the ground at the local level, it can create new pressures. Cities must weigh an AI project’s benefits against its environmental costs, as was the case when Dublin rejected a Google data centre proposal over concerns the city lacked enough energy to power it.

Broadly speaking, AI consumes energy and water in two main ways: training large models and generating outputs, or “inference”. Tech firms say, without offering much detail, that training a large language model requires a massive amount of energy, but most of AI’s footprint comes from inference, the everyday use that powers chatbots, search engines, and predictive tools. Inference is estimated to account for 80 to 90% of the computing power needed to support AI.

It’s not hard to see why a clear analysis of AI’s climate impacts is elusive, when every AI model requires a different amount of computing power to train and operate. When OpenAI trained its large language model GPT-3, that work produced “the equivalent of around 500 tonnes of carbon dioxide,” with simpler models producing less, writes Scientific American. The next iteration of that model, GPT-4, required 50 times as much energy to train, according to published sources.

There are too many types of AI to produce a clean and easy metric for understanding energy and water use. Adding to this challenge is an almost total absence of standardized industry disclosure of energy consumption, though efforts are under way to help make sense of what’s available. We know the tech sector is “aware that AI emissions put its climate commitments in danger” and that both Microsoft and Google have seen their net-zero progress derailed, largely due to increased demand from AI. But to date, disclosures from industry have been selective and incomplete.

Even seemingly reliable measurements crumble under scrutiny. Take the claim that a single request to ChatGPT uses approximately 3 watt-hours of energy, around “10 times more than a Google search”. This truism was recently revealed to be less a precise measurement than an offhand comment traced back to a tech executive’s 2023 interview.
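For context, the “10 times” framing appears to lean on the roughly 0.3 watt-hours per search that Google itself reported back in 2009; 3 watt-hours is indeed about ten times that figure, but the comparison stacks a dated estimate on top of an unverified one.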

AI’s energy consumption often leads public discourse, but its water consumption is emerging as another major concern. Recent reporting from Bloomberg exposes the water pressures created by data centre growth in the United States, including that two-thirds of data centres built since 2022 are in places “gripped by high levels of water stress.” By one estimate, AI’s Scope 1 and 2 water withdrawals in 2027 will be equivalent to the annual water needs of Denmark. This pressure on water supplies, driven largely by AI data centres, is igniting community pushback in places as far-ranging as Texas, Chile, Wisconsin, and Alberta, where water resources are already a concern.

We don’t know AI’s water and energy use precisely, but we do know it presents communities with real-world challenges, raising concerns not just about the impact of infrastructure like data centres, but also about how governments can responsibly use AI. The Urban Climate Leadership’s recent work with local government stakeholders has shown us that decision-makers want to harness AI’s potential but struggle with the unknown implications for their own climate goals.

So what can local governments do to ensure that their climate goals and AI initiatives aren’t working at cross-purposes? One option may be to exert pressure on vendors, including large ones like Microsoft, to improve their transparency and disclosure about AI’s energy and water consumption as part of municipal procurement efforts.

Clear procurement standards could compel improved disclosure from the AI industry, which in turn would help decision-makers select the most energy- and water-frugal AI tools for their needs. For example, evidence suggests that slower AI responses save significant energy, but this type of customization is not being offered to users.

Local government leaders are also telling us they need to see AI firms put key principles in place: transparency, accessibility, quality data governance, and consideration of workers’ rights. They also need support from other orders of government to drive intentional, informed choices about AI usage that benefit their employees and communities. Yet the federal government’s recent 30-day sprint, a process designed to shape Canada’s AI future, gives no attention to environmental impacts or transparency, nor to the specific perspectives of local governments.

If Canada’s AI strategy is to be credible and sustainable, it must account for environmental transparency and support for local governments. Only then can AI’s progress align with our collective climate ambitions for a healthier, safer, and more just future.

Cover photo: Data centre screen grab from Google/YouTube
