Sustainable AI: Doing AI Without Blowing Up ESG Targets
AI is moving into production just as its energy and water footprint explodes. This piece uses the “green‑in” and “green‑by” AI framing to ask how organisations can get real value from AI without sacrificing climate and water targets.
AI has moved from pilot to production just as its physical footprint is exploding. Recent analyses from the International Energy Agency suggest global data‑center electricity demand could more than double to around 945 TWh by 2030, roughly the current annual electricity consumption of Japan, with AI workloads a primary driver. At the same time, training and serving large models is turning water into a hard constraint: a single hyperscale facility can use hundreds of thousands of gallons of water per day for cooling, with some sites approaching several million gallons, equivalent to the demand of a small city. Training one frontier‑scale model alone has been estimated to consume on the order of hundreds of thousands of litres of freshwater. Against this backdrop, AI is still marketed as a pure value engine.
For ESG, procurement, and technology leaders, the question is unavoidable: is AI‑driven value worth the climate and water impact if we do not change how we build and run it? This piece looks through two lenses: green‑in AI (making AI itself cleaner) and green‑by AI (using AI to hit sustainability goals). Recent work on green artificial intelligence formalises this distinction, using “green‑in” to describe efforts to reduce the footprint of AI systems themselves and “green‑by” for applications that use AI to advance sustainability in other domains.
I write this as someone who sits between AI research and real systems: I studied artificial intelligence, worked on ontologies and knowledge graphs, and now spend a lot of time looking at how AI actually runs inside data centers, financial infrastructures and smart buildings. From that vantage point, it is hard not to notice how quickly AI’s physical footprint is growing, and how slowly our governance and ESG practices are catching up.
Why AI is now a material environmental issue
Energy and carbon
Training and operating large models already consumes substantial electricity, and AI servers are becoming one of the fastest‑growing loads in new data centers; the roughly 945 TWh of data‑center demand projected for 2030 is driven in large part by AI. In 2024, data centers were already responsible for a meaningful share of global power demand, and in markets like the US their share is projected to climb into the mid‑single digits. The impact of AI on emissions depends heavily on grid mix and facility efficiency: siting workloads on carbon‑intensive grids or in inefficient sites locks AI into high‑emissions trajectories, while efficient, renewables‑heavy facilities can materially reduce the footprint per query.
Stepping back, this raises a deeper question that will feel familiar to anyone who has worked with ontologies or knowledge graphs: what kind of “thing” is AI infrastructure in an ESG sense? Is it just another IT line item folded into Scope 2 and 3, or is it a distinct, fast‑growing category of physical infrastructure with its own energy and water dynamics that deserves explicit treatment?
To make this less abstract, a newer AI‑focused hyperscale data center can draw as much power as 100,000 homes, and some large facilities can consume up to 5 million gallons of water per day, equivalent to the needs of a town of 10,000 to 50,000 people. Case studies in hot, arid regions also show that seemingly small design choices around cooling can create major water–energy trade‑offs, turning siting and technology decisions into genuine environmental risk management problems rather than purely technical ones.
Water
Cooling and power generation make AI a water story as much as an energy one. Studies of large‑scale training runs have shown that a single frontier model can evaporate hundreds of thousands of litres of freshwater, while a typical hyperscale data center can use hundreds of thousands to several million gallons a day, depending on technology and climate. Direct on‑site use for cooling combines with off‑site use in electricity generation, and both are increasingly contentious in already water‑stressed regions such as the US Southwest or rapidly growing Asian metros. This is prompting regulatory scrutiny: in the Western US, for example, new laws and proposals now require data centers to disclose expected and actual water consumption as a condition of operation, signalling that tracking and eventually capping water use is firmly on the policy agenda.
Embodied carbon in hardware
The environmental footprint of AI is not only about kilowatt‑hours and cooling towers. Specialised accelerators such as GPUs and TPUs carry significant embodied carbon from mining and refining raw materials through chip fabrication, assembly and logistics. Life‑cycle assessments of AI hardware show that manufacturing can represent a substantial fraction of total cradle‑to‑grave emissions. When fleets are over‑provisioned, under‑utilised or refreshed aggressively, much of this embodied carbon is effectively wasted, because the emissions are amortised over far fewer inferences or training runs than they could be. From an ESG standpoint, that makes hardware‑level efficiency and thoughtful refresh cycles as important as operational energy efficiency.
Regulation and ESG relevance
Regulation is rapidly catching up, making AI’s environmental impact a board‑level concern. In Europe, the recast Energy Efficiency Directive introduces mandatory facility‑level reporting for larger data centers on energy use, performance and increasingly water, with data feeding into national systems and dovetailing with CSRD and the EU Taxonomy for sustainable activities. In parallel, emerging AI‑specific rules and voluntary codes of conduct link model deployment to energy‑efficiency and transparency requirements. Across the Atlantic, state‑level initiatives now obligate data‑center operators to track and certify water use, signalling that North American regulators are heading in the same direction. With AI embedded in everything from customer support to public‑sector programmes, these developments end the era in which AI infrastructure could sit outside ESG risk and reporting frameworks.
“Green‑in AI”: reducing AI’s own footprint
The first lens is green‑in AI: making the AI itself cleaner – models, infrastructure and hardware. In practice, this means treating architecture and deployment decisions as ESG decisions, not just engineering ones.
Model and workload efficiency
The first place to act is at the model and workload level. Many organisations still default to giant frontier models for almost every use case, even when a smaller, fine‑tuned or distilled model would deliver the same or better business outcome. That choice drives up energy use in both training and inference. Right‑sizing is therefore a foundational lever: systematically mapping use cases to the smallest model that meets performance requirements, preferring task‑specific or compact models where possible and reserving frontier models for genuinely frontier tasks.
Compression and optimisation techniques can push this further, for example:
- Quantisation to reduce the precision of weights and activations
- Pruning to remove redundant parameters
- More efficient architectures to cut the number of operations per query
Together, these reduce compute and energy per inference without necessarily sacrificing accuracy. At scale, such reductions compound across millions or billions of calls.
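To make the arithmetic behind quantisation concrete, the sketch below maps float32 weights to int8 with a simple symmetric scale. Real toolchains quantise per tensor or per channel with calibration; this is a minimal illustration of the principle, and all names and values are invented for the example.

```python
# Illustrative symmetric int8 quantisation of a float32 weight vector.
# int8 storage is 1 byte per weight vs 4 for float32: a 4x memory reduction,
# with cheaper integer arithmetic at inference time.

def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    return [max(-128, min(127, round(w / scale))) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in quantized]

weights = [0.82, -1.27, 0.05, 0.0, -0.33]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print("quantised:", q)
print("max round-trip error:", max(abs(a - b) for a, b in zip(weights, restored)))
```

The round-trip error is bounded by half the scale factor, which is why accuracy loss is often small relative to the 4x storage and bandwidth saving.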
In practice, this forces uncomfortable trade‑offs. How much accuracy is enough to justify moving from a compact model to a frontier one? When does “just in case” access to a bigger model stop being a product choice and start being an ESG liability?
Finally, workload optimisation focuses on when and where compute happens. Caching repeated queries, batching asynchronous jobs and scheduling heavy training into time windows and regions with cleaner or cheaper electricity can materially lower both emissions and cost, especially when coordinated with providers that expose granular carbon and water data.
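The scheduling idea can be sketched in a few lines: given a forecast of grid carbon intensity by region and hour, place a deferrable job in the cleanest slot. The regions, hours and intensity figures below are made up for illustration; in practice these would come from a grid-data provider.

```python
# Sketch of carbon-aware scheduling: place a deferrable training job in the
# (region, hour) slot with the lowest forecast grid carbon intensity.
# All figures are illustrative, in gCO2e per kWh.

forecast = {
    ("eu-north", 2):  34,   # hydro/wind-heavy grid, off-peak
    ("eu-north", 14): 52,
    ("us-east",  2): 410,
    ("us-east", 14): 380,
}

def pick_slot(forecast, job_kwh):
    """Return the lowest-carbon slot and the resulting emissions in grams."""
    (region, hour), intensity = min(forecast.items(), key=lambda kv: kv[1])
    return region, hour, intensity * job_kwh

region, hour, grams = pick_slot(forecast, job_kwh=5_000)  # a 5 MWh training run
print(f"Run in {region} at {hour:02d}:00 -> ~{grams / 1e6:.2f} tCO2e")
```

Even this toy forecast shows an order-of-magnitude spread between slots, which is why shifting flexible workloads in time and space is one of the cheapest emissions levers available.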
Infrastructure and data‑center choices
Beyond the model, the infrastructure choices behind AI matter. For carbon‑aware deployment, buyers should prioritise providers that can demonstrate high shares of renewable energy, publish core efficiency and impact metrics such as PUE, WUE and location‑based carbon intensity, and back these with credible net‑zero roadmaps rather than generic green claims. Without this transparency, ESG teams cannot accurately attribute AI’s contribution to Scope 2 and relevant Scope 3 emissions, and procurement cannot compare options on a like‑for‑like basis.
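As a reminder of what these metrics mean mechanically, the sketch below computes PUE (total facility energy over IT energy) and WUE (litres of water per kWh of IT energy), then uses them to attribute energy, emissions and water to a single workload. Every number here is an assumption for illustration, not a figure for any real facility.

```python
# Illustrative PUE/WUE arithmetic for attributing a workload's footprint.
# PUE = total facility energy / IT equipment energy (1.0 is the ideal);
# WUE = litres of water consumed per kWh of IT energy.

def facility_metrics(it_kwh, overhead_kwh, water_litres):
    """Derive PUE and WUE from annual facility totals."""
    pue = (it_kwh + overhead_kwh) / it_kwh
    wue = water_litres / it_kwh
    return pue, wue

def workload_footprint(workload_it_kwh, pue, wue, grid_gco2_per_kwh):
    """Attribute energy, location-based emissions and water to one workload."""
    energy = workload_it_kwh * pue                  # cooling/overhead scales the draw
    co2_kg = energy * grid_gco2_per_kwh / 1000
    water = workload_it_kwh * wue
    return energy, co2_kg, water

pue, wue = facility_metrics(it_kwh=10_000_000, overhead_kwh=2_000_000,
                            water_litres=18_000_000)
energy, co2_kg, water = workload_footprint(50_000, pue, wue, grid_gco2_per_kwh=300)
print(f"PUE={pue:.2f}, WUE={wue:.2f} L/kWh")
print(f"Workload: {energy:,.0f} kWh, {co2_kg:,.0f} kg CO2e, {water:,.0f} L water")
```

The point of the sketch is the dependency chain: without provider-disclosed PUE, WUE and grid intensity, none of these attributions can be made, which is exactly why the transparency requirement above matters.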
Cooling and water strategy are equally material. Different cooling technologies (air, evaporative, direct‑to‑chip liquid or full immersion) have very different energy and water profiles. Some new AI‑optimised data‑center designs aim for zero freshwater for cooling, relying on closed‑loop liquid systems and reclaimed or non‑potable water. As mandatory water reporting expands in the EU and local communities in water‑stressed regions push back against new facilities, water must be treated as a first‑order design constraint, not an afterthought.
Recent analyses also show that combining smarter siting, faster grid decarbonisation and high‑efficiency cooling can cut worst‑case AI data‑center emissions by around 70% and water use by more than 30% compared to business‑as‑usual build‑outs. The gap between those trajectories is exactly where governance, procurement and technical design decisions matter.
Hardware and lifecycle
The third lever is hardware lifecycle management. Maximising utilisation and extending the lifespan of accelerators reduces the effective footprint per unit of compute delivered, because embodied manufacturing emissions are spread over more useful work. Idle capacity and over‑provisioned fleets, by contrast, lock in emissions without corresponding value.
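The amortisation logic can be made concrete with a toy calculation: the same accelerator, with the same embodied carbon, carries a very different footprint per inference depending on how heavily it is used over its life. The embodied figure and throughput below are assumptions for the example, not measured values for any real accelerator.

```python
# Illustrative amortisation of an accelerator's embodied carbon over its
# useful work. Embodied carbon and throughput are assumed, not measured.

def embodied_per_inference(embodied_kgco2e, lifetime_years, utilisation,
                           inferences_per_hour_at_full_load):
    """Grams of embodied CO2e attributed to each inference served."""
    hours = lifetime_years * 365 * 24
    total_inferences = hours * utilisation * inferences_per_hour_at_full_load
    return embodied_kgco2e * 1000 / total_inferences

# Same card, same embodied carbon: only average utilisation differs.
busy = embodied_per_inference(150, 5, 0.60, 10_000)
idle = embodied_per_inference(150, 5, 0.15, 10_000)
print(f"60% utilised: {busy * 1000:.2f} mgCO2e per inference")
print(f"15% utilised: {idle * 1000:.2f} mgCO2e per inference")
```

The over-provisioned fleet's embodied footprint per inference is four times higher, purely because the same manufacturing emissions are spread over a quarter of the work.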
Hyperscale and cloud infrastructure can be more efficient than fragmented on‑prem deployments, thanks to higher average utilisation and more advanced thermal and power management, but only if operators actually deliver and disclose real performance, not just promise it in marketing materials. For ESG and procurement, a simple practical test follows: if a provider cannot tell you the energy, water and emissions implications of your AI workloads in terms that map to your reporting frameworks, that is a red flag.
“Green‑by AI”: using AI to hit sustainability goals
Once the direct footprint is addressed, the second lens is how AI can accelerate progress toward environmental goals, provided benefits are real, measured and not erased by rebound effects.
Asset and infrastructure efficiency
AI is already improving asset performance and extending equipment life. Predictive‑maintenance systems use sensor data and machine‑learning models to anticipate failures, schedule interventions and tune operations. In sectors like renewable energy, such systems have reduced downtime and maintenance costs while increasing energy output, effectively delivering more clean electricity per unit of installed capacity. Applied to industrial equipment, buildings or grid assets, the same approach can reduce unplanned outages, avoid catastrophic failures that require full replacement and cut the energy and materials associated with emergency repairs.
Demand and supply optimisation
Forecasting is another high‑impact domain. AI‑based demand and supply models can reduce overproduction, inventory write‑offs and logistics emissions in industries such as energy, manufacturing and retail. Better forecasts enable plants and fleets to run closer to true demand with fewer buffers and less safety stock, directly lowering waste.
There is also a reflexive benefit for AI infrastructure itself: knowing when large training runs or peak inference loads will occur allows operators to pre‑position renewable power, adjust cooling strategies and coordinate with grid operators, reducing both cost and emissions intensity.
Smarter logistics and resource use
Finally, AI can support more efficient movement and use of resources. Route‑optimisation and load‑planning tools improve vehicle utilisation and reduce fuel consumption, while smart‑building controls continuously adjust lighting, HVAC and water systems to match occupancy and conditions.
These only qualify as green‑by‑AI if they are tied to explicit environmental KPIs and monitored over time, for example:
- Kilowatt‑hours saved
- Tonnes of CO₂e avoided
- Cubic metres of water conserved
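Tracking these KPIs in practice means converting measured savings into reportable units with an explicit, auditable emission factor. The factor and figures below are illustrative; real reporting would use the published factor for the specific grid and year.

```python
# Sketch: convert measured savings from an AI optimisation into the KPI
# units above. The emission factor is an illustrative assumption.

GRID_FACTOR_KG_PER_KWH = 0.35   # assumed location-based factor for the grid

def kpi_report(kwh_saved, litres_saved):
    """Express raw savings in the three KPI units used for reporting."""
    return {
        "kwh_saved": kwh_saved,
        "tco2e_avoided": kwh_saved * GRID_FACTOR_KG_PER_KWH / 1000,
        "m3_water_conserved": litres_saved / 1000,
    }

report = kpi_report(kwh_saved=120_000, litres_saved=45_000)
print(report)
```

Keeping the conversion factor explicit, rather than burying it in a dashboard, is what makes the claimed benefit auditable against the ESG frameworks discussed later.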
The hard part is not just the optimisation itself but deciding what to optimise for. Are we trying to minimise emissions per unit of revenue, per shipment, or in absolute terms across a fleet or portfolio?
Efficiency also brings risk: as processes become cheaper and faster, total activity can increase and offset gains. ESG teams should therefore watch for rebounds, ensuring that efficiency improvements are accompanied by absolute targets or caps where climate and water constraints are binding.
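The rebound arithmetic is simple enough to sketch directly: a per-unit efficiency gain only reduces absolute emissions if total activity grows more slowly than the gain. The figures below are illustrative.

```python
# Sketch of the rebound effect: absolute emissions after a per-unit
# efficiency gain and an activity increase. Figures are illustrative.

def net_emissions(baseline_t, per_unit_cut, activity_growth):
    """Absolute emissions (tCO2e) after efficiency gain and rebound."""
    return baseline_t * (1 - per_unit_cut) * (1 + activity_growth)

baseline = 1_000  # tCO2e per year before the AI optimisation
modest_rebound = net_emissions(baseline, per_unit_cut=0.30, activity_growth=0.20)
strong_rebound = net_emissions(baseline, per_unit_cut=0.30, activity_growth=0.50)
print(f"30% cut, 20% more activity: {modest_rebound:.0f} tCO2e")
print(f"30% cut, 50% more activity: {strong_rebound:.0f} tCO2e")
```

With a 50% activity increase, the same 30% per-unit gain leaves absolute emissions above the baseline, which is exactly why efficiency programmes need absolute targets or caps alongside intensity metrics.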
ESG and procurement: making sustainable AI non‑negotiable
To move from theory to practice, ESG and procurement need to be decision‑makers in AI, not spectators.
On governance, ESG, IT, security and legal should jointly own AI strategy, because AI now affects emissions, water use and broader environmental risk as much as it does revenue. That means explicitly linking AI programmes to Scope 2 and relevant Scope 3 categories in existing greenhouse‑gas inventories, as well as emerging water metrics where regulators and investors are starting to focus.
Procurement is the enforcement arm. Sustainability clauses should become standard in AI and cloud RFPs, covering lifecycle emissions data for hardware and facilities, regular energy and water reporting at a useful level of granularity, reduction roadmaps and independent assurance where feasible. An “AI lifecycle” view, analogous to a software development lifecycle, helps here: environmental performance should be considered at design, training, deployment and retirement, not bolted on post‑hoc.
On reporting and transparency, EU rules now require standardised facility‑level data‑center disclosures that feed into national systems and ESG frameworks. Organisations that cannot trace the energy and water use associated with their AI workloads will increasingly struggle to produce credible ESG reports or meet sustainable‑finance requirements. Emerging state‑level moves, such as US laws mandating data‑center water‑use tracking, show that this is not just a European phenomenon.
Finally, there is brand and stakeholder trust. If customers, employees and communities perceive AI as a black box that drains scarce water or drives emissions without disclosure, reputational risk rises quickly. In the near future, sustainable AI will be a default procurement requirement and a baseline brand expectation, not an optional add‑on. Organisations that learn to ask hard questions, and are willing to walk away from opaque offerings, will be best positioned as regulation and stakeholder scrutiny tighten. In that sense, sustainable AI is not a side constraint on innovation; it is the architecture that will determine which forms of AI can survive in a resource‑constrained world.