Thought Piece · Sustainability & AI
Sustainability and AI: The Three Frames
After two decades working in sustainability, I've learned to be wary of any technology that arrives trailing promises of transformation. And yet here we are, with artificial intelligence demanding our attention in ways that feel genuinely different: the applications are real, the risks are real, and the implications for sustainability professionals are immediate.
As many of you will know, my AI and digital skills are sorely lacking, and around AI and sustainability in particular I was quite confused. The questions I was being asked and the debates I was having always felt disconnected from one another.
I've noticed that we're not all talking about the same thing. When sustainability leaders, colleagues, my wider network, even friends and family discuss AI, they're often looking at it through different lenses. Some are focused on, say, the sustainability impact of data centres. Others are focused on what AI can do for their workload. A smaller group are imagining what AI might mean for solving the problems we face, faster.
To help my understanding, I found the most effective approach was to look at this space in three frames or buckets. Frame 1 is the external impact of AI, Frame 2 is the efficiency AI can bring to roles and departments, and Frame 3 is how AI can be a tool for solving the harder, more systemic problems.
All three frames matter and all three are happening at once, which is why I think it helps to separate them clearly before trying to connect them.
Frame 1: The External ESG Impact of AI Itself
The first frame is the one that should make us most uncomfortable. AI is not a neutral tool. It has a material environmental footprint, and as sustainability professionals, we cannot look away from that simply because the technology is useful to us.
The energy consumption of large language models and the infrastructure behind them is significant and growing. To put this into some context, training GPT-3 consumed approximately 1,287 megawatt hours of electricity and produced around 552 metric tons of CO₂ equivalent, which is comparable to 1,600 return flights from Paris to New York (Patterson et al., 2021). GPT-4's training energy demand is estimated to have been roughly 40 times higher (Ludvigsen, 2023). Data centres currently account for around 1.5% of global electricity consumption, and the IEA projects that figure will nearly double to 945 TWh by 2030, equivalent to Japan's entire current electricity demand (IEA, Energy and AI, 2025). Water use for cooling is another material impact that is only beginning to receive the scrutiny it deserves, with MIT estimating that data centres consume approximately two litres of water per kilowatt hour of energy used (MIT News, 2025).
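For readers who want to sanity-check figures like these, the cited numbers can be combined in a simple back-of-envelope calculation. The sketch below uses only the estimates quoted above; the derived carbon intensity and water figure are illustrative arithmetic, not values from the cited sources.

```python
# Back-of-envelope check on the cited GPT-3 training figures.
# Inputs are the estimates quoted in the text; derived values are
# illustrative only.

TRAINING_ENERGY_MWH = 1_287        # Patterson et al. (2021) estimate
TRAINING_EMISSIONS_T_CO2E = 552    # metric tons CO2-equivalent
WATER_L_PER_KWH = 2                # MIT estimate for data-centre water use

energy_kwh = TRAINING_ENERGY_MWH * 1_000

# Implied grid carbon intensity of the training run (kg CO2e per kWh)
carbon_intensity = (TRAINING_EMISSIONS_T_CO2E * 1_000) / energy_kwh

# Implied cooling-water footprint at the MIT rate (litres)
water_litres = energy_kwh * WATER_L_PER_KWH

print(f"Implied carbon intensity: {carbon_intensity:.2f} kg CO2e/kWh")
print(f"Implied water use: {water_litres:,.0f} litres")
```

The implied intensity, roughly 0.43 kg CO₂e per kWh, is in the range of a fossil-heavy grid mix, which is one reason where and when training runs happen matters as much as how large they are.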
There are also profound social and governance dimensions to contend with. Questions about algorithmic bias, labour practices in the AI supply chain, data privacy, and the concentration of AI capability in a small number of corporate hands are all material ESG risks. For organisations that are building AI governance frameworks or publishing responsible AI commitments, the credibility of those commitments will increasingly be tested by the detail behind them.
None of this is an argument against using AI. It is an argument for approaching it with the same rigour we apply to any other material impact.
Frame 2: AI as an Efficiency Engine for Sustainability Teams
The second frame is closer to home, and for most sustainability professionals, it's the one with the most immediate practical relevance. AI can make sustainability teams dramatically more effective, not by replacing their expertise, but by removing the friction that prevents that expertise from being used well.
The administrative burden on sustainability functions has grown enormously in recent years. Regulatory demands, from the CSRD and ISSB standards to a cascade of national frameworks, have created a disclosure environment that can feel like it consumes every available hour. Meanwhile, internal teams are expected to analyse supplier data, track scope 3 emissions, respond to investor questionnaires, and maintain alignment with standards that are themselves evolving in real time.
This is precisely where AI can deliver. The ability to rapidly analyse sustainability reports, map regulatory gaps, synthesise frameworks, and draft disclosure language represents a genuine shift in what small teams can accomplish. AI doesn't make the judgement calls; the sustainability professional still does. But it dramatically reduces the time spent on tasks that were previously manual and time-consuming.
This is the territory that Summit Sustainability focuses on in our Sustainability and AI practice. Our work, combining deep sustainability expertise with practical AI knowledge, is built on a recognition that the problem isn't access to AI tools; it's knowing which tools apply to which problems, and how to use them without introducing new risks. Our training programmes help sustainability teams build that capability rather than outsource it. Our AI-enhanced support for reporting and disclosures, in partnership with Martin, an AI-powered ESG intelligence platform, addresses the specific bottlenecks where teams are losing the most time: regulatory gap analysis, sustainability report benchmarking, and strategic recommendations grounded in current disclosures.
If AI can absorb a meaningful portion of the reporting burden, sustainability professionals can spend more of their time on what actually requires their expertise: stakeholder engagement, strategic influence, and the harder work of embedding sustainability into how organisations make decisions.
Frame 3: AI as a Tool for Solving Harder Problems
The third frame is the one that requires the most honesty, because it's where the promises are biggest and, from my perspective, the evidence is thinnest, but also where the stakes are highest.
Some of the most important sustainability challenges involve huge complexity: climate modelling, biodiversity monitoring at scale, optimising energy grids for renewable intermittency, predicting crop failures months in advance, and identifying deforestation in satellite imagery faster than enforcement agencies can respond. These are problems where AI should, and will, be a huge asset.
At the systems level, AI's ability to process and integrate vast datasets offers something that sustainability has long struggled with. Supply chain resilience, ecological tipping points, social inequality and climate exposure are not separate problems, and AI tools that can model their interconnections may help us understand which interventions are working, which have negative knock-on effects, and which are not working.
AI will not solve the climate crisis. It will not replace the policy frameworks, the capital reallocation, the behaviour change, and the political will that real progress requires. What it might do, if deployed thoughtfully and governed well, is help us use the tools we already have more effectively, and identify leverage points that were previously invisible.
For sustainability professionals, the practical question is where to start focusing. The most credible applications are in areas where AI is augmenting human expertise with faster, better-evidenced analysis, not replacing it with automated conclusions. That distinction matters enormously and is where we at Summit Sustainability operate.
Holding All Three Frames Simultaneously
The mistake would be to look through only one of these frames. Organisations that focus exclusively on AI's efficiency benefits without interrogating its ESG impact will find themselves exposed to scrutiny, to greenwashing risk, and to the basic inconsistency of using a high-carbon technology to support their sustainability journey without acknowledging the tension. Equally, organisations paralysed by AI's risks will cede ground to competitors who are using it responsibly to do better work faster.
The sustainability professionals best placed to navigate this moment are those who recognise AI's footprint while being genuinely curious about its applications; who can ask hard questions about responsible AI governance while actively building the skills to use AI well.
Summit's focus on helping sustainability teams apply AI where it actually matters reflects a sensible starting point for most organisations: not a wholesale AI transformation, but a deliberate, expertise-led integration that builds capability, protects integrity, and frees up the time that sustainability functions need to do their most important work.
The conversation about AI and sustainability is not going to get simpler. But with the right frames, for me at least, it becomes a great deal clearer.
References
IEA (2025). Energy and AI. International Energy Agency. https://www.iea.org/reports/energy-and-ai
Patterson, D. et al. (2021). Carbon Emissions and Large Neural Network Training. University of California, Berkeley / Google.
Ludvigsen, K.G.A. (2023). The Carbon Footprint of GPT-4. Towards Data Science.
MIT News (2025). Explained: Generative AI’s Environmental Impact. Massachusetts Institute of Technology. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Carbon Brief (2025). AI: Five charts that put data-centre energy use – and emissions – into context. https://www.carbonbrief.org/ai-five-charts-that-put-data-centre-energy-use-and-emissions-into-context/