How AI can help (or hinder) the SDGs – State of the Planet

A photo of a robot holding flowers.
Credit: Chris Wang/Unsplash

The United Nations Sustainable Development Goals (SDGs) were established in 2015 to address some of the world’s most pressing issues. The 17 goals comprise 169 targets, including eradicating poverty, reducing inequality and taking climate action, in the service of peace, prosperity and environmental sustainability.

Since 2015, the use of AI has grown exponentially, while progress toward most goals has stalled or even reversed, particularly on poverty, hunger and climate resilience. AI offers new possibilities for addressing some of the SDGs, but it can also amplify the very problems the SDGs were created to address (for example, through the growing energy demands of large data centers). The question is no longer whether AI matters for sustainable development, but how to apply it in ways that reduce costs, expand access, improve decision-making and prevent further inequality.

In this sense, it is important to consider the intersection of AI and the SDGs.

One promising frontier lies in agricultural livelihoods. AI assistants deployed in multiple local languages now answer millions of questions from farmers every year. Three dynamics are responsible for their popularity. First, cost compression: the applicability of the models is expanding, the infrastructure is maturing, and deployment is becoming cheaper. Second, usability: voice and image inputs allow people to participate without typing or a high level of literacy, overcoming a historical barrier to digital participation. Third, contextual applicability: systems can combine a farmer’s question with real-time weather, market prices and local knowledge to provide dynamic, context-specific guidance rather than static advisories. These features directly address the SDGs on food security, decent work and climate resilience.

Information is evolving rapidly with environmental change, so conventional data sets become obsolete very quickly. Most languages and contexts central to the SDGs remain largely underserved in training data. Closing that gap will require multilingual, community-driven “gold” data sets that anchor models in needs on the ground and reduce systematic error for marginalized communities. No less important is deployment design. Most benefits emerge when tools meet real users, not when idealized models are trained in isolation. The practical path is responsible deployment, not perpetual delay in pursuit of perfection.

Governance and infrastructure can determine whether AI narrows or widens socioeconomic gaps, and can help us establish new ethical paradigms, learning from spiritual leaders and Indigenous peoples. Ethical principles and voluntary codes help, but regulatory clarity and reliable funding are what turn principles into practice. Guaranteeing basic digital access (devices, connectivity and computing power) as a civil right would recognize that bandwidth shortages and hardware costs systematically exclude many communities from AI’s potential advantages while leaving them exposed to its harms. Public investment in socially disadvantaged groups is not a handout, but a necessary fix that enables participation in data co-creation, service co-design and governance. Education is essential: as climate challenges and AI destabilize existing social structures, the absence of inclusive digital literacy will widen educational and socioeconomic fault lines.


Community-led AI is both a necessity and a cultural approach to building trust. Models built in one city or risk regime rarely transfer to another; flood warnings, heat-risk maps and effective service targeting require hyperlocal information co-produced with local residents, which also protects those onto whom large corporations outsource health, environmental and social costs. Low-code geospatial building blocks can enable non-experts to combine satellite imagery, sensor feeds and scenario tools, turning passive recipients into co-analysts. Trust grows organically when communities shape the questions, take ownership of parts of the process and see results tied to tangible improvements rather than data-extractive practices. Co-creation and empathy are necessary ingredients for the change we need. This approach aligns with the SDGs on sustainable cities, health and reduced inequalities, while building the civic capacity necessary for long-term adaptation.

And, of course, no assessment is complete without confronting AI’s energy appetite. Training and operating large models requires enormous amounts of energy, and AI’s climate benefits can be offset by additional emissions and strain on the grid. If computing power is the new bottleneck, digital equity collides with energy justice: communities without affordable, reliable energy and high-speed internet cannot build, or even run, models. Improving efficiency, choosing fit-for-purpose models and smart scheduling will help, but the development agenda needs to go further.

And, to repeat a question I was recently asked: Does ethical AI exist? I’m not sure there is an answer. What is needed are local microgrids, clean-energy procurement for data centers, and public policies that prevent computing capacity from concentrating in ways that recreate the resource inequalities of the past. Certainly, the second round of the SDGs must ensure that the AI dividend is not paid for with a climate deficit.

Global institutions are effective at setting standards, but they generally lack binding power. Cities, regional associations and public-private partnerships can act more quickly. What matters is whether contracting generates openness; whether data-sharing agreements protect rights while enabling research; and whether evaluation methods are portable across borders yet responsive to local language, law and culture.

Looking ahead to 2030 and beyond, the choice is not between AI as a solution and AI as a threat. AI will evolve whether we want it to or not. But the decisions we make today will shape AI infrastructure for decades to come. AI can refresh outdated targets with more recent signals, highlight neglected targets and unexpected trade-offs, and enable retrospective analyses that reveal which interventions really have impact. Techniques that expose why something happened, and which parameters drove the change, are modern accountability tools, no longer just technical innovations in AI. But let me be clear: the human component must remain the most important aspect of AI. No online search can replace human judgment, political will and social trust. The most important work remains ethical: ensuring fairness in how AI is funded, governed, steered and measured.

Applied in this spirit, AI can expand what is feasible and knowable for sustainable development, accelerating what has stagnated and illuminating previously unexplored routes. But it will only help the SDGs if it is designed with, for and by the people whose lives it will transform, and if it is powered in a way the world can afford.


This article arises from a side event of the United Nations General Assembly, “Honest discussions at the intersection of AI and the SDGs,” co-organized by Human Intelligence, Technology Room and Compiler, hosted at the Doris Duke Foundation on September 16, 2025.

The views and opinions expressed here are those of the authors and do not necessarily reflect the official position of the Columbia Climate School, the Earth Institute, or Columbia University.

