Tech companies paint a rosy picture of how AI can combat climate change. But its threats to progress in the climate fight are real and immediate.

The budding technology has the potential to make disinformation faster, cheaper, and more viral than ever. And in two years, AI data centers may consume as much energy annually as Japan, all while slurping up billions of liters of fresh water, often in areas already facing shortages.

These are some of the dangers we highlight in a new report, The AI Threats to Climate Change, published alongside Kairos, Global Action Plan, Greenpeace, and Friends of the Earth as part of the Climate Action Against Disinformation coalition.

Barring a policy intervention, AI is set to massively contribute to the climate crisis through its energy use — while simultaneously supercharging believable falsehoods from people profiting off climate disinformation.

“We are already seeing how Generative AI is being weaponized to spin up climate disinformation or copy legitimate news sites to siphon off advertising revenue,” said Sarah Kay Wiley, director of policy at Check My Ads.

AI is a disinformation doozy

Fossil fuel companies and climate deniers are no strangers to disinformation. For decades, they’ve engaged in campaigns to muddy the public’s understanding of the climate crisis and tip the scales of support away from potential solutions.

Now, thanks to AI, it’s easier, cheaper, and faster than ever to whip up realistic lies.

Companies have embraced AI, rushing to train their large language models (LLMs) and rapidly integrating AI technology into social media algorithms, search results, and advertising systems. In the process, they’ve created a monster.

AI-based social media algorithms have been found to prioritize inflammatory content like climate denial. LLMs like ChatGPT can be intentionally trained to misinform the public, and their AI-generated answers can be plagiarized or entirely fabricated. Disinformers can use AI to micro-target ads toward highly specific and vulnerable groups — all while monetizing their misleading content through programmatic ad revenue. And AI-written content has been pushing legitimate websites (which it often plagiarizes) down in Google rankings.

The issue has gotten so serious that, after surveying about 1,500 experts across several sectors, the World Economic Forum identified AI-generated mis- and disinformation as the biggest short-term risk to the world.

Are lawmakers or companies doing anything about this?

The United States hasn’t passed any comprehensive regulation on AI, although some states have introduced legislation to regulate AI-generated nonconsensual pornography and disinformation, and the Biden-Harris administration has taken a stab at AI safety through an executive order.

An ocean away, the European Union is working to introduce the AI Act this year. The act will establish risk categories for AI harms, set labeling requirements, and impose steep fines.

Big tech companies themselves have also announced voluntary commitments around AI — but there are no mechanisms in the U.S. to enforce them.

Our report makes it clear: more must be done to rein in AI’s disruptive climate impact.

“Adtech companies are woefully unprepared to deal with Generative AI, and the opaque nature of the digital advertising industry means advertisers are not in control of where their ad dollars are going,” Wiley said. “Regulation is needed to help build transparency and accountability to ensure advertisers are able to decide whether to support AI-generated content.”

That’s why we lay out recommendations for government, tech companies, and regulators to take concrete action on these issues. AI development needs to keep three principles front of mind: transparency, safety, and accountability.

You can read the full report, and all of its recommendations, below or at this link.