
Separating truth from fiction in AI

Did you see the US election images of Swifties for Trump? Hear Sir David Attenborough narrating the moment an ant falls into someone’s fish tank? Or the talk-show clip of Greta Thunberg – yes, her! – talking about vegan grenades? Unbelievable!
Image: Environmental messages can be embedded in AI-generated pictures

Well, absolutely. All the above were generated by AI. Just a bit of harmless fun, right? 

Or not. A moment’s reflection will show that there are serious problems with these new, powerful online content-creation tools. Legal questions of identity theft, copyright and deception aside, it is becoming more and more difficult for people to discern what is real, trustworthy, dependable environmental information and what is not. In the upside-down world of improbable political alignments, media personalities sounding not-quite-themselves, and climate-change activists apparently spouting bizarre nonsense, what counts is engagement, not accuracy. The more outlandish the content, the more successfully it spreads through social media. As a result, research is urgently needed to understand the influence of such generative AI (GenAI) content on how people, particularly environmental decision-makers, think and act in the real world.

Writing in the Nature journal npj Climate Action, landscape ecologist Dr Dan Richards and economist David Worden outline a typology of the ways in which GenAI is currently being used to create and spread climate misinformation.

Three main ways to create misinformation stand out – impersonation of a real, often trusted, individual using text or voice-cloning; creation of complete synthetic identities of people who do not exist but are realistic enough to convince human decision-makers; and the rapid creation of vast volumes of content such as images and videos.

Amplification of this misinformation online occurs at all scales, from the individual to the societal to the global, and is used to promote diverse perspectives, including pro-environmental ones. Not all amplification has malicious intent, but it has the power and the potential to persuade (e.g. that climate change is a hoax), to coerce (e.g. to force votes for candidates with particular environmental views), to cause non-physical or even physical assault (e.g. by targeted harassment or incitement to violence), and to deceive (including to obtain information).

In the technical arms-race against unsanctioned AI use of material, initiatives are well underway to improve image watermarking and to develop methods to post “poisoned” (deliberately falsified) material that then appears in data scraped by AI. The content-related research gaps that need to be filled most urgently, say the researchers, are twofold: first, a systematic analysis of the key actors who are using GenAI to influence real-world climate decision-making; and second, an understanding of how GenAI is contributing to the polarisation and radicalisation of opinions about climate change on both sides. Without such work, it will be increasingly difficult to separate environmental truth from fiction and make sound management decisions.
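To make the watermarking idea a little more concrete, here is a minimal, purely illustrative Python sketch of one classic technique: hiding a short provenance tag in the least-significant bits of an image’s red channel. The tag value and function names here are hypothetical, and the watermarking initiatives mentioned above use far more robust methods that survive cropping, compression and editing; this is only a toy demonstration of the principle.

```python
# A toy least-significant-bit (LSB) watermark, for illustration only.
# Requires Pillow (pip install Pillow).
from PIL import Image

TAG = "AI-GEN"  # hypothetical provenance tag

def embed_tag(src_path: str, dst_path: str) -> None:
    """Write TAG, bit by bit, into the lowest bit of each pixel's red channel."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    # Expand the tag into a flat list of 0/1 bits.
    bits = [int(b) for byte in TAG.encode() for b in f"{byte:08b}"]
    w, h = img.size
    assert len(bits) <= w * h, "image too small for tag"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | bit, g, b)  # overwrite red channel's lowest bit
    img.save(dst_path, "PNG")  # lossless format preserves the hidden bits

def read_tag(path: str, n_chars: int = len(TAG)) -> str:
    """Recover n_chars of the tag from the red channel's lowest bits."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    w, _ = img.size
    bits = [pixels[i % w, i // w][0] & 1 for i in range(n_chars * 8)]
    byte_vals = [int("".join(map(str, bits[j:j + 8])), 2)
                 for j in range(0, len(bits), 8)]
    return bytes(byte_vals).decode()
```

Calling embed_tag("photo.png", "tagged.png") and then read_tag("tagged.png") should round-trip the tag invisibly. Note that saving to a lossy format such as JPEG would destroy the hidden bits, which is one reason the watermarking schemes actually being developed are far more sophisticated than this sketch.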

In the meantime, if you hear Sir David talking about your neighbour’s fish tank online, you can be fairly sure that it is a deepfake built on a stolen voice clone: please do him a favour and don’t pass it on.
