AI-driven misinformation on climate change is a growing threat

AI tools like Bard and ChatGPT have been found to generate and spread climate change misinformation, raising concerns about their potential impact on public opinion.

Stella Levantesi reports for DeSmog.


In short:

  • Studies show that AI tools like Bard and ChatGPT can fabricate climate misinformation, making it harder to distinguish genuine science from fabricated claims.
  • AI-generated misinformation can be spread via synthetic media, social bots, and algorithms that tailor content based on users’ biases.
  • Researchers are developing AI tools to counter misinformation, but they face challenges such as “hallucinations” and the rapid pace of AI advancement.

Key quote:

“... researchers have suggested that AI is being used to emotionally profile audiences to optimize content for political gain.”

— Asheley R. Landrum, associate professor at the Walter Cronkite School of Journalism and Mass Communication and a senior global futures scientist at Arizona State University

Why this matters:

AI-generated climate misinformation threatens to undermine trust in science. Because it can spread rapidly and persuasively, especially on social media, it poses a significant challenge to efforts to counter climate disinformation and risks distorting public policy debates.

Related: Fossil fuel industry spreads misinformation to hinder global shift to renewable energy

About the author(s):

EHN Curators
Articles curated and summarized by the Environmental Health News curation team. Some AI-based tools helped produce this text, with human oversight, fact-checking and editing.
