AI: Friend and foe to the environment

For the past few decades, we have lived in the “Digital Age.” Now, we are stepping into a new one—the Age of AI. While artificial intelligence (AI) has existed in primitive forms since the 1950s, it has now advanced to a readily accessible and ubiquitous state. Browser-based programs like ChatGPT are well-known, but they represent only a small portion of the scope, demands, capabilities and consequences of AI.

At its inaugural symposium last September at the University of Utah, the One-U Responsible AI Initiative invited over two hundred attendees, including researchers, university faculty, government officials, and industry leaders, to discuss the role and responsible usage of AI. Three key issues were addressed at the symposium's first panel: the environmental impacts of AI, the dangers of AI-generated and AI-amplified misinformation, and the application of AI to wildfire forecasting, a hazard that also poses challenges for the West's electrical grid.

Resilience and sustainability

William Anderegg, director of the Wilkes Center for Climate Science & Policy.

William Anderegg, director of the U-based Wilkes Center for Climate Science & Policy, is the executive committee member who leads the One-U RAI’s environmental working group. The group’s members bring their diverse expertise to establish ethical policy, explore AI’s impact on society and the environment, and develop responsible methods for using AI to improve climate research.

“The goal of this working group is to put together a vision and a mission about responsibly developing and using AI to address human environmental challenges across scales to promote resilience and foster sustainable development,” said Anderegg. “AI could have an enormous negative impact on the environment itself. There are direct impacts for the cost of running AI—the power and water needed to run the massive data centers, and the greenhouse gas emissions that result. Then there are indirect challenges—misinformation, polarization, and increasing demands on the power grid. At the same time, there is another set of opportunities in using AI to tackle the marginal problems in forecasting and grid rewarding systems.”

The working group’s vision is to utilize AI to bolster our resilience to climate change with collaboration, training, technology, and ethical governance.

“The University of Utah is set to engage in these two focal areas of developing sustainable AI—how do we use AI in a manner that minimizes environmental impact and maximizes long-term sustainability? Then, how do we harness AI for environmental resilience challenges?” Anderegg noted.

AI and misinformation

Isabelle Freiling, communications researcher and faculty affiliate at the U’s Global Change and Sustainability Center.

Misinformation in the age of social media is complicated. AI-generated content makes it even more confusing. Isabelle Freiling, assistant professor of science communications and 2025 RAI Fellow, spoke about how AI could be used to both verify the accuracy of communications and to create and spread misinformation.

“We need to consider the ecosystem in which (mis)information emerges and evolves, which is different from a lot of misinformation research that focuses on clearly false messages,” she said.

“Should we use AI to fact check content? It is often harder to determine the veracity of content because it might contain many different claims. Some of them might be true, some of them might be false.”

Freiling and others are working to better understand misinformation in the real world so they can develop productive pathways for effective research. For example, fact-checking content is well-intentioned, but repeating the false information, even to disavow it, can backfire.

“Repeating something false can make it stick in people’s minds,” Freiling added. She presented a scenario where the recipient of information (e.g., an article reader, a TV watcher, a radio listener, etc.) reads or hears two points—one is correct and one is false.

“You might’ve heard the misinformation twice, and the correct information only once. You will remember what you’ve heard repeated the most, but you might not remember that the information is false,” she continued. “Before we intervene against misinformation, we should think about whether the content is actually harmful to begin with in order to avoid catapulting benign claims into harmful ones through the intervention.”

AI for wildfire forecasting

Derek Mallia, research assistant professor of atmospheric sciences.

Researchers, including Derek Mallia, research assistant professor of atmospheric sciences, have also used AI to forecast wildfires and their hazardous smoke.

“When you think of extreme weather, you think of hurricanes, tornadoes and so on. But one of the biggest causes of mortality is actually poor air quality,” said Mallia. “Wildfires cause a degradation in air quality during the summer, and these effects are becoming more widespread. We’re not just seeing smoke across the western U.S., but also in areas that traditionally don’t see a lot of wildfire smoke—parts of New York, for example.”

It takes a massive amount of data to accurately predict how meteorological systems will behave, Mallia continued. AI could help researchers manage big datasets to more accurately track where and when wildfire smoke will pose a danger to human health.

“These trends are expected to continue as a result of climate change, which is why we are trying to figure out how to use AI and machine learning for some of these forecasting applications,” he said.

MEDIA & PR CONTACTS

  • Lisa Potter, research communications specialist, University of Utah Communications
    949-533-7899