From Embers to Algorithms: How DigitalPath’s AI is Revolutionizing Wildfire Detection

In a world where wildfires are hotter news than ever, DigitalPath's system architect explains that sometimes, the best way to fight fire is with algorithms.
by Kristen Yee

DigitalPath is igniting change in the Golden State — using computer vision, generative adversarial networks and a network of thousands of cameras to detect signs of fire in real time.

In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with DigitalPath System Architect Ethan Higgins about the company’s role in the ALERTCalifornia initiative, a collaboration between California’s wildfire fighting agency CAL FIRE and the University of California, San Diego.

DigitalPath built computer vision models to process images collected from network cameras — anywhere from 8 million to 16 million a day — intelligently identifying signs of fire like smoke.

“One of the things we realized early on, though, is that it’s not necessarily a problem about just detecting a fire in a picture,” Higgins said. “It’s a process of making a manageable amount of data to handle.”

That’s because, he explained, it’s unlikely that humans will be entirely out of the loop in the detection process for the foreseeable future.

The company uses a range of AI algorithms to classify images by whether they need human review or immediate action; when action is warranted, an alert is sent to a CAL FIRE command center.
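As a rough illustration of that triage step, the sketch below routes each camera frame based on a classifier's smoke confidence score. The thresholds, field names and Action labels are assumptions for illustration only, not DigitalPath's actual pipeline.

```python
# Minimal sketch of the triage flow described above. The model, thresholds
# and routing labels are illustrative assumptions, not DigitalPath's system.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    IGNORE = "ignore"          # no sign of smoke
    HUMAN_REVIEW = "review"    # ambiguous; queue for a human analyst
    ALERT = "alert"            # high confidence; notify the command center


@dataclass
class Detection:
    camera_id: str
    smoke_score: float  # classifier confidence in [0, 1]


def triage(det: Detection,
           review_threshold: float = 0.4,
           alert_threshold: float = 0.9) -> Action:
    """Route a single camera frame based on its smoke score."""
    if det.smoke_score >= alert_threshold:
        return Action.ALERT
    if det.smoke_score >= review_threshold:
        return Action.HUMAN_REVIEW
    return Action.IGNORE


if __name__ == "__main__":
    # Placeholder scores standing in for real model outputs.
    frames = [Detection("cam-001", 0.12),
              Detection("cam-042", 0.55),
              Detection("cam-107", 0.97)]
    for det in frames:
        print(det.camera_id, triage(det).value)
```

The point of a scheme like this is the one Higgins describes: shrinking millions of daily images down to a manageable stream of items that humans actually need to look at.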

One downside of detecting and extinguishing more fires is a greater buildup of natural fuel, which raises the potential for larger wildfires in the long term. DigitalPath and UCSD are exploring the use of high-resolution lidar data to identify where that fuel can be reduced through prescribed burns.

Looking ahead, Higgins foresees the field tapping generative AI to accelerate new simulation tools and using AI models to analyze the output of other models, further improving wildfire prediction and detection.

“AI is not perfect, but when you couple multiple models together, it can get really close,” he said.
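One simple way to picture coupling models is averaging the confidence scores of several independent detectors so that no single model's miss or false alarm decides alone. The function names and stand-in models below are hypothetical, included only to make the idea concrete.

```python
# Illustrative only: combining several detectors' scores by averaging.
from statistics import mean
from typing import Callable, Sequence

# Each "model" here is just a function returning a smoke confidence in [0, 1];
# in practice these would be separately trained networks.
ModelFn = Callable[[bytes], float]


def ensemble_score(image: bytes, models: Sequence[ModelFn]) -> float:
    """Average the confidence of several independent detectors."""
    return mean(m(image) for m in models)


if __name__ == "__main__":
    fake_models = [lambda img: 0.85, lambda img: 0.70, lambda img: 0.92]
    print(f"combined score: {ensemble_score(b'frame-bytes', fake_models):.2f}")
```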

Explore generative AI sessions and experiences at NVIDIA GTC, the global conference on AI and accelerated computing, running March 18-21 in San Jose, Calif., and online.

You Might Also Like

Driver’s Ed: How Waabi Uses AI Simulation to Teach Autonomous Vehicles to Drive

Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience—the road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans

Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast, Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.
