Deep reinforcement learning for optimal firebreak placement in forest fire prevention

2026-03-04

Lucas Murray, Tatiana Castillo, Isaac Martín de Diego, Richard Weber, José Ramón González-Olabarria, Jordi García-Gonzalo, Andrés Weintraub, Jaime Carrasco-Barra, "Deep reinforcement learning for optimal firebreak placement in forest fire prevention," Applied Soft Computing, Volume 175, 2025, Article 113043, ISSN 1568-4946, https://doi.org/10.1016/j.asoc.2025.113043 (https://www.sciencedirect.com/science/article/pii/S1568494625003540)
Abstract: Large wildfires, increasing in both frequency and intensity, have become a significant natural hazard, requiring the development of advanced decision-support tools for resilient landscape design. Existing methods, such as Mixed Integer Programming and Stochastic Optimization, while effective, are computationally demanding. In this study, we introduce a novel Deep Reinforcement Learning (DRL) methodology to optimize the strategic placement of firebreaks across diverse landscapes. We employ Deep Q-Learning, Double Deep Q-Learning, and Dueling Double Deep Q-Learning, integrated with the Cell2Fire fire spread simulator and Convolutional Neural Networks. Our DRL agent successfully learns optimal firebreak locations, outperforming heuristics, especially after incorporating a pre-training loop. The improvement ranges from 1.59% to 1.7% over the heuristic, depending on instance size, and from 4.79% to 6.81% over a random solution. Our results highlight the potential of DRL for fire prevention, showing convergence with favorable results on instances as large as 40 × 40 cells. This study represents a pioneering application of reinforcement learning to fire prevention and landscape management.
Keywords: Artificial Intelligence; Fire prevention; Reinforcement learning; Wildfire management; Wildfire-resilient landscapes
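The abstract's Dueling Double Deep Q-Learning setup treats each landscape cell as a candidate firebreak action. As a minimal sketch of the dueling idea only (not the paper's implementation: the network, simulator coupling, and function names here are illustrative assumptions), the dueling architecture decomposes the Q-value into a state value and per-action advantages, and the agent then picks the grid cell with the highest Q-value:

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    return value + advantages - advantages.mean()

def greedy_firebreak_cell(q_values, grid_shape):
    """Pick the grid cell (row, col) with the highest Q-value."""
    idx = int(np.argmax(q_values))
    return divmod(idx, grid_shape[1])

# Toy 2 x 3 landscape: one advantage per candidate firebreak cell.
# In the paper these would come from a CNN over the landscape raster.
V = 1.0
A = np.array([0.0, 0.5, -0.5, 1.0, 0.2, -0.2])
Q = dueling_q_values(V, A)
cell = greedy_firebreak_cell(Q, (2, 3))  # best cell is row 1, col 0
```

Subtracting the mean advantage keeps the decomposition identifiable (the mean of Q over actions equals V), which is the standard dueling-network trick; the actual training loop would update V and A from fire-loss rewards returned by the Cell2Fire simulator.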