Publication details |
Category | Text publication |
Reference type | Journals |
DOI | 10.1175/AIES-D-24-0033.1 |
License |
Titel (primär) | Validating deep-learning weather forecast models on recent high-impact extreme events |
Author | Pasche, O.C.; Wider, J.; Zhang, Z.; Zscheischler, J.; Engelke, S. |
Source | Artificial Intelligence for the Earth Systems (AIES) |
Year of publication | 2025 |
Department | CER |
Volume | 4 |
First page | e240033 |
Language | English |
Topic | T5 Future Landscapes |
Abstract | The forecast accuracy of machine learning (ML) weather prediction models is improving rapidly, leading many to speak of a “second revolution in weather forecasting”. With numerous methods being developed, and limited physical guarantees offered by ML models, there is a critical need for comprehensive evaluation of these emerging techniques. While this need has been partly fulfilled by benchmark datasets, these provide little information on rare and impactful extreme events, or on compound impact metrics, for which model accuracy might degrade due to misrepresented dependencies between variables. To address these issues, we compare ML weather prediction models (GraphCast, PanguWeather, FourCastNet) and ECMWF’s high-resolution forecast (HRES) system in three case studies: the 2021 Pacific Northwest heatwave, the 2023 South Asian humid heatwave, and the 2021 North American winter storm. We find that ML weather prediction models locally achieve accuracy similar to HRES on the record-shattering Pacific Northwest heatwave, but underperform when errors are aggregated over space and time. However, they forecast the compound winter storm substantially better. We also highlight structural differences in how the errors of HRES and the ML models build up toward that event. The ML forecasts lack important variables for a detailed assessment of the health risks of the 2023 humid heatwave. Using a possible substitute variable, we find that the prediction errors show spatial patterns, with the highest danger levels over Bangladesh being underestimated by the ML models. Generally, case-study-driven, impact-centric evaluation can complement existing research, increase public trust, and aid in developing reliable ML weather prediction models. |
Permanent UFZ link | https://www.ufz.de/index.php?en=20939&ufzPublicationIdentifier=30036 |
Pasche, O.C., Wider, J., Zhang, Z., Zscheischler, J., Engelke, S. (2025): Validating deep-learning weather forecast models on recent high-impact extreme events. Artificial Intelligence for the Earth Systems (AIES) 4, e240033. DOI 10.1175/AIES-D-24-0033.1
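The abstract contrasts accuracy at the local peak of an event with accuracy aggregated over the full space–time window of a case study. The following minimal sketch is not taken from the paper; the synthetic temperature fields, array shapes, and metric choices are assumptions, intended only to illustrate how such a local versus space-time-aggregated comparison of a forecast against a verifying analysis could be set up.

```python
# Minimal sketch (assumed setup, not the paper's code): compare a forecast's
# error at the observed event peak with its error aggregated over the whole
# case-study region and period.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2 m temperature fields over a case-study window:
# dimensions (time steps, latitude points, longitude points).
analysis = 300.0 + rng.normal(0.0, 2.0, size=(8, 20, 30))    # verifying analysis
forecast = analysis + rng.normal(0.5, 1.5, size=(8, 20, 30))  # one model's forecast

def local_peak_error(fc: np.ndarray, an: np.ndarray) -> float:
    """Absolute error at the grid point and time step of the observed maximum."""
    idx = np.unravel_index(np.argmax(an), an.shape)
    return float(abs(fc[idx] - an[idx]))

def aggregated_rmse(fc: np.ndarray, an: np.ndarray) -> float:
    """RMSE aggregated over the full space-time window of the case study."""
    return float(np.sqrt(np.mean((fc - an) ** 2)))

print("error at observed peak:       ", local_peak_error(forecast, analysis))
print("space-time aggregated RMSE:   ", aggregated_rmse(forecast, analysis))
```

A model can score well on the first metric while scoring worse on the second, which is the kind of distinction the abstract draws between the models' local accuracy on the heatwave and their performance when aggregated over space and time.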