Flood Event Detection from Sentinel-1 and Sentinel-2 Data: Does Land Use Matter for the Performance of U-Net-Based Flood Segmenters?
Abstract
Floods are among the costliest weather hazards for societies and businesses worldwide. With ongoing global warming, these events have become even more frequent and more devastating. Accurate flood mapping has therefore become critical for disaster relief, risk management and mitigation. Current flood segmentation methods use either threshold-based approaches or deep-learning schemes, e.g. based on the U-Net architecture, to differentiate between water-covered areas and dry land in Earth observation images. Many schemes exploit imagery from synthetic aperture radar (e.g. the Sentinel-1 satellites) or optical bands of satellites such as Sentinel-2, but often restrict themselves to one or very few modalities, i.e. spectral wavelengths, despite the availability of many more wavelengths and pre-processed indices with potential value to the problem. In support of operationalizing flood segmentation on a global scale using deep learning, we propose semantic flood segmentation that can optionally exploit many different modalities (multimodal flood segmentation), making the approach largely immune to geographic differences across the globe. Using U-Net at the core of our work, we observe very good generalisation of our segmentation model to unseen flood events in our holdout set, at the level of a 0.95 F1 score (0.92 IoU) over both the no-water and water classes, and a 0.53 F1 score (0.43 IoU) for the water class alone.
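For reference, the reported scores can be read against the usual pixel-wise definitions of IoU and F1; the formulas below are the conventional per-class definitions and are stated here under the assumption that the paper follows this standard usage.

\[
\mathrm{IoU}_c = \frac{TP_c}{TP_c + FP_c + FN_c},
\qquad
F1_c = \frac{2\,TP_c}{2\,TP_c + FP_c + FN_c},
\]

where $TP_c$, $FP_c$ and $FN_c$ denote the pixel-wise true positives, false positives and false negatives for class $c$ (water or no water).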