Publication: IEEE GRSL

Flood Mapping Using Sentinel-1 Images and Lightweight U-Nets Trained on Synthesized Events

Abstract

Floods cause loss of life and multibillion-dollar damage every year. When these events strike, automated remote sensing tools for rapid mapping of the affected areas are critical for planning rescue activities and assessing impact. The most recent flood-mapping techniques are based on semantic segmentation with deep learning, which requires large training datasets with ground truth that are difficult to obtain. To overcome this challenge, we propose an effective method for synthesizing patches of synthetic aperture radar (SAR) images containing open-land flooded areas. Given a bitemporal pair of acquisitions, we replace portions of the land area in the second acquisition with water pixels borrowed from permanent water bodies, with spatial patterns derived from elevation data guiding the process. With this approach, we build a large dataset of pre- and post-event Sentinel-1 VH-polarized radar intensity images to train deep neural networks to map real floods. In a case study, we use an established U-Net architecture and show that a model version with less than 2% of the original number of parameters achieves almost identical flood detection accuracy. This allows for faster processing, which is a clear advantage from an operational perspective. For comparison with reference flood maps from the Copernicus service, we provide empirical segmentation results for four flood cases and obtain F1 scores between 0.80 and 0.90. This confirms that the approach is valuable for mapping flood-affected areas, e.g., during or immediately after catastrophic events, even in areas not seen during model training.
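To make the synthesis step more concrete: the idea is to take a bitemporal VH intensity pair, select low-lying land pixels using the elevation data, and overwrite them in the post-event patch with backscatter values borrowed from permanent water bodies. The sketch below is a minimal NumPy illustration of that idea; the function name, the simple quantile-based "flood the lowest land" heuristic, and the input layout are assumptions for the example, not the authors' implementation.

```python
import numpy as np

def synthesize_flood_patch(pre_vh, post_vh, dem, water_mask,
                           seed_fraction=0.05, rng=None):
    """Create a synthetic flooded version of a bitemporal VH patch.

    pre_vh, post_vh : 2-D arrays of VH backscatter intensity (pre/post event)
    dem             : 2-D array of terrain elevation for the same patch
    water_mask      : boolean mask of permanent water pixels in the patch
                      (the patch is assumed to contain some permanent water)
    seed_fraction   : fraction of the lowest-lying land pixels to flood

    Returns the modified post-event patch and the synthetic flood mask.
    """
    rng = np.random.default_rng() if rng is None else rng
    land = ~water_mask

    # Elevation-guided flood extent: flood the lowest-lying land pixels,
    # so the synthetic water follows plausible terrain patterns.
    level = np.quantile(dem[land], seed_fraction)
    flood_mask = land & (dem <= level)

    # Borrow backscatter values from permanent water bodies so the
    # synthetic flood has realistic SAR water statistics.
    water_values = post_vh[water_mask]
    flooded_post = post_vh.copy()
    flooded_post[flood_mask] = rng.choice(water_values, size=int(flood_mask.sum()))

    return flooded_post, flood_mask
```

Pairs of pre_vh and flooded_post patches, together with the returned flood_mask as the label, can then serve as training samples for the segmentation network.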
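The abstract also reports that a U-Net with less than 2% of the original parameter count reaches almost the same accuracy. One common way to obtain such a reduction is to shrink the channel widths of every encoder and decoder stage; since convolution parameters scale roughly with the product of input and output channels, lowering the base width from 64 to 8 cuts the count by roughly a factor of 64. The PyTorch sketch below illustrates this with a two-channel pre/post VH input; the depth, widths, and layer choices are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """U-Net with uniformly reduced channel widths (illustrative)."""

    def __init__(self, in_ch=2, out_ch=1, base=8):
        super().__init__()

        def block(c_in, c_out):
            # Two 3x3 convolutions per stage, as in the classic U-Net.
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

        self.enc1 = block(in_ch, base)
        self.enc2 = block(base, base * 2)
        self.bott = block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, 1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        # x: (N, 2, H, W) with pre- and post-event VH intensity as channels.
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # logits for the flood / no-flood mask
```

A model instantiated with base=8 has on the order of one to two percent of the parameters of the same architecture with base=64, which is the kind of reduction the abstract describes, at the cost of representational capacity that, per the reported results, barely affects flood detection accuracy.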