Publication
AGU 2024
Poster

Exploring Different Types of Foundation Models on Flood Segmentation Datasets

Abstract

Semantic segmentation AI models play a critical role in tasks requiring pixel-level classification, such as coverage detection or disaster evaluation. However, these models often require large amounts of training data and extensive model training. The latest foundation models can potentially improve data efficiency, achieving better accuracy with less training data. This paper explores various foundation models and training schemes for flood segmentation datasets in scenarios where limited training data is available, including prompt tuning on vision-language models (VLMs) and fine-tuning of models specialized for geographical data. We benchmark the performance of different models under various tuning schemes. Our experimental results show the potential of foundation models to improve accuracy on flood segmentation tasks, especially with restricted data availability. Notably, prompt tuning on VLMs, with a number of learnable parameters similar to linear probing on conventional segmentation models, outperforms conventional models across all tested levels of data availability.
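
To illustrate the parameter-count comparison made in the abstract, the following PyTorch sketch contrasts prompt tuning (a small set of learnable prompt tokens prepended to the input of a frozen VLM text encoder) with linear probing (a 1x1 convolution head on top of frozen segmentation-backbone features). The dimensions, prompt length, and module names are illustrative assumptions, not the configurations actually benchmarked in the poster.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only (not from the poster).
EMBED_DIM = 512      # token embedding width of the frozen VLM text encoder
NUM_PROMPTS = 16     # number of learnable prompt tokens
FEAT_DIM = 2048      # feature width of a conventional segmentation backbone
NUM_CLASSES = 2      # flood / background

class PromptTuner(nn.Module):
    """Prompt tuning: only the prompt embeddings are trained;
    the VLM encoder itself stays frozen."""
    def __init__(self, num_prompts=NUM_PROMPTS, embed_dim=EMBED_DIM):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)

    def forward(self, token_embeddings):
        # token_embeddings: (batch, seq_len, embed_dim) from the frozen embedder
        batch = token_embeddings.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompts, token_embeddings], dim=1)

class LinearProbe(nn.Module):
    """Linear probing: a 1x1 convolution head over frozen backbone
    features, producing per-pixel class logits."""
    def __init__(self, feat_dim=FEAT_DIM, num_classes=NUM_CLASSES):
        super().__init__()
        self.head = nn.Conv2d(feat_dim, num_classes, kernel_size=1)

    def forward(self, features):
        # features: (batch, feat_dim, H, W) from the frozen backbone
        return self.head(features)

def count_params(module):
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

if __name__ == "__main__":
    # Both heads train only a few thousand parameters in this toy setting.
    print("prompt-tuning params:", count_params(PromptTuner()))  # 16 * 512 = 8,192
    print("linear-probe params:", count_params(LinearProbe()))   # 2048 * 2 + 2 = 4,098
```

In both schemes the large pretrained model remains frozen, so the trainable parameter counts stay within the same order of magnitude; the comparison in the poster is between these lightweight adaptation heads, not between full fine-tuning runs.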
