Climate change increases the probability of extreme weather events, causing, among other hazards, severe floods, landslides, and glacier collapses. Such natural hazards can cause significant casualties, put critical infrastructure at risk of failure, and degrade quality of life. Near real-time mapping of natural hazards is an emerging priority for supporting natural disaster relief, risk management, and governmental policy decisions. Advances in deep learning (DL) have produced a set of tools for earth monitoring. For instance, DL models have shown their advantages for land cover classification and flood segmentation. However, these approaches are typically designed for one specific task in one geographic region, based on specific frequency bands of satellite data. DL models trained to map specific natural hazards therefore struggle to generalize to other types of natural hazards in unseen regions. Consequently, adapting DL models to, for example, unseen natural hazards requires additional labeled data. Data annotation takes a significant amount of time, prolongs model fine-tuning, and generates additional costs. In this work, we propose a methodology that significantly improves the generalizability of DL natural hazard mappers through pre-training on a suitable pre-task. Our approach supports the development of foundation models for earth monitoring with the objective of directly segmenting unseen natural hazards across unseen regions. Our contributions are as follows: First, we demonstrate across four U-Net architectures that our approach significantly improves the generalizability of DL models for the segmentation of unseen natural hazards. Second, we show that our approach generalizes to novel geographic regions and different frequency bands of satellite data without access to data from the target domain.
Third, by leveraging characteristics of publicly available unlabeled images from the target domain, our approach further improves generalization behavior without any fine-tuning.
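One common way to exploit unlabeled target-domain imagery without fine-tuning is to re-standardize inputs using per-band statistics estimated from those images, so that the target data matches the statistics the model saw during pre-training. The sketch below illustrates this idea only; it is a minimal assumption-laden example, not necessarily the method used in this work, and the function names are hypothetical.

```python
import numpy as np

def band_statistics(images):
    """Per-band mean and std over a stack of images shaped (N, H, W, C).

    `images` is assumed to be unlabeled satellite imagery; no labels
    are required to compute these statistics.
    """
    stacked = np.asarray(images, dtype=np.float64)
    mean = stacked.mean(axis=(0, 1, 2))
    std = stacked.std(axis=(0, 1, 2)) + 1e-8  # avoid division by zero
    return mean, std

def normalize_to_source(image, target_stats, source_stats):
    """Shift a target-domain image so its band statistics match those
    of the source (pre-training) domain before running inference."""
    t_mean, t_std = target_stats
    s_mean, s_std = source_stats
    return (image - t_mean) / t_std * s_std + s_mean
```

Because only aggregate image statistics are used, this kind of adaptation can run at inference time on a frozen, source-trained model, which is consistent with the "without fine-tuning" setting described above.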