Publication
International Journal of Applied Earth Observation and Geoinformation
Review
Self-supervised audiovisual representation learning for remote sensing data
Abstract
Many deep learning approaches make extensive use of backbone networks pretrained on large datasets like ImageNet, which are then fine-tuned for a specific task. In remote sensing, the lack of comparably large annotated datasets and the diversity of sensing platforms impede similar developments. To contribute towards the availability of pretrained backbone networks in remote sensing, we devise a self-supervised approach for pretraining deep neural networks. By exploiting the correspondence between co-located imagery and audio recordings, this is done entirely label-free, without the need for manual annotation. For this purpose, we introduce the SoundingEarth dataset, which consists of co-located aerial imagery and crowd-sourced audio samples from all around the world. Using this dataset, we pretrain ResNet models to map samples from both modalities into a common embedding space, encouraging the models to understand key properties of a scene that influence both its visual and auditory appearance. To validate the usefulness of the proposed approach, we evaluate the transfer learning performance of the resulting pretrained weights against weights obtained through other pretraining strategies. By fine-tuning the models on a number of commonly used remote sensing datasets, we show that our approach outperforms existing pretraining strategies for remote sensing imagery. The dataset, code, and pretrained model weights are available at https://github.com/khdlr/SoundingEarth.
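The core idea of mapping co-located image and audio samples into a common embedding space can be sketched with a contrastive (InfoNCE-style) objective: matching image–audio pairs are pulled together while mismatched pairs within a batch are pushed apart. The abstract does not specify the exact loss used by the paper, so the function below is an illustrative sketch only; the name `info_nce_loss` and the temperature value are assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce_loss(img_emb, aud_emb, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss between two modalities.

    img_emb, aud_emb: arrays of shape (batch, dim), where row i of each
    array comes from the same geographic location (a positive pair).
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    aud = aud_emb / np.linalg.norm(aud_emb, axis=1, keepdims=True)

    # Pairwise similarity matrix; matching pairs sit on the diagonal
    logits = (img @ aud.T) / temperature

    # Numerically stable log-softmax over each row (image -> audio direction)
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Cross-entropy with the diagonal as the correct "class"
    return -np.mean(np.diag(log_probs))

# Sanity check: embeddings that agree across modalities should score a
# lower loss than embeddings paired with unrelated audio.
rng = np.random.default_rng(0)
shared = rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(shared, shared)
loss_random = info_nce_loss(shared, rng.normal(size=(8, 16)))
```

In practice each modality would be encoded by its own ResNet backbone before this loss is applied, and the loss is typically symmetrized by also computing the audio-to-image direction.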