Jielin Xu, Wenqiang Song, et al.
Alzheimer's and Dementia
We present TerraMind, the first any-to-any generative, multimodal foundation model for Earth observation (EO). Unlike other multimodal models, TerraMind is pretrained on dual-scale representations combining both token-level and pixel-level data across modalities. On a token level, TerraMind encodes high-level contextual information to learn cross-modal relationships, while on a pixel level, TerraMind leverages fine-grained representations to capture critical spatial nuances. We pretrained TerraMind on nine geospatial modalities of a global, large-scale dataset. In this paper, we demonstrate that (i) TerraMind's dual-scale early fusion approach unlocks a range of zero-shot and few-shot applications for Earth observation, (ii) TerraMind introduces "thinking in modalities" (TiM)---the capability of generating additional artificial data during finetuning and inference to improve the model output---and (iii) TerraMind achieves beyond state-of-the-art performance in community-standard benchmarks for EO like PANGAEA. The pretraining dataset, the model weights, and our code will be open-sourced under a permissive license.
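To make the "thinking in modalities" (TiM) idea from the abstract concrete, the sketch below illustrates the two-stage flow it describes: the model first generates an artificial intermediate modality from the observed input, then conditions the final prediction on the original input fused with that generated modality. Everything here is a minimal, hypothetical illustration; the class and function names (TinyBackbone, generate_intermediate, predict_with_tim) are assumptions for this sketch and are not the released TerraMind API.

```python
# Hypothetical sketch of "thinking in modalities" (TiM) inference.
# These names are illustrative only, not the TerraMind API.
import torch
import torch.nn as nn


class TinyBackbone(nn.Module):
    """Stand-in encoder over token embeddings (hypothetical)."""

    def __init__(self, dim=64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, dim)

    def forward(self, tokens):  # tokens: (batch, seq, dim)
        return self.head(self.encoder(tokens))


def generate_intermediate(model, input_tokens):
    """Stage 1: generate an artificial helper modality
    (e.g., land-cover tokens) from the observed input."""
    with torch.no_grad():
        return model(input_tokens)


def predict_with_tim(model, input_tokens):
    """Stage 2: condition the final prediction on the original input
    plus the generated modality (early fusion by concatenation)."""
    helper_tokens = generate_intermediate(model, input_tokens)
    fused = torch.cat([input_tokens, helper_tokens], dim=1)
    return model(fused)


if __name__ == "__main__":
    model = TinyBackbone()
    s2_tokens = torch.randn(1, 16, 64)   # e.g., a tokenized Sentinel-2 patch
    out = predict_with_tim(model, s2_tokens)
    print(out.shape)                      # torch.Size([1, 32, 64])
```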
Paul Gond-Charton, Sebastien Gouin, et al.
ECTC 2023
Anthony Praino, Lloyd Treinish, et al.
AGU 2024
Romeo Kienzler, Johannes Schmude, et al.
Big Data 2023