Publication
SCML 2024
Conference paper

Using Neural Implicit Flow to represent latent dynamics of canonical systems


Abstract

The recently introduced class of architectures known as Neural Operators has emerged as a highly versatile tool for a wide range of tasks in Scientific Machine Learning (SciML), including data representation and forecasting. In this study, we investigate the ability of Neural Implicit Flow (NIF), a recently developed mesh-agnostic neural operator, to represent the latent dynamics of canonical systems such as the Kuramoto-Sivashinsky (KS) and forced Korteweg–de Vries (fKdV) equations and to extract dynamically relevant information from them. Finally, we assess the applicability of NIF as a dimensionality reduction algorithm and conduct a comparative analysis with another widely recognized family of neural operators, Deep Operator Networks (DeepONets).
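
For readers unfamiliar with NIF, the sketch below illustrates the hypernetwork idea it builds on: a ParameterNet maps time to a low-dimensional latent code and generates from it the weights of a small ShapeNet, which evaluates the field at arbitrary spatial coordinates. This is a minimal illustrative sketch; the class name LatentNIF, layer sizes, and activations are assumptions and not the code used in the paper.

# Minimal NIF-style hypernetwork sketch (illustrative, not the paper's implementation)
import torch
import torch.nn as nn


class LatentNIF(nn.Module):
    """ParameterNet: t -> latent code z(t) -> ShapeNet weights.
    ShapeNet: spatial coordinate x (with those weights) -> field value u(x, t)."""

    def __init__(self, latent_dim=2, hidden=64, shape_hidden=32):
        super().__init__()
        self.shape_hidden = shape_hidden
        # ShapeNet parameters for a 1D input: w1 (h), b1 (h), w2 (h), b2 (1)
        self.n_shape_params = 3 * shape_hidden + 1
        # ParameterNet with an explicit latent bottleneck
        self.encode = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(),
            nn.Linear(hidden, latent_dim),          # latent code z(t)
        )
        self.decode = nn.Linear(latent_dim, self.n_shape_params)

    def forward(self, x, t):
        # x, t: tensors of shape (batch, 1)
        z = self.encode(t)                          # latent dynamics
        p = self.decode(z)                          # per-sample ShapeNet weights
        h = self.shape_hidden
        w1, b1 = p[:, :h], p[:, h:2 * h]
        w2, b2 = p[:, 2 * h:3 * h], p[:, 3 * h:]
        a = torch.tanh(w1 * x + b1)                 # ShapeNet hidden layer
        u = (a * w2).sum(dim=1, keepdim=True) + b2  # field value u(x, t)
        return u, z


# Example usage: fit scattered (mesh-agnostic) samples of a 1D field
model = LatentNIF()
x = torch.rand(256, 1)                              # spatial samples
t = torch.rand(256, 1)                              # corresponding times
u_target = torch.sin(2 * torch.pi * (x - t))        # placeholder data, not KS/fKdV
u_pred, z = model(x, t)
loss = nn.functional.mse_loss(u_pred, u_target)

Because the ShapeNet takes coordinates directly, the same trained model can be queried on any point set, and the bottleneck z(t) provides the low-dimensional latent representation whose dynamics the paper studies.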