The knowledge graph completion task has attracted considerable attention in recent years, especially through the use of machine learning (ML). However, most work has focused on the structure of the graph while ignoring the data it describes. In this demo, we present an approach that does the opposite: it leverages the multimodal data described by a knowledge graph to complete it. We use IBM's Hyperlinked Knowledge Graph framework, in which graph nodes can carry arbitrary data content. This content is processed at query time by user-defined functions that are triggered by rules; their output is used to decide whether to materialize new links, completing the original graph. To demonstrate the approach, we use ML models to classify images of paintings and decide the materialization of links describing their semantics.
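The mechanism described above can be sketched as follows. This is a minimal illustrative toy, not the Hyperlinked Knowledge Graph API: all names (`Node`, `Graph`, `classify_genre`, `materialize`) are assumptions, and the classifier is a stand-in for a real ML model.

```python
# Sketch of rule-triggered link materialization over a node's data content.
# All names here are illustrative assumptions, not the actual framework API.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    type: str
    content: bytes = b""  # arbitrary data carried by the node (e.g. an image)

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: set = field(default_factory=set)  # (subject, predicate, object)

def classify_genre(image_bytes):
    # Stand-in for an ML image classifier; a real system would run a model here.
    return "portrait" if b"face" in image_bytes else "landscape"

def materialize(graph, rule_type, predicate, udf):
    """At query time, apply `udf` to each node matching the rule and
    materialize a new link from the node to the function's output."""
    for node in graph.nodes.values():
        if node.type == rule_type:
            label = udf(node.content)
            graph.edges.add((node.id, predicate, label))
    return graph

g = Graph()
g.nodes["p1"] = Node("p1", "Painting", b"...face...")
materialize(g, "Painting", "depicts", classify_genre)
print(g.edges)  # {('p1', 'depicts', 'portrait')}
```

Here the rule (match nodes of type `Painting`) and the user-defined function (`classify_genre`) together decide which semantic links are added, completing the graph from its multimodal content rather than from its structure.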