ICASSP 2020
Conference paper
Audio-Assisted Image Inpainting for Talking Faces
Abstract
The goal of our work is to complete missing areas of images of talking faces, exploiting information from both the visual and audio modalities. Existing image inpainting methods rely solely on visual content, which does not always provide sufficient information for the task. To counter this, we propose a neural network that employs an encoder-decoder architecture with a bimodal fusion mechanism, thus taking into account both visual and audio content. Our proposed method demonstrates consistently superior performance over a baseline visual-only model, reaching, for example, up to a 17% relative improvement in mean absolute error. The presented model is applicable to practical video editing tasks, such as object and overlay-text removal from talking faces, where existing lip and face generation works are not applicable because they require clean input.
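To make the described architecture concrete, below is a minimal sketch (not the authors' implementation) of an encoder-decoder inpainting network with a bimodal audio-visual fusion step, written in PyTorch. All layer sizes, the 64x64 input resolution, the 128-dimensional audio embedding, and the concatenation-based fusion are illustrative assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class AudioVisualInpainter(nn.Module):
    """Sketch of an encoder-decoder inpainter with audio-visual fusion."""

    def __init__(self, audio_dim=128, fused_channels=256):
        super().__init__()
        # Visual encoder: downsample the masked face image to a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),    # 64x64 -> 32x32
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(inplace=True),
        )
        # Project the audio embedding so it can be broadcast over the feature map.
        self.audio_proj = nn.Linear(audio_dim, 128)
        # Fusion: concatenate audio and visual features along the channel axis,
        # then mix them with a 1x1 convolution.
        self.fuse = nn.Conv2d(128 + 128, fused_channels, kernel_size=1)
        # Decoder: upsample the fused features back to a full-resolution image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(fused_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, masked_image, audio_embedding):
        feat = self.encoder(masked_image)                     # (B, 128, H/4, W/4)
        a = self.audio_proj(audio_embedding)                  # (B, 128)
        # Tile the audio vector spatially so it aligns with the visual features.
        a = a[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
        fused = torch.relu(self.fuse(torch.cat([feat, a], dim=1)))
        return self.decoder(fused)                            # reconstructed image

# Example: inpaint a batch of two 64x64 masked face crops given audio features.
model = AudioVisualInpainter()
images = torch.rand(2, 3, 64, 64)   # masked input frames
audio = torch.rand(2, 128)          # hypothetical per-frame audio embeddings
out = model(images, audio)
print(out.shape)                    # torch.Size([2, 3, 64, 64])
```

In this sketch, fusion is done by spatially tiling the audio embedding and concatenating it with the visual feature map; the paper's actual fusion mechanism may differ, but this illustrates how audio information can condition the decoder's reconstruction of the missing region.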