The goal of our work is to complete missing areas of images of talking faces by exploiting information from both the visual and audio modalities. Existing image inpainting methods rely solely on visual content, which does not always provide sufficient information for the task. To address this, we propose a neural network that employs an encoder-decoder architecture with a bimodal fusion mechanism, thus taking both visual and audio content into account. Our proposed method demonstrates consistently superior performance over a baseline visual-only model, achieving, for example, up to a 17% relative improvement in mean absolute error. The presented model is applicable to practical video editing tasks, such as object and overlay-text removal from talking faces, where existing lip- and face-generation methods are not applicable because they require clean input.