Few-shot adversarial learning of realistic neural talking head models
A new paper and video demonstration present a technique for rendering animated talking heads from just a few still photos of a subject. According to the summary, “several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them. In order to create a personalized talking head model, these works require training on a large dataset of images of a single person. However, in many practical scenarios, such personalized talking head models need to be learned from a few image views of a person, potentially even a single image. Here, we present a system with such few-shot capability”. “We”, in this case, are Egor Zakharov, Aliaksandra Shysheya, Egor Burkov, and Victor Lempitsky from Samsung AI Lab. Watch a demonstration on YouTube and read the paper on arXiv.org.
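For readers curious what "few-shot" means in practice here, the core idea is to start from a generator already trained on many people and then adapt it to a new person using only a handful of their images. The sketch below is a rough, hypothetical illustration of that adaptation step only, not the authors' actual architecture or code; the toy generator, tensor shapes, and plain reconstruction loss are all assumptions made for brevity (the paper itself uses a far richer model with perceptual and adversarial losses).

```python
# Minimal sketch of few-shot fine-tuning of a pre-trained generator.
# Everything here (TinyGenerator, shapes, L1-only loss) is illustrative,
# not the method from the paper.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for a generator pre-trained on many identities (assumption)."""
    def __init__(self, landmark_dim=68 * 2, image_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(landmark_dim, 256),
            nn.ReLU(),
            nn.Linear(256, image_pixels),
            nn.Tanh(),
        )

    def forward(self, landmarks):
        # Map facial-landmark coordinates to a flattened image.
        return self.net(landmarks)

def few_shot_finetune(generator, landmarks, images, steps=200, lr=1e-4):
    """Adapt a pre-trained generator to one person from a few (landmark, image) pairs."""
    optimizer = torch.optim.Adam(generator.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # simple reconstruction loss; the real system also
                           # uses perceptual and adversarial terms
    for _ in range(steps):
        optimizer.zero_grad()
        generated = generator(landmarks)
        loss = loss_fn(generated, images)
        loss.backward()
        optimizer.step()
    return generator

if __name__ == "__main__":
    # Eight "shots" of one person: toy random landmarks and target images.
    landmarks = torch.randn(8, 68 * 2)
    images = torch.rand(8, 64 * 64 * 3) * 2 - 1  # in [-1, 1] to match Tanh output
    model = TinyGenerator()  # imagine this has already been trained on many people
    few_shot_finetune(model, landmarks, images)
```

The point of the sketch is simply that only a small, quick optimization over a few images of the new subject is needed once the heavy lifting has been done on a large multi-person dataset.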