RamGAN: Region Attentive Morphing GAN for
Region-Level Makeup Transfer
European Conference on Computer Vision (ECCV)
Jianfeng Xiang1,2,3,4 Junliang Chen1,2,3,4 Wenshuang Liu1,2,3,4 Xianxu Hou1,2,3,4 Linlin Shen1,2,3,4
1Computer Vision Institute, School of Computer Science & Software Engineering, Shenzhen University
2Shenzhen Institute of Artificial Intelligence & Robotics for Society
3Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, China
4National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, China
Figure 1: Half-face makeup (left) and step-by-step makeup (right).
In this paper, we propose a region adaptive makeup transfer GAN, called RamGAN, for precise region-level makeup transfer. Unlike face-level transfer methods, RamGAN uses a spatial-aware Region Attentive Morphing Module (RAMM) to encode Region Attentive Matrices (RAMs) for local regions such as the lips, eye shadow and skin. The Region Style Injection Module (RSIM) is then applied to the RAMs produced by RAMM to obtain two Region Makeup Tensors, γ and β, which are added to the feature map of the source image to transfer the makeup. As attention and makeup styles are calculated for each region, RamGAN achieves better-disentangled makeup transfer across facial regions. It also produces better results when there are significant pose and expression variations between the source and reference, owing to its integration of spatial information and region-level correspondence. Experiments on the public MT, M-Wild and Makeup datasets, comprising visual comparisons, quantitative results and a user study, suggest that our approach achieves better transfer results than state-of-the-art methods such as BeautyGAN, BeautyGlow, DMT, CPM and PSGAN.
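The region-level style injection described in the abstract can be sketched as follows. This is a minimal NumPy sketch under assumed shapes (a source feature map of shape (C, H, W), a per-region attention matrix of size HW×HW that morphs reference makeup parameters into the source layout, and binary source-region masks); it is not the authors' implementation, and all function and variable names are hypothetical.

```python
import numpy as np

def region_style_injection(src_feat, ref_gamma, ref_beta, attn, masks):
    """Sketch of region-level makeup injection (hypothetical names/shapes).

    src_feat:  (C, H, W) source feature map.
    ref_gamma: (C, H, W) reference scale parameters.
    ref_beta:  (C, H, W) reference shift parameters.
    attn:      dict mapping region name -> (HW, HW) attention matrix,
               rows indexed by source pixels, columns by reference pixels.
    masks:     dict mapping region name -> (H, W) binary source-region mask.
    """
    C, H, W = src_feat.shape
    gamma = np.zeros((C, H * W))
    beta = np.zeros((C, H * W))
    g = ref_gamma.reshape(C, -1)
    b = ref_beta.reshape(C, -1)
    for region, A in attn.items():
        m = masks[region].reshape(-1)      # restrict to this region's source pixels
        gamma += (g @ A.T) * m             # morph reference scales to source layout
        beta += (b @ A.T) * m              # morph reference shifts to source layout
    gamma = gamma.reshape(C, H, W)
    beta = beta.reshape(C, H, W)
    # Element-wise injection of the morphed makeup tensors into the source features.
    return (1.0 + gamma) * src_feat + beta
```

Because γ and β are accumulated region by region under disjoint masks, each facial region's makeup can be transferred or mixed independently of the others.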
Figure 2: An overview of PSGAN (a) and our proposed RamGAN (b).
Figure 3: Detailed architecture of the proposed RAMM and RSIM.
Figure 4: Visualization of the attention map on the reference image. Given a specific red point in the source image, we calculate the attention values for the pixels of the reference face and visualize the resulting attention map.
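The per-point attention visualized above can be sketched as a dot-product attention over reference pixels. This is a minimal sketch under assumed inputs (feature maps of shape (C, H, W) and a query coordinate); the function name and similarity choice are illustrative, not the paper's exact formulation.

```python
import numpy as np

def point_attention(src_feats, ref_feats, point):
    """Attention of one source-image point over all reference pixels (sketch).

    src_feats: (C, H, W) features of the source face.
    ref_feats: (C, H, W) features of the reference face.
    point:     (y, x) query location in the source image.
    Returns an (H, W) softmax attention map over reference pixels.
    """
    C, H, W = ref_feats.shape
    q = src_feats[:, point[0], point[1]]   # query feature at the chosen point
    k = ref_feats.reshape(C, -1)           # keys: features of all reference pixels
    sim = q @ k                            # dot-product similarity, shape (HW,)
    sim = sim - sim.max()                  # subtract max for numerical stability
    w = np.exp(sim)
    return (w / w.sum()).reshape(H, W)     # softmax-normalized attention map
```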
Figure 5: (a) Source and reference images. (b) Example channels of γ and β. (c) An example of the Region Matching Loss.
Figure 6: Comparison of frontal-face makeup transfer with several state-of-the-art methods.
Figure 7: Makeup transfer between faces with large pose differences.
Figure 8: Step-by-step makeup results.
Figure 9: The performance of RamGAN without RML (4th column) and without RAMM (5th column).
Figure 10: Mixed transfer of different makeup styles. The first row shows different styles; the second row shows the source image, the region-level transfer results (skin, lips and eye shadow), and the mixed result.