bes-dev / MobileStyleGAN.pytorch

An official implementation of MobileStyleGAN in PyTorch
Apache License 2.0

About Pixel-Level Distillation Loss #25

Open zhongtao93 opened 3 years ago

zhongtao93 commented 3 years ago

Have you tried using gt['rgb'] instead of gt['img'] to distill the student network? Or is gt['rgb'] useless?

https://github.com/bes-dev/MobileStyleGAN.pytorch/blob/2d18a80bed6be3ec0eec703cc9be50616f2401ee/core/loss/distiller_loss.py#L35

bes-dev commented 3 years ago

@zhongtao93 so, gt["rgb"] contain partial sums of the gt["img"], as we don't use aggregation of intermediate predictions like in StyleGAN2, it isn't correct to use gt["rgb"] here

zhongtao93 commented 3 years ago

I want to use MobileStyleGAN to blend anime and real-face models, like StyleGAN in toonify, but I found this property becomes weaker, especially when I reduce the channels of the model. The property: 1) style codes in the lower layers control coarse attributes like facial shape, 2) middle-layer codes control more localized facial features, 3) higher-layer codes correspond to fine details such as reflectance and texture.
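
For reference, a toonify-style blend is usually done by swapping the higher-resolution generator weights of a fine-tuned model into a base model. Below is a minimal sketch assuming ordinary PyTorch state dicts; the checkpoint names and key patterns are placeholders, not MobileStyleGAN's actual layout:

```python
import copy
import torch

def blend_generators(base_sd: dict, finetuned_sd: dict, swap_keys: list) -> dict:
    # Hypothetical sketch of toonify-style layer swapping: start from the base
    # generator's weights, then overwrite the parameters listed in swap_keys
    # (e.g. the high-resolution synthesis blocks) with the fine-tuned (anime)
    # generator's weights. Lower layers keep the base model's coarse structure;
    # higher layers take the fine-tuned model's texture.
    blended = copy.deepcopy(base_sd)
    for k in swap_keys:
        blended[k] = finetuned_sd[k].clone()
    return blended

# Example usage (checkpoint paths and key pattern are assumptions):
# base_sd = torch.load("ffhq_student.ckpt")["state_dict"]
# anime_sd = torch.load("anime_student.ckpt")["state_dict"]
# hi_res_keys = [k for k in base_sd if any(f"blocks.{i}." in k for i in range(4, 9))]
# blended_sd = blend_generators(base_sd, anime_sd, hi_res_keys)
```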

bes-dev commented 3 years ago

@zhongtao93 I haven't tried the toonify pipeline on top of MobileStyleGAN, but if you have experimental results, it would be great if you could share them.