AkiraTOSEI / ML_papers

ML paper summaries (in Japanese)

TinyGAN: Distilling BigGAN for Conditional Image Generation #135


AkiraTOSEI commented 4 years ago

TL;DR

A study that distills BigGAN. Pairs of inputs (latent vector and class label) and outputs (generated images) are collected from BigGAN beforehand and treated as a fixed dataset, so the large teacher model never needs to be held in memory during training. A small student generator is then trained with three objectives: a pixel-wise L1 distance to the teacher's output, a feature-level distance between the discriminator's hidden-layer activations, and the usual adversarial loss. Although performance degrades slightly, the student succeeds in using far fewer parameters than BigGAN.

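To make the three-term objective concrete, here is a minimal PyTorch sketch of the student's loss under the setup described above. All names (`student_G`, `D.features`, the loss weights) are illustrative assumptions, not the paper's actual code.

```python
# A minimal sketch of the TinyGAN-style distillation objective, assuming a
# hypothetical student generator and a discriminator that exposes its
# intermediate feature maps.
import torch
import torch.nn.functional as F

def student_loss(student_G, D, z, class_labels, teacher_images,
                 lambda_pix=1.0, lambda_feat=1.0, lambda_adv=0.1):
    """Three losses on one pre-collected (input, output) pair.

    z, class_labels : latent/class inputs sampled offline from BigGAN
    teacher_images  : BigGAN's outputs for those inputs, stored as a dataset
                      (the teacher network itself is never loaded)
    """
    fake = student_G(z, class_labels)

    # 1) Pixel-wise L1 distance to the teacher's output.
    loss_pix = F.l1_loss(fake, teacher_images)

    # 2) Feature-level distance between D's hidden activations on the
    #    teacher image vs. the student image. D.features() returning a
    #    list of intermediate feature maps is a hypothetical helper.
    feats_real = D.features(teacher_images, class_labels)
    feats_fake = D.features(fake, class_labels)
    loss_feat = sum(F.l1_loss(f, r.detach())
                    for f, r in zip(feats_fake, feats_real))

    # 3) Standard adversarial generator loss (hinge form assumed here).
    loss_adv = -D(fake, class_labels).mean()

    return (lambda_pix * loss_pix
            + lambda_feat * loss_feat
            + lambda_adv * loss_adv)
```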

Why it matters:

Paper URL

https://arxiv.org/abs/2009.13829

Submission Date (yyyy/mm/dd)

2020/09/29

Authors and institutions

Ting-Yun Chang, Chi-Jen Lu

Methods

Results

Comments