Official TensorFlow implementation of U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation (ICLR 2020)
As far as I can tell, the full UGATIT model performs relatively well. However, at up to 1 GB in size, it is not convenient to adopt. I tried the light version, but it performs quite badly: many of the generated anime images are visibly broken. Could the author give some advice on how to make the UGATIT model efficient while keeping SOTA quality? Thank you.