Official PyTorch implementation of U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
conv-adaILN-conv-adaILN: why is there no activation function layer in between? #36
I noticed that there is no activation function layer where the blocks are connected, so the layers run as conv--adaILN--conv--adaILN.
Why was it designed this way? Is it to achieve better performance?
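For context, here is a minimal PyTorch sketch (not the repository's exact code) of a ResNet-style block built from two conv--adaILN pairs. In this common layout the residual sum at the end of a block is returned without a nonlinearity, so when blocks are stacked the output of one block feeds straight into the next block's first convolution, which produces the conv-adaILN ... conv-adaILN pattern at block junctions described above. `AdaILNStub` and `ResBlockSketch` are hypothetical names for illustration, and the stub only mimics the (x, gamma, beta) call interface of adaILN rather than implementing the paper's adaptive layer-instance normalization.

```python
import torch
import torch.nn as nn


class AdaILNStub(nn.Module):
    """Placeholder norm layer with the (x, gamma, beta) interface of adaILN.

    This is an assumption-for-illustration stand-in (plain instance norm),
    not the paper's adaptive layer-instance normalization.
    """
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, eps=eps, affine=False)

    def forward(self, x, gamma, beta):
        # gamma/beta would come from the generator's fully connected layers, shape (N, C)
        return self.norm(x) * gamma.unsqueeze(2).unsqueeze(3) + beta.unsqueeze(2).unsqueeze(3)


class ResBlockSketch(nn.Module):
    """conv -> AdaILN -> ReLU -> conv -> AdaILN, then add the input (no final activation)."""
    def __init__(self, dim):
        super().__init__()
        self.conv1 = nn.Conv2d(dim, dim, kernel_size=3, padding=1, bias=False)
        self.norm1 = AdaILNStub(dim)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(dim, dim, kernel_size=3, padding=1, bias=False)
        self.norm2 = AdaILNStub(dim)

    def forward(self, x, gamma, beta):
        out = self.relu(self.norm1(self.conv1(x), gamma, beta))
        out = self.norm2(self.conv2(out), gamma, beta)
        # Residual sum is returned with no nonlinearity, so a following block's
        # conv sees this output directly: ... conv-adaILN-(+skip)-conv-adaILN ...
        return out + x


if __name__ == "__main__":
    block = ResBlockSketch(dim=8)
    x = torch.randn(2, 8, 16, 16)
    gamma, beta = torch.ones(2, 8), torch.zeros(2, 8)
    print(block(x, gamma, beta).shape)  # torch.Size([2, 8, 16, 16])
```

Leaving the residual output un-activated before it enters the next block is the standard post-ResNet convention (an extra ReLU after the skip addition tends to hurt residual flow), so the observed layout is likely a design choice inherited from that convention rather than something specific to U-GAT-IT.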