heykeetae / Self-Attention-GAN

Pytorch implementation of Self-Attention Generative Adversarial Networks (SAGAN)

Confused by self-attention layer positioning in Discriminator #61

Open AceVenturos opened 3 years ago

AceVenturos commented 3 years ago

Hi,

I'm a bit confused: many SAGAN implementations, generally based on this code base, apply the self-attention (SA) layers at the later, low-resolution stages of the Discriminator, i.e. in the same relative positions as in the Generator, towards the end of the network.

However, the original paper only says that the SA layers are placed where the feature maps reach certain sizes (e.g. 64x64). Shouldn't the Discriminator's SA layers therefore be positioned towards the start of the network, where the feature map sizes match those at which the SA layers are placed in the Generator?
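To make what I mean concrete, here is a minimal sketch of the paper-style placement I have in mind: attention applied early in the Discriminator, while the feature map is still large (32x32 here). This is not this repo's exact code; the SelfAttn block and the channel counts are just illustrative assumptions.

```python
import torch
import torch.nn as nn

class SelfAttn(nn.Module):
    """Simplified self-attention block operating on (B, C, H, W) feature maps."""
    def __init__(self, in_dim):
        super().__init__()
        self.query = nn.Conv2d(in_dim, in_dim // 8, 1)
        self.key = nn.Conv2d(in_dim, in_dim // 8, 1)
        self.value = nn.Conv2d(in_dim, in_dim, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.size()
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # B x N x C'
        k = self.key(x).view(b, -1, h * w)                      # B x C' x N
        attn = torch.softmax(torch.bmm(q, k), dim=-1)           # B x N x N
        v = self.value(x).view(b, -1, h * w)                    # B x C x N
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x

# Attention placed near the *start* of the Discriminator, where the
# feature map is still large, matching the 32x32 stage of the Generator.
disc_attn_early = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.1),     # 64x64 -> 32x32
    SelfAttn(64),                                     # attention at 32x32
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.1),   # 32x32 -> 16x16
    nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.1),  # 16x16 -> 8x8
    nn.Conv2d(256, 512, 4, 2, 1), nn.LeakyReLU(0.1),  # 8x8 -> 4x4
    nn.Conv2d(512, 1, 4),                             # 4x4 -> 1x1 score
)

x = torch.randn(2, 3, 64, 64)
print(disc_attn_early(x).shape)  # torch.Size([2, 1, 1, 1])
```

My understanding of the implementations I mentioned is that they instead insert the SA layers after the later conv blocks (8x8 or 4x4 feature maps), which is what prompted the question.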

Apologies if this is unclear or if I'm missing something obvious; I'm still trying to get my head around it.

Any help is appreciated. Thanks, Jamie

xiongGPR commented 1 year ago

I have the same question. In my tests, the SA model performed worse than SNGAN. I will look into this further.