I reviewed the code and the SinGAN paper.
You said that instead of using the activation vector after the last pooling layer in the Inception network, SIFID uses the internal distribution of deep features at the output of the convolutional layer just before the second pooling layer.
I guess that, to use the feature map just before the second pooling layer, this call in sifid_score.py

sifid_values = calculate_sifid_given_paths(path1,path2,1,args.gpu!='',64,suffix)

should pass 192 instead of 64. Since there is no argument for choosing dims, the implemented code uses the default value of 64: https://github.com/tamarott/SinGAN/blob/286d3cd51cc327381737844d330348ec97577e60/SIFID/sifid_score.py#L258
With dims=64, the output of Inception v3 comes from just before the first pooling layer:
https://github.com/tamarott/SinGAN/blob/286d3cd51cc327381737844d330348ec97577e60/SIFID/inception.py#L63-L67
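To make the concern concrete, here is a minimal sketch of how the dims value selects the Inception block, assuming the BLOCK_INDEX_BY_DIM mapping used in the linked inception.py (the helper function below is illustrative, not part of the SinGAN code):

```python
# Mapping from feature dimensionality to Inception v3 block index,
# following the convention in the linked inception.py.
BLOCK_INDEX_BY_DIM = {
    64: 0,    # features before the first max pooling layer
    192: 1,   # features before the second max pooling layer
    768: 2,   # pre-aux-classifier features
    2048: 3,  # final average pooling features
}

def block_for_dims(dims):
    """Return the Inception block index that yields `dims`-channel features."""
    return BLOCK_INDEX_BY_DIM[dims]

# The default dims=64 stops at block 0 (before the FIRST pooling layer);
# dims=192 would select block 1, i.e. the features just before the
# SECOND pooling layer described in the paper.
print(block_for_dims(64))   # 0
print(block_for_dims(192))  # 1
```

If this mapping is right, selecting the features described in the paper would require passing 192 rather than the default 64.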
Is this an error, or just my misunderstanding?