I'm the one who wanted to use this model with different datasets. However, I'm having trouble generating an anomaly attention map, so I'd like to ask for advice.
I have a question about a function in the gradcam.py file. As shown below, the function encode_one_hot_batch just returns mu without ever using one_hot_batch. Is this the intended behavior, or is the function not implemented yet?
```python
# set the target class as one, others as zero. use this vector for back prop (added by Lezi)
def encode_one_hot_batch(self, z, mu, logvar, mu_avg, logvar_avg):
    one_hot_batch = torch.FloatTensor(z.size()).zero_()
    return mu
```
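For context, here is a minimal sketch of what I expected this function to do based on the comment above it, i.e. build a one-hot mask over the latent dimensions and use it to restrict backprop to one target dimension of mu. The target-selection rule and all names here are my own guesses, not the repo's code:

```python
import torch

def encode_one_hot_batch(z, mu, logvar, mu_avg, logvar_avg):
    # Mask with 1 at one target latent dimension per sample, 0 elsewhere
    # ("set the target class as one, others as zero").
    one_hot_batch = torch.zeros_like(mu)
    # Hypothetical target choice: the latent dimension that deviates most
    # from the training average (my assumption, not from the paper).
    target_dim = (mu - mu_avg).abs().argmax(dim=1, keepdim=True)
    one_hot_batch.scatter_(1, target_dim, 1.0)
    # Per-sample scalar score whose gradient flows only through the
    # selected dimension of mu during backprop.
    return (mu * one_hot_batch).sum(dim=1)
```

Is something along these lines what was intended?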
Also, if this function does behave as intended, I'd like to ask which part of the code implements equation (4) of the paper, i.e. the step that generates the anomaly attention.
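For reference, this is my current understanding of that step, written as a generic Grad-CAM-style sketch (the function name, the hook-captured activations, and the choice of score are my assumptions, not the repo's code), so please correct me if equation (4) is computed differently:

```python
import torch
import torch.nn.functional as F

def anomaly_attention(score, activations, out_size):
    # score: a scalar derived from the latent code (e.g. the output of
    # encode_one_hot_batch summed over the batch); activations: feature
    # maps (B x K x H x W) saved by a forward hook on an encoder layer.
    grads = torch.autograd.grad(score, activations, retain_graph=True)[0]
    # Channel weights: global-average-pooled gradients.
    weights = grads.mean(dim=(2, 3), keepdim=True)
    # ReLU of the weighted channel sum, upsampled for overlay on the input.
    cam = F.relu((weights * activations).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=out_size, mode='bilinear',
                         align_corners=False)
```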
Thanks,