According to the PyTorch docs, the `Normalize` transform expects a mean and std for every channel:

```python
torchvision.transforms.Normalize(mean, std, inplace=False)
```
But currently, this implementation of Deep SVDD passes the "min" value in place of "mean" and the "max - min" value in place of "std", and it supplies values for only one channel even in the case of CIFAR-10, which has three channels.
See `datasets/cifar10.py`, line 35.
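For reference, here is a minimal sketch contrasting the two usages. The per-channel CIFAR-10 statistics and the `data_min`/`data_max` placeholders are illustrative values I chose, not the ones this repo computes. Note that passing min as "mean" and (max - min) as "std" makes `Normalize` compute (x - min) / (max - min), i.e. min-max scaling, so this could be deliberate:

```python
import torch
from torchvision import transforms

# What the docs describe: one mean and one std per channel.
# (These are the commonly quoted CIFAR-10 statistics, used here
# purely for illustration.)
per_channel = transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),
                                   std=(0.2470, 0.2435, 0.2616))

# What this repo appears to do: pass min as "mean" and (max - min)
# as "std", with a single value. `data_min`/`data_max` are hypothetical
# placeholders for the per-class values the repo computes.
data_min, data_max = -0.5, 2.0
min_max = transforms.Normalize(mean=[data_min], std=[data_max - data_min])

x = torch.rand(3, 32, 32)  # fake CIFAR-10 image tensor
# Normalize computes (x - mean) / std, so `min_max` yields
# (x - data_min) / (data_max - data_min), i.e. min-max scaling.
# Recent torchvision releases broadcast the single value over all
# three channels; very old releases that zipped over channels would
# have normalized only the first one.
y1, y2 = per_channel(x), min_max(x)
print(y1.shape, y2.shape)  # torch.Size([3, 32, 32]) twice
```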
Is this intentional or a real issue?