I solved this error by setting the noise-mode argument to 'imagenet'. But now I want to change the input size of the images, and there is no input-size argument available. Could you please tell me where I can change this parameter?
Thanks @liznerski for the answer.
Hi @liznerski, for training with ground-truth maps, what should the masks of the normal training images look like? Should they just be plain black images? I'm asking because you mentioned we should use 255 for anomalous regions.
Hey. A similar question arose in #57:
PS: You don't need any normal ground-truth maps. The code automatically creates those as completely black images. Missing anomalous training ground-truth maps are also automatically interpolated as completely white images. So you don't need a training ground-truth map for every sample. However, you need all ground-truth maps for anomalous test images. Otherwise, the code skips computing the pixel-wise AUC.
If you have some ground-truth maps available for training, they should have white pixels (value=1) for anomalous regions and black pixels (value=0) for normal regions.
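For illustration, here is a minimal sketch (not code from the FCDD repo) of how such a binary ground-truth map could be written to disk; the resolution, the defect region, and the file name are placeholders:

```python
# Minimal sketch (not FCDD repo code): writing a binary ground-truth map for
# an anomalous training image. White (255) marks anomalous regions, black (0)
# marks normal regions; when loaded, these correspond to 1 and 0 respectively.
import numpy as np
from PIL import Image

height, width = 224, 224                         # placeholder resolution
gt = np.zeros((height, width), dtype=np.uint8)   # all-normal (black) map
gt[40:80, 100:160] = 255                         # placeholder defect region
Image.fromarray(gt).save("defect_0001_gt.png")   # placeholder file name
```

Normal images simply get no map at all, since the code fills in the all-black maps itself.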
@liznerski I'm getting this assertion error: `assert len(set(gts.reshape(-1).tolist())) <= 2, 'training process assumes zero-one gtmaps'`
Does this error mean the ground-truth maps I'm using are not in the [0, 1] range?
Not quite; the assertion doesn't check the range. It checks that the maps are binary, i.e., that they contain only the two values zero and one.
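As a quick illustration (this helper is not part of the repo), you could sanity-check and, if needed, binarize your maps before training, e.g. when they were saved with anti-aliasing:

```python
# Sketch of a hypothetical helper (not FCDD repo code) that enforces zero-one maps.
import torch

def ensure_binary(gtmap: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Return a zero-one version of the ground-truth map."""
    if set(gtmap.reshape(-1).tolist()) <= {0.0, 1.0}:
        return gtmap                        # already zero-one, nothing to do
    # e.g. anti-aliased or grayscale masks: threshold them to {0, 1}
    return (gtmap > threshold).to(gtmap.dtype)
```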
@liznerski for the first three batches of the first epoch, I'm getting NaN values for the errors. Is this normal, or is there some issue with my training setup?
Oh, that's alright. It's because the logger outputs a moving average, and that is None as long as the window isn't filled.
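To illustrate the idea (a sketch of the described behavior, not the actual logger implementation; the window size is a placeholder):

```python
# Sketch (not the actual FCDD logger): a moving average over a fixed window
# yields NaN until the window has been filled with enough batch losses.
from collections import deque
import math

window = deque(maxlen=10)                 # hypothetical window size

def log_loss(batch_loss: float) -> float:
    window.append(batch_loss)
    if len(window) < window.maxlen:       # window not filled yet
        return math.nan                   # appears as NaN in the log output
    return sum(window) / len(window)
```

Once the window is full, the logged value becomes the actual average over the last batches.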
@liznerski Is the `transforms.Normalize(mean, std)` transform necessary? Can I use other values for normalisation, or can I omit the normalisation before training altogether?
By default, the code computes the mean and std of the training data and uses them in a `transforms.Normalize(mean, std)` as part of the data preprocessing pipeline. You can customize this behavior to your liking. I don't see a reason why FCDD shouldn't work with other normalization strategies (or even without normalization).
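For reference, a minimal sketch of computing these statistics yourself and plugging them into the transform (assumptions: three-channel images scaled to [0, 1] and equally sized; this is not the exact repo code, and `train_dataset` is a placeholder):

```python
# Sketch (not FCDD repo code): per-channel mean/std over the training set,
# then used in a torchvision Normalize transform.
import torch
from torch.utils.data import DataLoader
import torchvision.transforms as transforms

def compute_mean_std(dataset):
    loader = DataLoader(dataset, batch_size=64, num_workers=2)
    n, mean, sq = 0, torch.zeros(3), torch.zeros(3)
    for imgs, *_ in loader:                           # imgs: (B, 3, H, W) in [0, 1]
        mean += imgs.mean(dim=(2, 3)).sum(dim=0)      # sum of per-image channel means
        sq += (imgs ** 2).mean(dim=(2, 3)).sum(dim=0)
        n += imgs.size(0)
    mean /= n
    std = (sq / n - mean ** 2).sqrt()
    return mean, std

# mean, std = compute_mean_std(train_dataset)         # train_dataset is hypothetical
# transform = transforms.Compose([
#     transforms.ToTensor(),
#     transforms.Normalize(mean.tolist(), std.tolist()),
# ])
```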
While training FCDD using run_custom.py, I'm getting this error:
```
FileNotFoundError: [Errno 2] No such file or directory: '/home/OneClass/fcdd/data/datasets/imagenet22k/fall11_whole_extracted'
```
I guess for training on custom data, ImageNet22k is used as the OE (outlier exposure) dataset. I want to change this to the ImageNet1k dataset. I have downloaded and placed the appropriate files in the `datasets` folder. @liznerski Could you please tell me how to change the OE dataset to ImageNet1k?