Closed jinzishuai closed 4 years ago
Let me clear up the difference between 1) and 2): in 1) the pictures are already rotated randomly; as you can see here, the same transformations as in training are used. The only difference in 2) is that the rotations are not random but fixed (regular steps from 0 to 360 degrees); otherwise one would not know how to re-transform the randomly rotated gradient map. Example for 2) with `n_transforms_test = 4`: the fixed rotation angles are [0, 90, 180, 270].
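A minimal sketch of how such a fixed angle grid could be generated (my own illustration; `fixed_rotation_angles` is a hypothetical helper, not a function from the repo):

```python
def fixed_rotation_angles(n_transforms_test):
    """Evenly spaced rotation angles from 0 (inclusive) to 360 (exclusive)."""
    return [360.0 * i / n_transforms_test for i in range(n_transforms_test)]

print(fixed_rotation_angles(4))  # → [0.0, 90.0, 180.0, 270.0]
```

Because each angle is known, the corresponding gradient map can be rotated back by the negative angle before averaging, which is exactly what random angles would not allow without bookkeeping.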
You are absolutely right. Thank you very much for the clarification.
> In 1) the pictures are already rotated randomly. As you can see here the same transformations as in training are used.
Hi @marco-rudolph. I don't understand why you use random rotation for the test dataset. Shouldn't the test-set output be deterministic, or did I miss something?
I have found that it doesn't really make a difference whether the rotations are random or deterministic at test time, especially with 64 transformations.
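As a toy illustration of why this averages out (my own sketch, not code from the repo): with a score that varies smoothly with the rotation angle, the mean over 64 random angles and the mean over 64 fixed, evenly spaced angles nearly coincide.

```python
import math
import random
import statistics

# Hypothetical per-angle anomaly score that varies smoothly with rotation.
def score(angle_deg):
    return 1.0 + 0.1 * math.sin(math.radians(angle_deg))

random.seed(0)
n = 64

# Mean score over n random angles vs. over n fixed, evenly spaced angles.
random_mean = statistics.mean(score(random.uniform(0, 360)) for _ in range(n))
fixed_mean = statistics.mean(score(360.0 * i / n) for i in range(n))

print(abs(random_mean - fixed_mean))  # small, so the two schemes agree closely
```

With only a handful of transforms the random scheme is noisier, but at 64 transforms the gap between the two averages is negligible, which matches the observation above.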
Hi there,

I found that in the code, the evaluation takes two steps:

1) `z = model(inputs)`, from which the `loss` and the `anomaly_score` are computed;
2) `export_gradient_maps`, to get the gradient map for localization.

It seems that the inputs used in the two cases are slightly different: `testloader.dataset.get_fixed = True` is set in https://github.com/marco-rudolph/differnet/blob/master/localization.py#L38. The consequence is that the data used to calculate `anomaly_score` are not averaged out among different rotations while the gradients are. Is this a problem? Shouldn't we be consistent and apply the rotation transformation to the first step as well, i.e., set `testloader.dataset.get_fixed = True` always? Or maybe there is a reason to do this differently by design? Thank you very much.
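For context, the two evaluation paths under discussion can be sketched like this (a toy simplification; the `get_fixed` attribute name mirrors the repo, but the class and helper here are hypothetical):

```python
import random

class ToyDataset:
    """Stand-in for the test dataset: get_fixed toggles the rotation scheme."""
    def __init__(self):
        self.get_fixed = False  # False → random angles, True → fixed grid

    def angle(self, i, n):
        if self.get_fixed:
            return 360.0 * i / n          # evenly spaced, invertible
        return random.uniform(0, 360)     # random, as in training

ds = ToyDataset()
random.seed(0)

# Step 1: anomaly scoring — get_fixed stays False, so angles are random.
random_angles = [ds.angle(i, 4) for i in range(4)]

# Step 2: localization — localization.py sets get_fixed = True first,
# so the angles form the fixed grid and gradient maps can be rotated back.
ds.get_fixed = True
fixed_angles = [ds.angle(i, 4) for i in range(4)]
print(fixed_angles)  # → [0.0, 90.0, 180.0, 270.0]
```

This makes the asymmetry in the question concrete: only step 2 needs invertible angles, because only there are the outputs rotated back and averaged spatially.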