Closed: YeungChiu closed this issue 7 months ago
@YeungChiu Thanks for your interest in this work!
For the non-EDL settings, we have to use different random seeds to perform multiple stochastic inferences. For the EDL uncertainty setting, you may try fixing the random seed as in experiments/ood_detection.py#L109, which I missed in experiments/get_threshold.py. However, since EDL uncertainty is deterministic, the EDL outcomes should not change much across multiple runs.
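For context, here is a minimal sketch of how EDL-style uncertainty is typically computed. This is an illustration only, not the DEAR implementation, and the ReLU evidence function and the 101-class example are assumptions; the point is that everything follows in closed form from a single forward pass, with no sampling involved:

import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """EDL-style uncertainty from a single forward pass (no sampling):
    evidence -> Dirichlet parameters alpha, uncertainty u = K / sum(alpha)."""
    evidence = F.relu(logits)                  # non-negative evidence (assumed evidence function)
    alpha = evidence + 1.0                     # Dirichlet concentration parameters
    num_classes = alpha.shape[-1]
    uncertainty = num_classes / alpha.sum(dim=-1)
    probs = alpha / alpha.sum(dim=-1, keepdim=True)
    return probs, uncertainty

# The same logits always give the same uncertainty.
logits = torch.randn(2, 101)                   # e.g. a batch of 2 clips, 101 action classes
p1, u1 = evidential_uncertainty(logits)
p2, u2 = evidential_uncertainty(logits)
assert torch.allclose(u1, u2)                  # deterministic for fixed inputs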
To fix the random seed, you may also try this one:
import os
import random
import numpy as np
import torch

def set_deterministic(seed):
    """Fix all relevant random number generators for reproducible runs."""
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # if you are using multi-GPU
    torch.backends.cudnn.deterministic = True
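For example, the helper could be called once at the start of the script, before building the model and running the threshold estimation (illustrative usage only, assuming the helper above is defined in the same file):

set_deterministic(seed=123)             # fix Python, NumPy and PyTorch RNGs
torch.backends.cudnn.benchmark = False  # avoid cuDNN algorithm autotuning between runs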
@Cogito2012 I see. I'll take your suggestions and modify the code accordingly. Thank you very much for your reply.
Hello. Your work has been very valuable to our research and we appreciate it.
We found a small problem during our research that we would like your guidance on. After training on the closed-set dataset, we need to perform OOD detection with the model to obtain the threshold. In this process, we found that the same experimental settings produce different results, and we identified that the problem may be related to the random seed, as shown in the following code from ./DEAR/experiments/get_threshold.py:
parser.add_argument('--forward_pass', type=int, default=10, help='the number of forward passes')
This argument and the related code are only used in run_stochastic_inference(), while our 'uncertainty' setting is EDL, which uses run_evidence_inference(). So we would like to know what could cause different experimental results from a single experimental setup.
Looking forward to your answer.
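A quick way to check whether an unfixed seed explains the discrepancy is to fix the seed and run the threshold estimation twice: if the two thresholds still differ, the remaining nondeterminism lies elsewhere (e.g. cuDNN autotuning or data loading). A minimal sketch, where compute_threshold is a hypothetical stand-in for whatever statistic get_threshold.py actually computes:

import numpy as np
import torch

def compute_threshold(uncertainties, percentile=95):
    # Hypothetical stand-in: take a percentile of the uncertainty scores
    # collected on the closed-set data (the real script may differ).
    return float(np.percentile(uncertainties, percentile))

def check_reproducibility(collect_uncertainties, seed=123, runs=2):
    """Run the same uncertainty collection several times under the same seed
    and compare the resulting thresholds. `collect_uncertainties` is any
    callable returning a 1-D array of per-sample uncertainty scores."""
    thresholds = []
    for _ in range(runs):
        torch.manual_seed(seed)
        np.random.seed(seed)
        thresholds.append(compute_threshold(collect_uncertainties()))
    spread = max(thresholds) - min(thresholds)
    print('thresholds:', thresholds, 'spread:', spread)
    return spread < 1e-6

# Demo with random scores: because the seed is reset before each run,
# the thresholds agree and the check returns True.
print(check_reproducibility(lambda: np.random.rand(1000)))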