Cogito2012 / DEAR

[ICCV 2021 Oral] Deep Evidential Action Recognition
Apache License 2.0

On the issue of random seeds in OOD tests. #14

Closed YeungChiu closed 7 months ago

YeungChiu commented 8 months ago

Hello. Your work has been very valuable to our research and we appreciate it.

We found a small problem during our research that we would like your guidance on. After training on the closed-set dataset, we run OOD detection on the model to obtain the threshold. In this process, we found that the same experimental settings produce different results. We identified that the problem may lie in the random seed, as shown in the following code from ./DEAR/experiments/get_threshold.py.

import argparse

def parse_args():
    parser = argparse.ArgumentParser(description='MMAction2 test')
    # model and data config
    parser.add_argument('--config', help='test config file path')
    parser.add_argument('--checkpoint', help='checkpoint file/url')
    parser.add_argument('--uncertainty', default='BALD', choices=['BALD', 'Entropy', 'EDL'], help='the uncertainty estimation method')
    parser.add_argument('--train_data', help='the split file of in-distribution training data')
    parser.add_argument('--forward_pass', type=int, default=10, help='the number of forward passes')
    parser.add_argument('--batch_size', type=int, default=8, help='the testing batch size')
    # env config
    parser.add_argument('--device', type=str, default='cuda:0', help='CPU/CUDA device option')
    parser.add_argument('--result_prefix', help='result file prefix')
    args = parser.parse_args()
    return args

The argument parser.add_argument('--forward_pass', type=int, default=10, help='the number of forward passes') and its related code are only used in run_stochastic_inference(), while our 'uncertainty' setting is EDL, which uses run_evidence_inference().

So we would like to know what could cause different experimental results under one and the same experimental setup.

Looking forward to your answer.

Cogito2012 commented 8 months ago

@YeungChiu Thanks for your interest in this work!

For the non-EDL settings, we have to use different random seeds to run multiple stochastic inferences. For the EDL uncertainty setting, you may try fixing the random seed as is done in experiments/ood_detection.py#L109, which I missed in experiments/get_threshold.py. However, since EDL uncertainty is deterministic, the EDL outcomes should not change much across multiple runs.
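For context on why EDL uncertainty is deterministic: it is a closed-form function of the evidence produced by a single forward pass, with no sampling involved. A minimal NumPy sketch of the standard evidential formulation (the exponential evidence activation and the clamp value are illustrative assumptions, not necessarily DEAR's exact choices):

```python
import numpy as np

def edl_uncertainty(logits):
    """Vacuity uncertainty u = K / S from a single forward pass.

    Assumes non-negative evidence via a clamped exponential activation,
    as in standard evidential deep learning.
    """
    num_classes = logits.shape[-1]
    evidence = np.exp(np.clip(logits, None, 10.0))  # evidence >= 0
    alpha = evidence + 1.0                          # Dirichlet parameters
    strength = alpha.sum(axis=-1)                   # S = sum_k alpha_k
    return num_classes / strength                   # u = K / S, in (0, 1]
```

Because no Monte Carlo sampling enters this computation, running it twice on the same input yields the same uncertainty; only data-loading order or other stochastic pipeline steps could still differ between runs.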

To fix the random seed, you may also try this one:

import os
import random

import numpy as np
import torch

def set_deterministic(seed):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # if you are using multi-GPU
    torch.backends.cudnn.deterministic = True
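To verify that seeding actually pins down the stochastic parts, a quick self-contained check (using NumPy and the stdlib for brevity; the same pattern applies to the torch seeds in the helper above):

```python
import random

import numpy as np

def seeded_draws(seed, n=5):
    # Re-seed both RNGs, then draw: identical seeds must give identical draws.
    random.seed(seed)
    np.random.seed(seed)
    return np.random.rand(n)

a = seeded_draws(123)
b = seeded_draws(123)
assert np.array_equal(a, b)  # reproducible across runs with the same seed
```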
YeungChiu commented 7 months ago

@Cogito2012 I see, I'll take your comments and modify the code, thank you very much for your reply.