WisconsinAIVision / ViP-LLaVA

[CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts
https://vip-llava.github.io/
Apache License 2.0

[Usage] Unfixed Random Seed Gives Different Results in Each Experiment #22

Open ia-gu opened 1 month ago

ia-gu commented 1 month ago

Describe the issue

Hi,

I am trying to fine-tune ViP-LLaVA on my own dataset. However, the model's performance differs across experiments, even though I fix the random seed as shown below.

import os
import random

import numpy as np
import torch
from transformers import set_seed

def set_global_seed(seed):
    # Seed Python, NumPy, and PyTorch (CPU and every CUDA device).
    os.environ['PYTHONHASHSEED'] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    set_seed(seed)  # Hugging Face Transformers helper
    # Force deterministic cuDNN kernels and disable autotuning.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
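
For reference, even with the function above, some CUDA kernels can remain nondeterministic and DataLoader workers get their own seeds. Below is a minimal sketch of the extra settings PyTorch documents for stricter determinism; the CUBLAS_WORKSPACE_CONFIG variable, the torch.use_deterministic_algorithms call, and the seed_worker helper are assumptions on my part (PyTorch >= 1.11), not anything taken from the ViP-LLaVA code.

import os
import random

import numpy as np
import torch
from torch.utils.data import DataLoader

# cuBLAS only behaves deterministically if this is set before CUDA is initialized.
os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':4096:8'

# Warn (or error, without warn_only) whenever an op lacks a deterministic kernel.
torch.use_deterministic_algorithms(True, warn_only=True)

# DataLoader workers re-seed themselves, so pass a worker_init_fn and generator.
def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(0)
# loader = DataLoader(dataset, worker_init_fn=seed_worker, generator=g)  # dataset is a placeholder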

I found the same problem reported here for LLaVA-1.5, your base model.

In your experiments, did you fix the random seed so that every run gives the same result? If so, I would like to know how you did it.

Thanks in advance.

mu-cai commented 1 month ago

Thanks for your question. No, I did not set the seed for my experiments.

I think that, given a trained checkpoint, you can at least fix the seed for evaluation.
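
To sketch what that could look like at evaluation time (a minimal example, not the actual ViP-LLaVA evaluation code; model, input_ids, and image_tensor are placeholders, and set_global_seed is the helper from the issue above):

import torch

set_global_seed(42)  # helper defined earlier in this issue

# Greedy decoding removes sampling randomness at evaluation time;
# do_sample=True with a nonzero temperature would reintroduce it.
with torch.inference_mode():
    output_ids = model.generate(      # model / input_ids / image_tensor are placeholders
        input_ids,
        images=image_tensor,
        do_sample=False,
        max_new_tokens=128,
    )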