Hello,
Thanks for your work! I have one question about the pre-trained ResNet-10 backbone: did you use different hyperparameters to train it, or can we reproduce it by simply running "train_baseline.py", where I see you use the default Adam optimizer for 400 epochs?