Hi,
Thanks for your feedback! I checked the code and found a small bug, which we have now fixed. Sorry for the mistake. Specifically, replace Line #57

```python
param_max += param_abs#= torch.where(param_abs>param_max, param_abs, param_max)
```

with:

```python
param_max = torch.where(param_abs>param_max, param_abs, param_max)
```
Thanks again for pointing this problem out!
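For anyone applying the patch by hand, here is a minimal sketch of what the corrected line computes (this is not the repository code; the loop, tensor shapes, and variable setup are illustrative only): the fix keeps a running element-wise maximum of the absolute values, whereas the buggy `+=` summed them across iterations.

```python
import torch

torch.manual_seed(0)

# Illustrative sketch: `param_max` tracks the largest absolute value seen
# per element over some loop (loop and shapes are made up for this example).
param_max = torch.zeros(4)
for _ in range(3):
    param = torch.randn(4)        # stand-in for the real parameter tensor
    param_abs = param.abs()
    # Corrected update: keep the element-wise maximum observed so far.
    param_max = torch.where(param_abs > param_max, param_abs, param_max)
    # The buggy `param_max += param_abs` instead summed the absolute values
    # across iterations, inflating the statistic.

print(param_max)  # per-element max(|param|) over the iterations
```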
Thank you for the prompt response. The fix works well.
Sorry, I ran into some further unresolved issues while reproducing the results. For BEiT3, I evaluated the COCO retrieval task with the default parameters, and the results were as follows.
The results for testing the fine-tuned model are as follows.
Could it be that the default parameters need some adjustment? And do the parameters for the other tasks need adjusting as well? So far I have only tested IN1K, which matches the results in the paper.