MRYUHUI opened 4 days ago

Hello, thank you for sharing your code and great work! I downloaded the repository and followed the instructions to reproduce the results of RICA2 on the MTL-AQA dataset. However, the SRCC I obtained (92.4220) is significantly lower than the value reported in the paper (96.20). Could you please provide guidance?
Thanks for trying our code. We have included the full logs for both the MTL-AQA and FineDiving datasets in the ./aqa_logs folder. Could you please share the command and the configuration (config file) you used?
Thanks for the reply. I checked the logs in your project, and they match the results reported in your paper. However, due to hardware limitations, I reduced the training and testing batch sizes to 2. After making this change, I ran the following command: "python -u train.py configs/mtl_aqa/deter_mtl_diving_text_data_query.yaml"
Training is sensitive to the batch size. The intended batch size is set in the config (yaml) file. If you reduce it to as low as 2, you will also need to change other parameters (such as lowering the learning rate in the config file) to get results similar to ours.
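For reference, a common heuristic (a general rule of thumb, not something prescribed by this repo) is the linear scaling rule: scale the learning rate by the same factor as the batch size. A minimal sketch with hypothetical base values; the actual values should be read from the yaml config:

```python
# Linear scaling rule sketch. The base values below are assumptions for
# illustration; check the real ones in
# configs/mtl_aqa/deter_mtl_diving_text_data_query.yaml.
base_batch_size = 16      # hypothetical batch size the config was tuned for
base_lr = 1e-3            # hypothetical learning rate from the config
new_batch_size = 2        # reduced batch size due to hardware limits

# Scale the learning rate in proportion to the batch size change.
new_lr = base_lr * (new_batch_size / base_batch_size)
print(f"suggested learning rate: {new_lr:g}")  # 0.000125
```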
Thank you for your response and guidance! I understand that reducing the batch size may affect the training dynamics. I will try adjusting the learning rate accordingly to better match the reduced batch size.
Could you please provide any specific recommendations for modifying the learning rate or other parameters when using a smaller batch size (e.g., batch size = 2)? For example, should the learning rate scale linearly with the batch size, or by some other rule?
I appreciate your help, and I will re-run the experiments with these adjustments. Thank you again!
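If memory is the only constraint, another option worth considering (my suggestion, not part of the repo's code) is gradient accumulation, which emulates the tuned batch size using small physical batches. A self-contained toy sketch in PyTorch, not the actual RICA2 training loop:

```python
import torch
from torch import nn

# Toy gradient-accumulation sketch: a physical batch of 2 accumulated
# over 8 steps approximates an effective batch size of 16.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
accum_steps = 8  # effective_batch = physical_batch (2) * accum_steps

optimizer.zero_grad()
for step in range(32):
    x = torch.randn(2, 10)           # dummy physical batch of size 2
    loss = model(x).pow(2).mean()    # dummy loss for illustration
    (loss / accum_steps).backward()  # scale so gradients average correctly
    if (step + 1) % accum_steps == 0:
        optimizer.step()             # update once per accumulation window
        optimizer.zero_grad()
```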