Verg-Avesta / CounTR

CounTR: Transformer-based Generalised Visual Counting
https://verg-avesta.github.io/CounTR_Webpage/
MIT License

Cannot reproduce results on CARPK #35

Open perladoubinsky opened 11 months ago

perladoubinsky commented 11 months ago

I’ve tried to reproduce the fine-tuning results on CARPK, but training seems to degrade performance. I trained for 1000 epochs and get an MAE of 14.9 and an RMSE of 20.21. I fine-tuned the model from the FSC14.pth checkpoint; before fine-tuning I get an MAE of 10.12 and an RMSE of 12.48. I have also unfrozen the encoder (in model_mae_cross). Could you give more information on how you obtained your results? Thank you
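For reference, here is a minimal PyTorch sketch of what I mean by "unfrozen the encoder"; the toy module and the `encoder`/`decoder` attribute names are stand-ins, not the exact structure of model_mae_cross:

```python
import torch
import torch.nn as nn

# Toy stand-in for the counting model; attribute names are assumptions.
class ToyCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 8)   # stands in for the ViT encoder
        self.decoder = nn.Linear(8, 1)   # stands in for the density decoder

model = ToyCounter()

# Unfreeze the encoder so its weights update during fine-tuning.
for p in model.encoder.parameters():
    p.requires_grad = True

# Optimize only trainable parameters; lr/wd taken from the repo's
# reported settings (blr 2e-4, wd 0.05).
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=2e-4, weight_decay=0.05,
)
```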

Verg-Avesta commented 11 months ago

Hello, did you try the fine-tuned weights for CARPK provided by me? Do they produce the results I reported?

perladoubinsky commented 11 months ago

Hello, yes, I tested the provided weights using the script FSC_test_CARPK.py and I get the results you reported (5.78 MAE and 7.37 RMSE).
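In case it helps anyone checking their own numbers, this is a minimal sketch of how counting MAE/RMSE are conventionally computed from per-image predicted vs. ground-truth counts; it is not copied from FSC_test_CARPK.py:

```python
import math

def count_metrics(preds, gts):
    """Per-image counting errors: MAE and RMSE over predicted vs. GT counts."""
    errs = [p - g for p, g in zip(preds, gts)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse

# e.g. count_metrics([103.2, 47.9], [100.0, 50.0]) -> (2.65, ~2.71)
```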

Verg-Avesta commented 11 months ago

Hmmm, I am not sure what the problem is. Maybe you can try a slightly smaller learning rate and check whether it improves the performance?

donggoing commented 9 months ago

@Verg-Avesta Hello, while trying to build improvements on top of this work I ran into the same situation as @perladoubinsky: I cannot reproduce the CARPK results from the paper. Using the code and settings from this repo (blr: 2e-4, wd: 0.05), I evaluated the model after every epoch; the log is attached. The results keep fluctuating, the best result usually appears very early in training, and it is hard to (in fact, it never does) reach the numbers reported in the paper. Is this similar to what you saw during training? Could you advise on how to reproduce the result? log.txt

Verg-Avesta commented 9 months ago

Sorry for the mess I caused; it seems that I wrote down the wrong hyper-parameters for CARPK.

I cannot find the exact hyper-parameters now, but according to my log file you could try a learning rate between 1e-6 and 1e-5, and fine-tune for only 100 epochs. This should lead to a more stable fine-tuning process and better results.
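A sweep over that range might look like the sketch below; the `--blr`/`--epochs`/`--output_dir` flags are assumed from the repo's MAE-style scripts, and the script name FSC_finetune_CARPK.py is illustrative, not confirmed:

```python
import subprocess

# Illustrative sweep over the suggested base learning rates,
# fine-tuning for 100 epochs at each setting.
for blr in (1e-5, 5e-6, 1e-6):
    subprocess.run(
        ["python", "FSC_finetune_CARPK.py",
         "--blr", str(blr),
         "--epochs", "100",
         "--output_dir", f"./carpk_blr_{blr:g}"],
        check=True,
    )
```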

donggoing commented 9 months ago

@Verg-Avesta Thank you very much for your reply. I tried blr = 1e-5, 5e-6, and 1e-6. Training is stable now, but the MAE still stays above 10 and will not come down :( Could you provide your training log file, or do you have any other suggestions?