amazon-science / siam-mot

SiamMOT: Siamese Multi-Object Tracking

About Train model #37

Open defense81 opened 2 years ago

defense81 commented 2 years ago

I have trained the model many times, but a gap of about 3-4% on the MOT17 train set always exists. I train the model on four GPUs, with base LR 0.01, steps=(20000, 30000), 35k total iterations, and max_size_train of 1200. Does the max_size_train parameter have a large impact on training? Also, can you give me some tips on training the model? I appreciate your reply. Thanks.
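For reference, here is a minimal sketch of the solver settings described above, written with maskrcnn-benchmark-style config keys (SiamMOT builds on that codebase). The key names and layout are assumptions on my part; the authoritative values live in the repo's own config files.

```yaml
# Hypothetical excerpt mirroring the settings described above.
# Key names follow maskrcnn-benchmark conventions and may differ
# from SiamMOT's actual config schema.
INPUT:
  MAX_SIZE_TRAIN: 1200     # the value changed by the reporter
SOLVER:
  BASE_LR: 0.01            # base learning rate
  STEPS: (20000, 30000)    # iterations at which the LR is decayed
  MAX_ITER: 35000          # total training iterations
```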

bingshuai2019 commented 2 years ago

My initial guess is that the gap may come from the fact that you used "max_size_train == 1200". I haven't tested model training on 4 GPUs, but I don't think that should be the major factor in the result discrepancy. It would be helpful if you trained with the provided configuration, configs/DLA_34_FPN_EMM_MOT17.yaml.
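One thing worth checking, beyond the reply above: if the provided configuration was tuned for 8 GPUs, then training on 4 GPUs halves the effective batch size, and the linear scaling rule (Goyal et al., 2017) would suggest halving the learning rate and doubling the schedule to compensate. A hypothetical adjustment, with all values assumed rather than taken from the repo:

```yaml
# Hypothetical 4-GPU adjustment under the linear scaling rule,
# assuming the provided config targets 8 GPUs; verify every value
# against configs/DLA_34_FPN_EMM_MOT17.yaml before using it.
SOLVER:
  IMS_PER_BATCH: 8         # assumed half of an 8-GPU batch of 16
  BASE_LR: 0.005           # halved along with the batch size
  STEPS: (40000, 60000)    # decay points doubled
  MAX_ITER: 70000          # total iterations doubled
```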