Closed — qtw1998 closed this issue 3 years ago
Hmm, I am not sure about the details, but because the #params is much smaller, you generally need to train longer.
I've trained it for quite a long time, 3 days plus... and the loss never gets below 0.4.
Could you please share some tricks for getting better training results, e.g. hyperparameter settings, or changing the anchor sizes after measuring the dataset with k-means?
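For the k-means anchor idea mentioned above, a minimal sketch (not the repo's actual code) of clustering box widths/heights with 1 − IoU as the distance, as in the YOLO papers, might look like this. The function names `iou_wh` and `kmeans_anchors` are hypothetical:

```python
# Hypothetical sketch: cluster ground-truth box (width, height) pairs into
# k anchor shapes using k-means with 1 - IoU as the distance metric.
import numpy as np

def iou_wh(boxes, clusters):
    # boxes: (N, 2) w/h arrays; clusters: (k, 2) w/h arrays -> (N, k) IoU,
    # computed as if every box and cluster shared the same top-left corner.
    w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
          + (clusters[:, 0] * clusters[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialise clusters from k random boxes.
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the cluster it overlaps most (highest IoU).
        assign = np.argmax(iou_wh(boxes, clusters), axis=1)
        # Recompute each cluster as the mean w/h of its assigned boxes.
        new = np.array([boxes[assign == j].mean(axis=0)
                        if np.any(assign == j) else clusters[j]
                        for j in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    # Return anchors sorted by area, small to large.
    return clusters[np.argsort(clusters.prod(axis=1))]
```

You would then copy the resulting w/h pairs into the model config as anchors, typically 3 per detection scale.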
Did you get better results by changing some hyperparameters? Could you please share them with us? I'm having the same 'issue'.
@qtw1998 Any suggestion on hparam changes?
I've switched to the PyTorch version.
The small parameter count makes the model very sensitive to hyperparameters, so even a small fluctuation in the settings can produce a low AP result... Powerful compute resources are needed to find the proper settings. 🤯
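If compute allows, one way to hunt for those settings is a plain random search over the sensitive hyperparameters. A minimal sketch, where `train_and_eval` is a hypothetical stand-in for your actual training run that returns AP:

```python
# Hypothetical sketch: random search over a few training hyperparameters.
# train_and_eval(hparams) is assumed to train briefly and return an AP score.
import random

def random_search(train_and_eval, n_trials=20, seed=0):
    rng = random.Random(seed)
    best_hparams, best_ap = None, float("-inf")
    for _ in range(n_trials):
        hparams = {
            "lr": 10 ** rng.uniform(-4, -2),            # log-uniform learning rate
            "momentum": rng.uniform(0.85, 0.95),
            "weight_decay": 10 ** rng.uniform(-5, -3),  # log-uniform decay
        }
        ap = train_and_eval(hparams)
        if ap > best_ap:
            best_hparams, best_ap = hparams, ap
    return best_hparams, best_ap
```

Sampling learning rate and weight decay on a log scale tends to cover the useful range far better than uniform sampling, which matters when AP swings this much with small changes.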