facebookresearch / co-tracker

CoTracker is a model for tracking any point (pixel) on a video.
https://co-tracker.github.io/

Significant Increase in Loss when Adjusting traj_per_sample Parameter #97

Open sinkers-lan opened 4 months ago

sinkers-lan commented 4 months ago

I am currently performing full fine-tuning. When I adjust the traj_per_sample parameter from 768 to 384 during training, the average loss roughly doubles; when I adjust it from 768 to 256, the average loss roughly triples.

After observing this phenomenon, I carefully reviewed the code for the loss function and noticed that the loss is divided by N at the end:

total_balanced_loss += balanced_loss / float(N)

This line of code can be found here.

Similarly,

total_flow_loss += flow_loss / float(N)

This line of code can be found here.

I believe this is the cause of the increase described above. The reduce_masked_mean function already computes the mean over the valid points, so its output does not depend on N; dividing by N again makes the accumulated loss scale as 1/N. That matches the observation exactly: halving traj_per_sample (768 → 384) doubles the loss, and cutting it to a third (768 → 256) triples it.
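To make the scaling concrete, here is a minimal sketch. The reduce_masked_mean below is a simplified stand-in for the repo's helper, and the constant per-point error is made up; the point is only that the masked mean is already independent of N, so the extra division makes the result proportional to 1/N:

```python
import torch

def reduce_masked_mean(x, mask):
    # Simplified stand-in for the repo's reduce_masked_mean:
    # sum over valid entries divided by the number of valid entries.
    return (x * mask).sum() / mask.sum().clamp(min=1)

# Identical per-point error for every trajectory, so the true mean loss
# should be the same no matter how many trajectories are sampled.
for N in (768, 384, 256):
    err = torch.full((1, N), 0.5)      # hypothetical per-point tracking error
    mask = torch.ones((1, N))          # all points valid
    mean_loss = reduce_masked_mean(err, mask)  # 0.5 for every N
    buggy = mean_loss / float(N)               # extra division scales by 1/N
    print(f"N={N}: mean={mean_loss.item():.2f}, after /N={buggy.item():.6f}")
```

With the same per-point error, the value after the division is 0.5/768, 0.5/384, and 0.5/256, i.e. 2x and 3x the baseline, matching the jumps in the loss.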

I think this might be a logical error in the code. Your guidance on this issue would be greatly appreciated.

nikitakaraevv commented 4 months ago

Hi @sinkers-lan, thank you for catching this! This is indeed a logical error. We'll fix it in the next version that's coming up.
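For anyone hitting this before the fix lands, the change presumably amounts to dropping the extra division when accumulating. A sketch under that assumption (not the maintainers' actual patch; the per-iteration values here are hypothetical):

```python
# Assumed fix: reduce_masked_mean already returns a per-point average,
# so accumulate it directly instead of dividing by N again.
per_iter_balanced_losses = [0.52, 0.48, 0.45]  # hypothetical per-iteration values

total_balanced_loss = 0.0
for balanced_loss in per_iter_balanced_losses:
    total_balanced_loss += balanced_loss       # was: balanced_loss / float(N)
print(total_balanced_loss)
```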

G1tQuality commented 4 months ago

Hi @sinkers-lan, does full fine-tuning mean training on new datasets from a different domain? I read your comments under another issue about preparing Kubric training datasets. Could you please give me some advice about full fine-tuning for CoTracker? If I want to apply CoTracker to my own datasets, is it necessary to train it on Kubric first, or can I start full fine-tuning directly? I apologize for any inconvenience, as I am a beginner in deep learning.