Closed rinawhale closed 5 years ago
Hi, yes, this is normal. The losses are not averaged over the tensor size, which is why the numbers are so large. This is also the reason we use a very small learning rate.
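A minimal sketch of what an unaveraged loss implies (this is an illustration, not the repository's actual loss code): when the loss is summed rather than averaged over elements, its value and its gradients scale with the tensor size, which is why a much smaller learning rate is needed.

```python
import numpy as np

# Summed (unaveraged) loss: no division by the element count.
def summed_loss(pred, target):
    return np.sum((pred - target) ** 2)

# Averaged version, for comparison.
def mean_loss(pred, target):
    return np.mean((pred - target) ** 2)

rng = np.random.default_rng(0)
pred = rng.standard_normal((4, 256, 256))    # e.g. a small batch of masks
target = rng.standard_normal((4, 256, 256))

s = summed_loss(pred, target)
m = mean_loss(pred, target)
# The ratio equals the number of elements (4 * 256 * 256 = 262144),
# so gradients from the summed loss are larger by the same factor.
print(s / m)
```

This is why a loss of several hundred can be perfectly healthy here: divided by the number of elements, it corresponds to a small per-element error.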
Thank you very much for your reply. I still have two questions:
Hello,
To answer your questions:
For the parent network, we did not tune the number of training iterations. For online training, you can stop once the training loss stops decreasing; we found that roughly 2,000 iterations work well for a DAVIS sequence.
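The stopping rule described above can be sketched as a simple patience-based loop. This is a hypothetical illustration, not the repository's code: `train_step`, `max_iters`, and `patience` are assumed names, with the ~2,000-iteration figure used as the cap.

```python
def train_online(train_step, max_iters=2000, patience=100):
    """Run online training until the loss stops decreasing.

    train_step() performs one iteration and returns the training loss.
    Stops early once `patience` iterations pass with no improvement,
    otherwise runs up to `max_iters` iterations (~2000 worked well per
    DAVIS sequence in the authors' experience).
    """
    best_loss = float("inf")
    stale = 0
    for _ in range(max_iters):
        loss = train_step()
        if loss < best_loss:
            best_loss, stale = loss, 0
        else:
            stale += 1
        if stale >= patience:  # loss has stopped decreasing
            break
    return best_loss

# Dummy train_step whose loss decays linearly and then plateaus at 1.0,
# so the patience criterion triggers well before max_iters.
losses = iter([max(1.0, 10.0 - 0.1 * i) for i in range(5000)])
print(train_online(lambda: next(losses)))
```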
We are planning to, but this requires merging with the new PyTorch Mask R-CNN version and re-running all experiments, so it will take some time before we can release it. Sorry.
Hi, thanks for sharing your code. When I run train_online.py, the loss is 403.75 at epoch 10000; is this normal? Similarly, when I run train_parent.py, the network does not converge. Can you give me some suggestions?