Closed blossomzx closed 6 years ago

I use the example data to train this network and find the loss isn't converging. Why is this happening?

You have to provide more details. Did you use the default config file? Any difference from the demo?
Yes, I did. I didn't change anything in that config file.
Thanks. What did you mean by "it didn't converge"? The loss didn't go down?
By "converge" I mean the loss always goes down, but on the example data the loss sometimes increases compared to the previous iteration.
Sorry, my phrasing isn't very precise.
Thanks for your reply.
Sorry, I'm still a bit confused. What does your loss look like at the beginning of training? And what is it like after a few thousand iterations? Remember this is based on stochastic gradient descent, so the loss is not guaranteed to decrease at every iteration.
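To illustrate the point about stochastic gradient descent, here is a minimal sketch (not from the label-reg code, just an illustration) of smoothing a noisy per-iteration loss with a moving average, which is a common way to judge the overall trend rather than individual steps:

```python
import random

def moving_average(losses, window=100):
    """Return the running mean of `losses` over the last `window` values."""
    smoothed = []
    total = 0.0
    for i, loss in enumerate(losses):
        total += loss
        if i >= window:
            total -= losses[i - window]  # drop the value leaving the window
        smoothed.append(total / min(i + 1, window))
    return smoothed

# A noisy but decreasing loss: individual steps may go up,
# yet the smoothed curve should still trend downward.
random.seed(0)
noisy = [1.0 / (1 + 0.01 * i) + random.uniform(-0.05, 0.05) for i in range(2000)]
smooth = moving_average(noisy)
assert smooth[-1] < smooth[0]
```

If the smoothed loss still does not trend downward over thousands of iterations, that would suggest a genuine training problem rather than normal SGD noise.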
Thanks, I understand now.