YipengHu / label-reg

(This repo is no longer up-to-date. Any updates will be at https://github.com/DeepRegNet/DeepReg/) A demo of the refactored label-driven registration code, based on "Weakly-supervised convolutional neural networks for multimodal image registration"
Apache License 2.0

Loss isn't convergent #7

Closed blossomzx closed 6 years ago

blossomzx commented 6 years ago

I used the example data to train this network and found that the loss doesn't converge. Why is this happening?

YipengHu commented 6 years ago

You have to provide more details. Did you use default config file? Any difference from the demo?

zxblossom commented 6 years ago

Yes, I did. I didn't change anything in the config file.

YipengHu commented 6 years ago

Thanks. What did you mean by "it didn't converge"? The loss didn't go down?

blossomzx commented 6 years ago

By "converge" I mean that the loss always goes down. But with the example data, the loss is sometimes larger than it was at the previous iteration.

blossomzx commented 6 years ago

Sorry that my wording is not very precise.

blossomzx commented 6 years ago

Thanks for your reply.

YipengHu commented 6 years ago

Sorry, I'm still a bit confused. What does your loss look like at the beginning of training? And what is it like after a few thousand iterations? Remember this is based on stochastic gradient descent, so the loss is not guaranteed to decrease at every iteration.
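A common way to judge the overall trend despite this per-iteration noise is to smooth the recorded losses with a moving average. This is just an illustrative sketch (not part of the label-reg code); the simulated losses are made up for demonstration:

```python
# A minimal sketch: smooth a noisy per-iteration loss with a trailing
# moving average to see whether training is trending downward overall.
import random

random.seed(0)

def moving_average(values, window=100):
    """Return the running mean of `values` over a trailing window."""
    averaged = []
    for i in range(len(values)):
        start = max(0, i + 1 - window)
        chunk = values[start:i + 1]
        averaged.append(sum(chunk) / len(chunk))
    return averaged

# Simulate a noisy but decreasing training loss (hypothetical values).
losses = [1.0 / (1 + 0.01 * i) + random.uniform(-0.05, 0.05)
          for i in range(2000)]
smoothed = moving_average(losses, window=100)

# The raw loss fluctuates up and down between iterations,
# but the smoothed curve trends downward.
print(smoothed[0], smoothed[-1])
```

If the smoothed curve is still not decreasing after a few thousand iterations, that would indicate a genuine convergence problem rather than normal SGD noise.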

blossomzx commented 6 years ago

Thanks, I understand now.