xuebinqin / BASNet

Code for the CVPR 2019 paper "BASNet: Boundary-Aware Salient Object Detection"
MIT License
1.35k stars · 249 forks

Ask #26

Closed krz02 closed 4 years ago

krz02 commented 4 years ago

I want to ask about this:

    if ite_num % 2000 == 0:  # save model every 2000 iterations

        torch.save(net.state_dict(), model_dir + "basnet_bsi_itr_%d_train_%3f_tar_%3f.pth" % (ite_num, running_loss / ite_num4val, running_tar_loss / ite_num4val))
        running_loss = 0.0
        running_tar_loss = 0.0
        net.train()  # resume train
        ite_num4val = 0

Why are running_loss, running_tar_loss, and ite_num4val set to 0 after saving the model? Thanks!

xuebinqin commented 4 years ago

Sorry for the late reply. The counters are simply reset so that the loss is averaged over each 2000-iteration window. This shows the loss trend better than averaging over all of the previous training losses.
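For illustration, the pattern can be reduced to a minimal runnable sketch (plain Python with simulated per-iteration losses; the variable names mirror the snippet above, but the loop itself is a placeholder, not the repo's actual training code):

```python
# Minimal sketch of the windowed running-loss pattern.
losses = [1.0 / (i + 1) for i in range(6000)]  # simulated decaying per-iteration losses

running_loss = 0.0
ite_num4val = 0
window_averages = []

for ite_num, loss in enumerate(losses, start=1):
    running_loss += loss
    ite_num4val += 1
    if ite_num % 2000 == 0:
        # this is the value that would go into the checkpoint filename
        window_averages.append(running_loss / ite_num4val)
        running_loss = 0.0  # reset: the next average covers only the next window
        ite_num4val = 0

print(window_averages)  # one entry per 2000-iteration window, each smaller than the last
```

Each saved checkpoint name then reports the average loss of only the most recent window, rather than of the whole run so far.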


-- Xuebin Qin, PhD Candidate, Department of Computing Science, University of Alberta, Edmonton, AB, Canada. Homepage: https://webdocs.cs.ualberta.ca/~xuebin/

krz02 commented 4 years ago

Thank you


xuebinqin commented 4 years ago

  • If I change 2000 to another number, say 500 iterations, that's no problem, right?

There will be no problem here.

  • So that means the loss value is better if it is reset rather than accumulated from the previous iterations? But why do you have to reset it? From what I have learned, isn't the loss computed from the first iteration to the last? Any explanation?

It doesn't matter. If you know that your calculation represents the training process, that's fine. You can use whatever you want.

  • Last question: you put ite_num4val = 0 after net.train(). Why put it there? Is there any difference if I place it above net.train()?

It doesn't matter. The reset lines don't touch the network, so their order relative to net.train() makes no difference.

krz02 commented 4 years ago

Therefore, if I comment out (#) the running_loss = 0, running_tar_loss = 0, and ite_num4val = 0 lines, is that okay or not? I already tried it, and commenting them out affects the loss value: the reported values differ between the commented and uncommented versions. Sorry if I'm wrong.


xuebinqin commented 4 years ago

Because the loss values in the early iterations are larger. As training progresses, the loss gets smaller. If you average from the first iteration, those larger early values dominate, so the decreasing trend looks less significant than when you average over only the last 2000 iterations. Both are okay; I prefer to compute it within the last 2000 iterations.
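As a toy numeric check (made-up decaying losses, not BASNet's actual values), the cumulative average lags far behind the current loss level, while the last-window average stays close to it:

```python
# Toy comparison: cumulative average vs. per-window average of a decaying loss.
losses = [2.0 * 0.999 ** i for i in range(4000)]  # made-up decaying loss curve

cumulative_avg_at_4000 = sum(losses) / len(losses)      # averaged from iteration 1
last_window_avg = sum(losses[2000:]) / 2000             # averaged over iterations 2001-4000

print(cumulative_avg_at_4000, last_window_avg, losses[-1])
# the last-window average tracks the current loss level much more closely
```

This is why the windowed number in the checkpoint filename shows the downward trend more clearly than a from-the-start average would.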
