LZGMatrix / BOP19_CDPN_2019ICCV


A question about training #4

Closed · EasonEdison closed this 4 years ago

EasonEdison commented 4 years ago

When I used test.py as a reference to write train.py, I couldn't make it work: my loss.backward() fails!

    output_coor_x_ = output_coor_x_.squeeze()
    output_coor_y_ = output_coor_y_.squeeze()
    output_coor_z_ = output_coor_z_.squeeze()

    ####

    # per-pixel bin index for each coordinate channel
    output_coor_ = torch.stack([torch.argmax(output_coor_x_, dim=0),
                                torch.argmax(output_coor_y_, dim=0),
                                torch.argmax(output_coor_z_, dim=0)], dim=2)
    output_coor_[output_coor_ == cfg.network.coor_bin] = 0
    output_coor_ = 2.0 * output_coor_.float() / (63.0 - 1.0) - 1.0  # map bin indices to [-1, 1]

If I compute the loss using the variables before '####', backward works; but when I use the variables after '####', it fails with the following error:

    RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I am sure that the model's parameters have requires_grad=True.

Maybe I shouldn't use torch.argmax()?
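
Printing grad_fn along the chain shows where the graph is cut. A tiny sketch, with shapes assumed for illustration:

    import torch

    logits = torch.randn(64, 8, 8, requires_grad=True)  # assumed (bins, H, W) logits
    probs = logits.softmax(dim=0)
    print(probs.grad_fn)       # <SoftmaxBackward0> -- still part of the graph
    idx = torch.argmax(probs, dim=0)
    print(idx.grad_fn)         # None -- integer indices, the autograd graph ends here
    print(idx.requires_grad)   # False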

Could you help me solve this problem, or tell me how you train it?

Thank you!

LZGMatrix commented 4 years ago

torch.argmax() is non-differentiable, so you need to compute the loss before torch.argmax().
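
In other words, compute a per-bin classification loss on the raw logits and keep torch.argmax() for inference only. Below is a minimal sketch of that idea using nn.CrossEntropyLoss; the tensor names, bin count, and ground-truth format are assumptions for illustration, not the repository's actual training code:

    import torch
    import torch.nn as nn

    n_bins, H, W = 65, 64, 64  # assumed bin count and output resolution

    # stand-ins for the network's per-axis coordinate logits, shape (n_bins, H, W)
    output_coor_x_ = torch.randn(n_bins, H, W, requires_grad=True)
    output_coor_y_ = torch.randn(n_bins, H, W, requires_grad=True)
    output_coor_z_ = torch.randn(n_bins, H, W, requires_grad=True)

    # assumed ground truth: one target bin index per pixel for each axis
    gt_x = torch.randint(0, n_bins, (H, W))
    gt_y = torch.randint(0, n_bins, (H, W))
    gt_z = torch.randint(0, n_bins, (H, W))

    ce = nn.CrossEntropyLoss()

    # CrossEntropyLoss expects (N, C, ...) logits, so add a batch dimension;
    # the loss is taken on the raw logits, before any argmax
    loss = (ce(output_coor_x_.unsqueeze(0), gt_x.unsqueeze(0)) +
            ce(output_coor_y_.unsqueeze(0), gt_y.unsqueeze(0)) +
            ce(output_coor_z_.unsqueeze(0), gt_z.unsqueeze(0)))
    loss.backward()  # works: the graph is intact

    # decode bins only at test time, outside the loss computation
    with torch.no_grad():
        pred_x = torch.argmax(output_coor_x_, dim=0)  # (H, W) bin indices

If the ground truth is stored as continuous coordinates, it has to be quantized into bin indices first so it can serve as the CrossEntropyLoss target.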

guan2000910 commented 1 year ago

@EasonEdison Hi, how do I generate my own training files? Would you be able to show me your train.py?