chenypic opened this issue 6 years ago
ConvCRFs can be trained using PyTorch. Training is straightforward and works like any other neural network: iterate over the training data, apply a softmax cross-entropy loss, and use the PyTorch autograd package to backpropagate.
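A minimal sketch of such a loop, assuming the CRF module takes the unary potentials and the image as input; `crf`, `unary_net`, and `train_loader` are placeholders, not the repository's exact API:

```python
import torch.nn as nn
import torch.optim as optim

def train(crf, unary_net, train_loader, num_epochs, lr=1e-4):
    # Softmax cross-entropy: CrossEntropyLoss applies log-softmax internally.
    criterion = nn.CrossEntropyLoss()
    params = list(crf.parameters()) + list(unary_net.parameters())
    optimizer = optim.Adam(params, lr=lr)

    for epoch in range(num_epochs):
        for image, label in train_loader:
            optimizer.zero_grad()
            unary = unary_net(image)             # [N, C, H, W] class scores
            prediction = crf(unary, image)       # CRF refines the unary potentials
            loss = criterion(prediction, label)  # label: [N, H, W] class ids
            loss.backward()                      # autograd backpropagation
            optimizer.step()
```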
I strongly recommend that you implement your own pipeline. Having a good understanding of your training process is quite crucial in deep learning.
I am considering making my pipeline public; however, the code is currently quite messy, undocumented, and will not work out of the box. I think implementing your own pipeline by following some of the PyTorch tutorials is much more rewarding and easier than trying to make mine work.
Edit: I deleted part of my earlier response to increase my overall niceness. You can find the full response in the changelog.
Thanks for your detailed response. I appreciate it, and I agree with you. I will implement my own pipeline according to your paper and my task.
Hi Marvin, I wrote a script to train the ConvCRF using NLL loss. I treat the airplane image as a two-class segmentation problem. At the beginning the training went well and the segmentation was improving, but if I kept training, it would not converge: the loss reached its minimum, then started to increase, and the segmentation became worse. Eventually the result looked like the noisy unary. Could you give me some suggestions on what the problem could be? Thank you very much!
Hai
Hi Hai,
may I ask why you used the NLL loss rather than the cross-entropy loss for training?
Thanks
Hi prio1988,
I think NLL loss is effectively multiclass cross-entropy, right? It should also work when I set the model to only two classes, i.e. background and foreground. Right?
NLL loss assumes that you have already applied a LogSoftmax layer on top of your network. The multiclass cross-entropy loss is torch.nn.CrossEntropyLoss; I think you should probably use the latter. I am also still wondering why a log-softmax is applied to the unary rather than just a softmax.
Oh, thank you for the very good suggestion! I will dig into the question of LogSoftmax + NLL versus softmax + CrossEntropyLoss. I read somewhere that log-softmax is numerically more stable than softmax.
If you use CrossEntropyLoss you can skip the softmax as well; it is applied internally by the loss.
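For reference, the two options discussed here are numerically equivalent in PyTorch; a small self-contained check:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 2, 8, 8)          # [N, C, H, W] raw scores
target = torch.randint(0, 2, (4, 8, 8))   # [N, H, W] class indices

# Option 1: log-softmax followed by NLL loss.
loss_nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)

# Option 2: CrossEntropyLoss applies log-softmax internally.
loss_ce = nn.CrossEntropyLoss()(logits, target)

print(torch.allclose(loss_nll, loss_ce))  # True
```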
OK, then that would be much better, since the implementation of CrossEntropyLoss already handles the numerical-stability issues. Thank you!
I have tried training it; however, I get the following error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Has anyone tried training it?
> I have tried training it; however, I get the following error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I have the same problem. Have you solved it?
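The thread never shows the failing code, but this RuntimeError generically means the loss tensor is detached from the autograd graph, for example because the forward pass ran inside a `torch.no_grad()` block or because no input or parameter has `requires_grad=True`. A minimal reproduction and fix:

```python
import torch

x = torch.randn(3)          # leaf tensor with requires_grad=False
loss = (x ** 2).sum()
# loss.backward()           # -> RuntimeError: element 0 of tensors does not
                            #    require grad and does not have a grad_fn

x = torch.randn(3, requires_grad=True)  # fix: track gradients on the input
loss = (x ** 2).sum()
loss.backward()             # works; gradients land in x.grad
print(x.grad)
```

In a training setup the same applies to the model: make sure the CRF parameters are not frozen and the forward pass is not wrapped in `torch.no_grad()`.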
@HqWei @qiqihaer Could you share a portion of your code for training convCRF?
There is a paper called PAC-CRF; you may find the ConvCRF training implementation there.
@SHMCU It's very helpful. Thank you very much!
Hi, did you solve the in-place operation problem? Should we set the CRF iteration steps to 1 to avoid this error? I tried it on PAC-CRF and the same problem occurred.
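Not an answer from the authors, but the generic cause of this error in PyTorch is an in-place operation (`+=`, `clamp_()`, etc.) overwriting a tensor that backward still needs. With several mean-field iterations, the intermediate tensors of earlier iterations stay in the graph, which is why reducing the iteration count can hide the problem. A minimal illustration of the error and the out-of-place fix:

```python
import torch

w = torch.randn(3, requires_grad=True)

y = torch.sigmoid(w)   # sigmoid's backward reuses its output y
y += 1                 # in-place op bumps y's version counter
# y.sum().backward()   # -> RuntimeError: one of the variables needed for
                       #    gradient computation has been modified by an
                       #    in-place operation

y = torch.sigmoid(w)
y = y + 1              # out-of-place: allocates a new tensor instead
y.sum().backward()     # works
```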
Hi, I have a question about training with this wonderful CRF implementation. Should we set the CRF iteration steps to 1 during training, and set them higher than 1 during inference?
Great work, and thanks for your code. Do you have a plan to publish the training implementation? I would really like to follow up on your work.