Angel-Jia opened this issue 6 years ago
It turns out that b, c, and d cannot be CUDA tensors, or you will get a Segmentation fault. To calculate the loss, you should write it like this:
a = torch.from_numpy(a).cuda()  # activations may live on the GPU
b = torch.from_numpy(b)         # labels stay on the CPU
c = torch.from_numpy(c)         # label sizes stay on the CPU
d = torch.from_numpy(d)         # activation lengths stay on the CPU
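As a sanity check that is independent of the GPU bindings, the value warp-ctc should return can be reproduced for tiny inputs with a pure-Python forward (alpha) recursion over the blank-extended label sequence. This is a minimal sketch of the standard CTC forward pass, not the library's implementation; `ctc_loss`, its argument layout, and the `blank=0` convention are assumptions for illustration only.

```python
import math

def ctc_loss(log_probs, labels, blank=0):
    """Negative log-likelihood of `labels` under CTC.

    log_probs: list of T rows, each a list of K per-symbol log-probabilities.
    labels: target label sequence (list of ints, blank excluded).
    """
    # Extended sequence: blank, l1, blank, l2, ..., blank
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S, T = len(ext), len(log_probs)
    NEG = float("-inf")

    def logadd(a, b):  # log(exp(a) + exp(b)), stable
        if a == NEG:
            return b
        if b == NEG:
            return a
        m = max(a, b)
        return m + math.log(math.exp(a - m) + math.exp(b - m))

    # alpha[s]: log-prob of all length-t prefixes ending at extended symbol s
    alpha = [NEG] * S
    alpha[0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]
    for t in range(1, T):
        new = [NEG] * S
        for s in range(S):
            a = alpha[s]                       # stay on the same symbol
            if s > 0:
                a = logadd(a, alpha[s - 1])    # advance by one
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = logadd(a, alpha[s - 2])    # skip a blank between labels
            new[s] = a + log_probs[t][ext[s]]
        alpha = new
    # Valid endings: final blank or final label
    total = alpha[S - 1]
    if S > 1:
        total = logadd(total, alpha[S - 2])
    return -total
```

For example, with two timesteps, a two-symbol alphabet at uniform probability 0.5, and the single label `1`, the three collapsing paths `(blank,1)`, `(1,blank)`, `(1,1)` sum to 0.75, so the loss is `-ln(0.75)`.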
I suggest the author put this usage in README.md. It cost me two days to find the problem. @SeanNaren
@Mabinogiysk Hi, did the current version of warp-ctc work well for your project? I use the crnn.pytorch pipeline (https://github.com/meijieru/crnn.pytorch) but the training loss does not decrease.
My warp_ctc works fine.
Thank you, @Mabinogiysk. You saved me a lot of trouble.
I have some variables saved from crnn.pytorch, which runs on the GPU and uses warp-ctc. When I use those variables in my own code on the CPU, the results are correct. But when I run my code on the GPU, I get a Segmentation fault; when I delete all the .cuda() calls, I get the correct answer. All the CPU and GPU tests pass. I really hope someone can help.