Hi, I hit a CUDA out of memory error using CTC as the loss.
pytorch=0.4.1, python=3.6, and I have 8 GTX Titan X GPUs.
As mentioned in #118, and just the same as #76 by mankeyboy.
My input to CTC has shape (4, 128, 10), where 128 is the batch size: 128 labels, one for each sequence of 4 timesteps over a 10-symbol alphabet. The label_sizes and probs_sizes are defined as:
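(The original snippet didn't survive here. As a hedged sketch of what these sizes usually mean for warp-ctc — plain Python for illustration; the bindings expect 1-D `torch.IntTensor`s on the CPU, and all names and values below are assumptions, not the author's code:)

```python
# Hedged sketch, not the original code: warp-ctc takes probs of shape
# (T, N, C); for the shapes above, T=4 timesteps, N=128 batch items,
# C=10 alphabet entries (index 0 is reserved for the CTC blank).
T, N, C = 4, 128, 10

probs_sizes = [T] * N   # number of valid timesteps per batch item
label_sizes = [2] * N   # label length per batch item (2 is illustrative)

# labels is a single flat 1-D list: every item's labels concatenated.
labels = [sym for _ in range(N) for sym in (1, 2)]

assert len(probs_sizes) == N
assert sum(label_sizes) == len(labels)
```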
Even though I reduce the batch size to 4, the error still occurs.
As mentioned in #118, I know label_sizes or probs_sizes may be incorrect, as I don't fully understand what is described in the README.
So I tried the code below:

No memory error, but:

```
result = torch._C._safe_call(*args, **kwargs)
torch.FatalError: std::bad_alloc
```

However, the provided example in the README works fine for me. To make sure I'm understanding the params correctly, I modified the sample code a little, like:
It also works fine! What could be wrong? Am I misunderstanding the meaning of label_sizes and probs_sizes?
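One way to narrow this down is a quick consistency check on the inputs before calling the loss. This helper is hypothetical (not part of warp-ctc); it just encodes the shape expectations described in the README, in plain Python:

```python
def check_ctc_inputs(labels, label_sizes, probs_sizes, T, N, C):
    """Hypothetical sanity check for warp-ctc style CTC inputs."""
    assert len(label_sizes) == N, "one label length per batch item"
    assert len(probs_sizes) == N, "one activation length per batch item"
    assert sum(label_sizes) == len(labels), "labels must be one flat list"
    assert all(0 < s <= T for s in probs_sizes), "lengths bounded by T"
    assert all(0 < l < C for l in labels), "0 is the blank; labels are 1..C-1"
    # CTC needs a blank between repeated symbols, so each item must satisfy
    # label_len + (number of adjacent repeats) <= activation length.
    start = 0
    for L, S in zip(label_sizes, probs_sizes):
        seq = labels[start:start + L]
        repeats = sum(a == b for a, b in zip(seq, seq[1:]))
        assert L + repeats <= S, "sequence too short to emit this label"
        start += L

# Example: batch of 2, T=4 timesteps, alphabet of 10 symbols.
check_ctc_inputs(labels=[1, 2, 3, 3], label_sizes=[2, 2],
                 probs_sizes=[4, 4], T=4, N=2, C=10)
```

A shape mismatch here (e.g. label_sizes summing to something other than `len(labels)`) would make warp-ctc read past its buffers, which can surface as exactly this kind of allocation failure rather than a clean error.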
Here is the simplified output log: