t-vi / warp-ctc

PyTorch bindings for Warp-CTC
Apache License 2.0

Fix build against the latest pytorch #9

Open akhti opened 6 years ago

akhti commented 6 years ago

Cuda interface was changed in https://github.com/pytorch/pytorch/commit/1003ccfa15e944251a65ba2289f25e8f1ed14a46

t-vi commented 6 years ago

Thank you for the PR. If I'm not mistaken, the latest release 0.4.1 still has the old way. I would prefer if we could keep compatibility with that - if we have to, we could detect which stream API is available in setup.py and pass a custom define. (I'm obviously biased, but on master you should really use PyTorch's own CTC.)

akhti commented 6 years ago

I see some discrepancy in quality between the implementation on master and warp-ctc. But it may be random noise. Still investigating.

Have you tried to use both for some task/benchmark?


t-vi commented 6 years ago

Re PyTorch's own CTC: I've not published anything for that yet. You do apply a log_softmax first? The native GPU implementation makes a similar precision/speed tradeoff as cudnn, but it should really be OK. If you want to check the accuracy on individual examples, you could also use doubles with the native implementation. The CPU version uses log-space calculations throughout and can be considered to be even more accurate.
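(To illustrate what "log-space calculations throughout" means, here is a minimal, hypothetical pure-Python sketch of the CTC forward (alpha) recursion done entirely in log space - this is an illustration of the technique, not the actual PyTorch kernel:)

```python
import math

NEG_INF = float("-inf")

def log_add(a, b):
    """log(exp(a) + exp(b)), computed stably without leaving log space."""
    if a == NEG_INF:
        return b
    if b == NEG_INF:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def ctc_loss(log_probs, target, blank=0):
    """Illustrative log-space CTC forward pass.

    log_probs: T lists of per-class log-probabilities (log_softmax output).
    target: label indices without blanks. Returns -log p(target | input).
    """
    # Extended label sequence: blank, l1, blank, l2, ..., blank
    ext = [blank]
    for label in target:
        ext += [label, blank]
    S, T = len(ext), len(log_probs)
    alpha = [NEG_INF] * S
    alpha[0] = log_probs[0][blank]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]
    for t in range(1, T):
        new = [NEG_INF] * S
        for s in range(S):
            a = alpha[s]
            if s > 0:
                a = log_add(a, alpha[s - 1])
            # Skip transition is only allowed over a blank between
            # two distinct labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a = log_add(a, alpha[s - 2])
            new[s] = a + log_probs[t][ext[s]]
        alpha = new
    total = log_add(alpha[S - 1], alpha[S - 2] if S > 1 else NEG_INF)
    return -total
```

For example, with two frames, two classes (blank=0), uniform log-probabilities and target `[1]`, the paths emitting "1" are (1,1), (0,1) and (1,0), so the loss is -log(0.75).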

For this PR: I'm not going to manage to work on it during the next two weeks, but I would imagine that setup.py tests whether torch has the new CUDAContext.h header and, if it doesn't, defines something like OLD_STREAM_API so the old code path is kept.
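(A minimal sketch of that idea - the helper name is made up, and it assumes the new header sits at ATen/cuda/CUDAContext.h under torch's include directory; only the OLD_STREAM_API spelling comes from this thread:)

```python
import os

def stream_api_defines(torch_include_dir):
    """Hypothetical setup.py helper: compile with -DOLD_STREAM_API when the
    installed torch does not ship the new ATen/cuda/CUDAContext.h header."""
    header = os.path.join(torch_include_dir, "ATen", "cuda", "CUDAContext.h")
    return [] if os.path.isfile(header) else ["-DOLD_STREAM_API"]

# Usage sketch inside setup.py (paths are assumptions):
# extra_compile_args += stream_api_defines(torch_include_dir)
```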