torch / cunn


nn.Max does not work under FP16 mode #432

Closed · shuzi closed this issue 7 years ago

shuzi commented 7 years ago

[screenshot: nn.Max failing on a torch.CudaHalfTensor input]

gchanan commented 7 years ago

This looks like an issue with nn, not cunn. I should be able to fix it easily.

fmassa commented 7 years ago

@shuzi at least in your example, you forgot to convert the module itself to the half type (using `:type('torch.CudaHalfTensor')`).
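
For reference, a minimal sketch of the corrected call (assuming a working cutorch/cunn install with half-precision support; the `:fill(1)` is only there to make the printed output deterministic):

```lua
require 'cunn'

-- Convert BOTH the module and the input to the half-precision CUDA type;
-- converting only the input is what triggers the failure above.
local m = nn.Max(1):type('torch.CudaHalfTensor')
local x = torch.CudaHalfTensor(10):fill(1)
print(m:forward(x))  -- a torch.CudaHalfTensor of size 1
```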

gchanan commented 7 years ago

@fmassa is correct, of course, but even if you convert the type, nn.Max will still fail.

There are a number of these issues where the module checks for the 'CudaTensor' type to decide whether it is a CUDA module, rather than accepting any CUDA tensor type. It shouldn't be too hard to fix; I would just like a nice way of testing it.
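
A hypothetical sketch of the pattern being described (the helper names below are illustrative, not the actual nn source):

```lua
require 'cunn'

-- Buggy check: recognizes only the default CUDA type, so inputs like
-- torch.CudaHalfTensor fall through to the non-CUDA code path.
local function isCudaStrict(input)
   return torch.type(input) == 'torch.CudaTensor'
end

-- Fixed check: treat any torch.Cuda*Tensor as a CUDA input.
local function isCudaAny(input)
   return torch.type(input):find('torch%.Cuda.*Tensor') ~= nil
end

local half = torch.CudaHalfTensor(3)
print(isCudaStrict(half)) -- false: half precision is not recognized
print(isCudaAny(half))    -- true
```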

gchanan commented 7 years ago

https://github.com/torch/nn/pull/1124 should fix this.

shuzi commented 7 years ago

@gchanan The latest install still has the following error.

[screenshot: the same nn.Max error after reinstalling]

gchanan commented 7 years ago

@shuzi did you update nn? Here are the results from updated nn and cunn:

```
th> nn.Max(1):type('torch.CudaHalfTensor'):forward(torch.CudaHalfTensor(10))
 0
[torch.CudaHalfTensor of size 1]
```
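
Updating both packages (e.g. `luarocks install nn` followed by `luarocks install cunn`, on a standard Torch setup) should pick up the fix.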

shuzi commented 7 years ago

@gchanan

Sorry, it works now, my bad.