hughperkins / cltorch

An OpenCL backend for torch.

Double precision #17

Open ipanchenko opened 9 years ago

ipanchenko commented 9 years ago

Is it possible to use double-precision floating point on the GPU? As I understand it, this is impossible in cutorch, since CudaTensor is a single-precision floating-point tensor. What about cltorch? Do you plan to add this?
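(For what it's worth, whether doubles are even an option also depends on the device: OpenCL only guarantees fp64 arithmetic where the cl_khr_fp64 extension is present. A minimal standalone sketch, not part of cltorch, that checks the first GPU for it:)

```c
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

/* Query the first GPU device and report whether it advertises the
   cl_khr_fp64 extension, i.e. hardware double-precision support. */
int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    char extensions[4096] = {0};

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL platform found\n");
        return 1;
    }
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no GPU device found\n");
        return 1;
    }
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, sizeof(extensions), extensions, NULL);

    if (strstr(extensions, "cl_khr_fp64") != NULL) {
        printf("device supports double precision (cl_khr_fp64)\n");
    } else {
        printf("device has no double-precision support\n");
    }
    return 0;
}
```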

hughperkins commented 9 years ago

I wasn't planning on adding it, mostly because I'm targeting neural nets, and the latest trend there is toward lower precision, i.e. fp16. Do you mind if I ask your use case? It doesn't mean I'll suddenly jump up and do it, but it would be good to at least understand what you're trying to achieve.

genixpro commented 8 years ago

I would be in favour of just making the precision selectable, including both 64-bit doubles and 16-bit halves. As someone relatively new to neural networks and Torch, I like to experiment to see what the differences are. I try to be scientific about it, so I would like to run my network at 16-bit, 32-bit, and 64-bit precision and compare the results. I don't want to just take people at their word that smaller is better; I want to see it for myself. It helps me learn and understand.
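(A tiny standalone illustration, not Torch code, of the kind of difference selectable precision can surface: accumulating the same value in float and double already diverges noticeably.)

```c
#include <stdio.h>

/* Sum 0.1 ten million times in single and double precision to show
   how rounding error accumulates differently at each width. */
int main(void) {
    float sum_f = 0.0f;
    double sum_d = 0.0;
    for (int i = 0; i < 10000000; ++i) {
        sum_f += 0.1f;
        sum_d += 0.1;
    }
    printf("float  sum: %f\n", sum_f); /* drifts well away from 1e6 */
    printf("double sum: %f\n", sum_d); /* very close to 1e6 */
    return 0;
}
```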

hughperkins commented 8 years ago

@genixpro

Ok, sounds good. The underlying CUDA backend, cutorch, already provides different precisions, so one could plausibly use a similar technique here: https://github.com/torch/cutorch/blob/master/lib/THC/THCGenerateAllTypes.h
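(For anyone unfamiliar with that file: the idea is to write the tensor code once and instantiate it per element type by redefining a few macros. A much simplified sketch of the same trick; the names DEFINE_SUM, sum_Float, sum_Double are invented for illustration, and the real codebase puts the generic body in a separate file that gets #included once per type:)

```c
#include <stdio.h>

/* Simplified sketch of macro-based type generation: one body of code,
   instantiated once per element type by substituting `real` and pasting
   a type suffix onto the function name. */
#define DEFINE_SUM(real, Real)                          \
    static real sum_##Real(const real *data, int n) {  \
        real total = 0;                                 \
        for (int i = 0; i < n; ++i) total += data[i];   \
        return total;                                   \
    }

DEFINE_SUM(float, Float)   /* generates sum_Float  */
DEFINE_SUM(double, Double) /* generates sum_Double */

int main(void) {
    float  xf[] = {1.0f, 2.0f, 3.0f};
    double xd[] = {1.0, 2.0, 3.0};
    printf("float  sum: %f\n", sum_Float(xf, 3));
    printf("double sum: %f\n", sum_Double(xd, 3));
    return 0;
}
```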