Open ipanchenko opened 9 years ago

Is it possible to use double precision floating point on the GPU? As I understand it, this is impossible in cutorch, since CudaTensor is a single precision floating point tensor. And what about ClTorch? Do you plan to add this?

I wasn't planning on adding it, mostly because I'm targeting neural nets, and the latest trend there is toward lower precision, i.e. fp16. Do you mind if I ask about your use-case? That doesn't mean I will suddenly jump up and do it, but it would be good to at least understand what you are trying to achieve.

I would be in favour of making the precision selectable, including both 64 bit doubles and 16 bit halfs. As someone relatively new to neural networks and Torch, I like to experiment to see what the differences are. I'm very scientific in how I approach this, so I would like to run my network at 16 bit, 32 bit and 64 bit precision and compare. I don't like to just take people at their word that smaller is better; I want to see it for myself. It helps me to learn and understand.

@genixpro
Ok, sounds good. The underlying cutorch codebase already provides different precisions, so a similar technique could plausibly be used here. https://github.com/torch/cutorch/blob/master/lib/THC/THCGenerateAllTypes.h