Open · scottpurdy opened 9 years ago
Hmm, I'm not sure I remember all the details. I like the idea of using explicit precision in as many places as possible. I don't know why anyone would want the unspecified versions around; we could make the default `Int` explicitly 32-bit. Sometimes 32-bit operations are actually slower than 64-bit operations on a 64-bit platform, but that is quite rare. In those cases we could explicitly specify 64-bit and maybe guard the loop with architecture `#define`s for Pi support.
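To make the guarding idea concrete, here is a hypothetical sketch (not taken from nupic.core) of selecting a loop-counter width per architecture. The predefined compiler macros `__LP64__` and `_WIN64` are standard, but the `FastCounter` name and `sumArray` function are purely illustrative:

```cpp
// Hypothetical sketch: pick a fast loop-counter type per architecture.
#include <cstdint>

#if defined(__LP64__) || defined(_WIN64)
  typedef std::uint64_t FastCounter;  // 64-bit targets: native word size
#else
  typedef std::uint32_t FastCounter;  // 32-bit targets (e.g. ARM on a Pi)
#endif

double sumArray(const double *values, FastCounter n) {
  double total = 0.0;
  for (FastCounter i = 0; i < n; ++i)  // counter width matches the platform
    total += values[i];
  return total;
}
```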
@subutai - thanks for the info. I like what you suggest: use explicit precision everywhere unless we want to optimize for speed and don't care about the precision. Perhaps rather than scattering `#define`s throughout the code, we just set the unspecified types to be platform-specific and use those where we don't care about precision and just want speed?
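For illustration, here is a minimal sketch of that suggestion, assuming C++11 fixed-width types. The names mirror nupic.core's style, but these exact definitions are an assumption, not the library's actual headers:

```cpp
// Assumed sketch: explicit-width types for code that cares about precision,
// and unspecified-width aliases that track the platform word size for speed.
#include <cstdint>
#include <cstddef>

typedef std::int32_t  Int32;
typedef std::int64_t  Int64;
typedef std::uint32_t UInt32;
typedef std::uint64_t UInt64;

// Unspecified-width aliases are pointer-sized on every platform.
typedef std::ptrdiff_t Int;   // signed, platform word size
typedef std::size_t    UInt;  // unsigned, platform word size
```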
@subutai - one example where it would be nice to have a type that matches the platform bitness is here: https://github.com/numenta/nupic.core/pull/343/files#diff-5fc4121d5c281de47186c9f30e425bdbR3782
The optimization wants to do arithmetic on a pointer, so it needs a type that matches the platform.
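For reference, the standard-library way to get a pointer-sized integer is `std::uintptr_t`. Below is a minimal sketch of the kind of pointer arithmetic described; the actual code in the linked PR may differ, and `isAligned64` is a made-up example function:

```cpp
// Treating a pointer as an integer requires an integer type at least as
// wide as a pointer, which std::uintptr_t guarantees on every platform.
#include <cstdint>

bool isAligned64(const void *p) {
  // Reinterpret the address as a pointer-sized unsigned integer.
  std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
  return (addr & 0x7u) == 0;  // true if the address is 8-byte aligned
}
```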
OK, though wouldn't it be strange for the unspecified type to be platform-specific? I guess that's how C works anyway.
Many locations use `Real`, `Int`, and `UInt` types, which currently default to 64 bits. But the size can also be 32 bits and is determined by the preprocessor variables `NTA_BIG_INTEGER` and `NTA_DOUBLE_PRECISION`.
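A minimal sketch of that scheme, assuming the behavior described above; the real definitions live in nupic.core's type headers and may differ in detail:

```cpp
// Assumed sketch: type widths selected by preprocessor variables.
#include <cstdint>

#ifdef NTA_BIG_INTEGER
  typedef std::int64_t  Int;   // 64-bit integers when NTA_BIG_INTEGER is set
  typedef std::uint64_t UInt;
#else
  typedef std::int32_t  Int;
  typedef std::uint32_t UInt;
#endif

#ifdef NTA_DOUBLE_PRECISION
  typedef double Real;         // 64-bit floating point
#else
  typedef float  Real;         // 32-bit floating point
#endif
```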
This has some problems:
I propose that we:
@subutai - can you provide some historical context?
CC @chetan51 @oxtopus