numenta / nupic.core-legacy

Implementation of core NuPIC algorithms in C++ (under construction)
http://numenta.org
GNU Affero General Public License v3.0

Change default dtypes to 32bit #346

Open · scottpurdy opened this issue 9 years ago

scottpurdy commented 9 years ago

Many locations use the Real, Int, and UInt types, which currently default to 64 bits. The size can also be 32 bits, selected by the preprocessor variables NTA_BIG_INTEGER and NTA_DOUBLE_PRECISION.
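
For reference, a minimal sketch of how flag-switched aliases like these are typically defined; the exact contents and defaults of the nupic.core Types header may differ:

```cpp
// Hedged sketch of width-switched typedefs driven by the two build flags.
#include <cstdint>

#ifdef NTA_BIG_INTEGER
typedef std::int64_t  Int;   // 64-bit integers when the flag is defined
typedef std::uint64_t UInt;
#else
typedef std::int32_t  Int;   // 32-bit integers otherwise
typedef std::uint32_t UInt;
#endif

#ifdef NTA_DOUBLE_PRECISION
typedef double Real;         // 64-bit floating point
#else
typedef float  Real;         // 32-bit floating point
#endif
```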

This has some problems:

I propose that we:

@subutai - can you provide some historical context?

CC @chetan51 @oxtopus

subutai commented 9 years ago

Hmm, not sure I remember all the details. I like the idea of using explicit precision in as many places as possible. I don't know why someone might want the unspecified versions around - we could make the default Int explicitly 32-bit. Sometimes 32-bit operations are actually slower than 64-bit operations on a 64-bit platform, but that is really rare. In those cases we could explicitly specify 64 bits and maybe guard the loop with architecture #define's for PI support.
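
As a hedged illustration of the "explicit precision everywhere" style, assuming fixed-width aliases such as Real32 and UInt32; the function below is hypothetical, not actual nupic.core code:

```cpp
// Hypothetical example: algorithm code names the width it needs in the
// signature instead of relying on a build-time flag.
#include <cstdint>

typedef std::uint32_t UInt32;
typedef float         Real32;

void incrementPermanences(Real32* permanences, UInt32 numSynapses,
                          Real32 increment) {
  for (UInt32 i = 0; i < numSynapses; ++i) {
    permanences[i] += increment;
  }
}
```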

scottpurdy commented 9 years ago

@subutai - thanks for the info. I like your suggestion of using explicit precision everywhere except where we want to optimize for speed and don't care about the precision. Perhaps, rather than putting the #define's throughout the code, we could just set the unspecified types to be platform-specific and use those where we don't care about precision and just want speed?
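
A rough sketch of that layout, reusing the names from this thread for illustration only (not the actual header):

```cpp
// Fixed-width aliases for code that cares about precision or layout,
// plus word-sized Int/UInt for code that only cares about speed.
#include <cstddef>
#include <cstdint>

typedef std::int32_t  Int32;
typedef std::uint32_t UInt32;
typedef std::int64_t  Int64;
typedef std::uint64_t UInt64;

// Unspecified types track the platform word size
// (32-bit on a 32-bit build, 64-bit on a 64-bit build).
typedef std::ptrdiff_t Int;
typedef std::size_t    UInt;
```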

scottpurdy commented 9 years ago

@subutai - one example where it would be nice to have a type that matches the platform bitness is here: https://github.com/numenta/nupic.core/pull/343/files#diff-5fc4121d5c281de47186c9f30e425bdbR3782

The optimization wants to do arithmetic on a pointer so it needs a type that matches the platform.
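
As a generic illustration (not the code from the linked diff), the standard pointer-sized types cover that case:

```cpp
// std::ptrdiff_t and std::uintptr_t always match the platform's pointer width,
// so offsets and alignment math cannot truncate on a 64-bit build.
#include <cstddef>
#include <cstdint>

std::ptrdiff_t elementOffset(const float* base, const float* current) {
  return current - base;  // pointer difference is inherently pointer-sized
}

bool isAligned16(const void* p) {
  return (reinterpret_cast<std::uintptr_t>(p) & 0xF) == 0;
}
```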

subutai commented 9 years ago

OK, though wouldn't it be strange for the unspecified type to be platform specific? I guess that's how C is anyway.