brian-team / brian2cuda

A brian2 extension to simulate spiking neural networks on GPUs
https://brian2cuda.readthedocs.io/
GNU General Public License v3.0

Support of `integer` and `long double` types for CUDA math functions #45

Closed: denisalevi closed this issue 7 years ago

denisalevi commented 7 years ago

The CUDA math library functions are only overloaded for single precision (float) and double precision (double) arguments. That means if we want to support integer arithmetic in device functions, we need to overload them ourselves. In C++ they are additionally overloaded for int and long double arguments.

The question is: do we create overloaded functions and add them all to CUDACodeGenerator.universal_support_code, as is currently done with _brian_mod (here)?

Or would it be cleaner to do that in an extra header file that we add to brianlib? Instead of overloading, we could also just template it, like it is done for int_ in stdint_compat.h. We could then cast everything to float, using the single precision math functions, and have one specialisation for double that uses the double precision math function, like this:

template<typename T> __host__ __device__ double function(T value)
{
    // generic case: cast to float and use the single precision CUDA math function
    return cudaMathAPI_function((float)value);
}
template<> inline __host__ __device__ double function(double value)
{
    // specialisation: use the double precision CUDA math function directly
    return cudaMathAPI_function(value);
}

In some cases there is an easier way to calculate a function for integer types; there we could instead do the integer calculation in the generic template and only specialise for float and double. We could even add #ifdef __CUDA_ARCH__ ... to have different code for host and device function calls.
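As an illustration (a hypothetical sketch, not brian2's actual _brian_mod, using modulo only as an example): the generic template does the exact integer calculation, only the floating point overloads touch the math functions, and __CUDA_ARCH__ selects between the CUDA math library on the device and the standard library on the host.

#include <cmath>  // std::fmod for the host path

// generic case: integral types use the exact native % operator, no cast needed
template <typename T>
__host__ __device__ T example_mod(T a, T b)
{
    return a % b;
}
// float overload: single precision math function
inline __host__ __device__ float example_mod(float a, float b)
{
#if defined(__CUDA_ARCH__)
    return fmodf(a, b);       // CUDA math library (device)
#else
    return std::fmod(a, b);   // standard library (host)
#endif
}
// double overload: double precision math function
inline __host__ __device__ double example_mod(double a, double b)
{
#if defined(__CUDA_ARCH__)
    return fmod(a, b);        // CUDA math library (device)
#else
    return std::fmod(a, b);   // standard library (host)
#endif
}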

Another thing is that long double arithmetic is not supported on CUDA at all. The question is: do we simply cast long double to double, or do we let it fail instead? Right now the _brian_mod function just fails for long double arguments with something like

error: more than one instance of overloaded function "_brian_mod" matches the argument list: ...

And since CUDA doesn't have any exception handling, the user wouldn't know what's happening.

@mstimberg Are there actual use cases where people use long double data types? And is there an easy way to check for them and give a warning or error when using cuda_standalone? And how does GeNN do it? From what I could see, GeNN just uses fmod, so I suppose it only supports float and double?

denisalevi commented 7 years ago

The CUDA Math library supports all C99 standard float and double math functions. Therefore I have now implemented a wrapper for each of those functions that either promotes integral types to float when the function is called in device code, or calls the normal C++ function (which is already overloaded for integral types, effectively casting the argument to double) when called in host code. For any function func it looks like this:

template <typename T>
__host__ __device__
float _brian_func(T value)
{
#if (defined(__CUDA_ARCH__) && (__CUDA_ARCH__ > 0))
    // device code: promote the argument and call the single precision CUDA math function
    return funcf((float)value);
#else
    // host code: the C++ overload promotes integral arguments to double
    return func(value);
#endif
}
inline __host__ __device__
double _brian_func(double value)
{
    // double arguments use the double precision function on host and device
    return func(value);
}
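For example, for func = sin this instantiates to something like the following (illustrative; the generated code may differ in details):

template <typename T>
__host__ __device__
float _brian_sin(T value)
{
#if (defined(__CUDA_ARCH__) && (__CUDA_ARCH__ > 0))
    return sinf((float)value);  // device: single precision CUDA math function
#else
    return sin(value);          // host: C++ overload promotes integral arguments to double
#endif
}
inline __host__ __device__
double _brian_sin(double value)
{
    return sin(value);          // double precision on host and device
}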

Tests for all functions are written as well (for both host and device code) and are passing.

And it turns out long double is always treated as double in device code anyway, so there is no need to take care of that.

Done. Closing.

moritzaugustin commented 7 years ago

Are integer types the only values that are used for the template parameter in instances of these code snippets?

Additionally: why not cast int to (64-bit) double as in normal C++ (assuming what you write is correct), instead of to (32-bit) float, which is not accurate enough for (32-bit) int values due to the limited size of the mantissa? I therefore suggest using double.

Finally: for integer types with more than 32 bits, e.g. 64 bits, even a (64-bit) double might be too small. Therefore, long int occurrences should give rise to a warning that these values might be cast with a loss of precision.

denisalevi commented 7 years ago

> Are integer types the only values that are used for the template parameter in instances of these code snippets?

These snippets are used whenever one of the default functions is used in any string expression. This example covers some possible use cases at the moment:

from brian2 import *

G = NeuronGroup(3, 'dv/dt = sin(v)/ms : 1')  # will use the double version of sin, since v is double by default
G.v[0] = 'sin(i)'    # i is int32_t type
G.v[1] = 'sin(1)'    # 1 is int type
G.v[2] = 'sin(1.0)'  # 1.0 is double type

> Additionally: why not cast int to (64-bit) double as in normal C++ (assuming what you write is correct), instead of to (32-bit) float, which is not accurate enough for (32-bit) int values due to the limited size of the mantissa? I therefore suggest using double.

Yeah, right, I didn't think of the fact that we can lose information when promoting integral to floating point types. I used float because of the faster single precision arithmetic on the GPU. But I think in many use cases we won't lose information when casting integral types to float (the integer's magnitude would have to exceed 2^24 = 16777216, since a 32-bit float has a 24-bit significand: 23 mantissa bits plus an implicit leading bit). So maybe we should cast to double by default and add an extra preference once we add support for single precision data types (#37)?
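Just to illustrate where the exact integer range of a 32-bit float ends, a tiny host-side check (plain C++, not part of brian2cuda):

#include <cstdio>

int main()
{
    // 16777216 = 2^24 is still exactly representable as a 32-bit float,
    // 16777217 = 2^24 + 1 is not and gets rounded back down to 2^24
    float exact   = (float)16777216;
    float rounded = (float)16777217;
    printf("%.1f %.1f\n", exact, rounded);  // prints: 16777216.0 16777216.0
    return 0;
}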

> Finally: for integer types with more than 32 bits, e.g. 64 bits, even a (64-bit) double might be too small. Therefore, long int occurrences should give rise to a warning that these values might be cast with a loss of precision.

As far as I can tell, this case can only happen if the user explicitly specifies a numpy.int64 or numpy.uint64 data type in the model. So we could create a warning with the Brian2 logging system every time a user specifies a 64-bit int type (easy to do). If we want a warning only when int64 types are used inside default function calls, I would have to dig a little to see where to check for that. Another option would be to create template specialisations for the 64-bit integer types in the code snippets and create compiler warnings there using #warning or #pragma message. I guess I'd prefer the first option.
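For illustration only, the compiler-warning variant could look roughly like this (a hypothetical sketch, not what was implemented; note that #pragma message is emitted whenever the support code is compiled, not only when the specialisation is actually used):

#include <stdint.h>  // int64_t

// hypothetical specialisation of the wrapper above for 64-bit integers
template <>
inline __host__ __device__
float _brian_func(int64_t value)
{
#pragma message("_brian_func: int64_t argument converted to floating point, possible loss of precision")
#if (defined(__CUDA_ARCH__) && (__CUDA_ARCH__ > 0))
    return funcf((float)value);
#else
    return func(value);
#endif
}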

So what about

template <typename T>
__host__ __device__
double _brian_func(T value)
{
#if (defined(__CUDA_ARCH__) && (__CUDA_ARCH__ > 0))
    return func((double)value);
#else
    return func(value);
#endif
}
inline __host__ __device__
float _brian_func(float value)
{
    return funcf(value);
}

and later we can just switch the double and float specifiers, depending on some user preference (e.g. something like prefs.devices.cuda_standalone.integer_promotion_to = 'float'/'double'). And once we have the option for single precision floating point (#37), these would then always use the faster single precision arithmetic.

denisalevi commented 7 years ago

I added a preference

prefs.codegen.generators.cuda.default_functions_integral_convertion

The default is 'double_precision', converting integral types to double (as shown in the code snippet in my last comment). Setting it to 'single_precision' exchanges the float and double keywords in the functions, converting integral types to float.

I have also added warnings. Just warning whenever a 64-bit integer type is used doesn't work, since the Clock class always uses them. So instead the cuda_generator checks for the occurrence of one of the relevant default functions together with an integer type in the same generated code line and gives a warning, using the brian2 logging system. We can't parse only the arguments of the function with a regex because of possibly nested parentheses (e.g. _brian_sin((1+i)*5)), and using another parsing package just for this seems unnecessary. So we also warn in the case of a line v = i * _brian_sin(1); if i is int64_t, but I'd say that's fine.

We warn in two cases:

  1. int64 types
  2. int32 types when prefs.codegen.generators.cuda.default_functions_integral_convertion == 'single_precision'

I also added tests for setting the preference and the correct warnings. So, trying to close this again :)