Open denisalevi opened 7 years ago
Just saw that brian2 has an issue for clip in cpp standalone as well (brian-team/brian2#782). Referencing it here since probably the only difference between the cpp and cuda implementations will be the `__host__ __device__` qualifiers, I suppose.
Brian2 templates only the first argument (`value`) and casts the other two (`a_min` and `a_max`) to `double` (merged in brian-team/brian2#810).
Makes sense, but we might want an option to cast to `float` once #37 is implemented, which is then not type safe for large integers. If we still want it, it should produce a warning and can probably be implemented similarly to what was done for #45.
Either way, we should update this when we next update brian2, which added tests that the clip function returns the same type as the `value` argument.
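For reference, a minimal sketch of what that templated approach could look like (the name `_clip` and the exact body are assumptions, not brian2's actual code; in CUDA the function would additionally carry the `__host__ __device__` qualifiers):

```cpp
#include <cassert>

// Sketch (assumed): template only the value argument, take the bounds
// as double, and return the same type as the value argument.
template <typename T>
static inline T _clip(T value, double a_min, double a_max)
{
    if (value < (T)a_min)
        return (T)a_min;
    if (value > (T)a_max)
        return (T)a_max;
    return value;
}
```

The return type follows the `value` argument, which is exactly what the new brian2 tests check.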
Now: only single precision; the others move to the wishlist.
I implemented a quick fix for single precision: depending on the precision mode (single/double), all arguments of `clip` are now `float` or `double`:

```cpp
clip(float value, float min, float max){}
clip(double value, double min, double max){}
```
This means it is not type safe to pass `int32` values in single precision mode. But that is not relevant for our benchmarks (we only pass `0` as an integer literal in STDP).
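To illustrate why this is not type safe for `int32`: integers above 2^24 have no exact `float` representation, so they can silently change when passed through the single precision overload (a plain C++ sketch of the quick-fix overload; in CUDA it would carry `__host__ __device__`):

```cpp
#include <cassert>
#include <cstdint>

// Single-precision overload as in the quick fix (sketch).
static inline float clip(float value, float a_min, float a_max)
{
    if (value < a_min)
        return a_min;
    if (value > a_max)
        return a_max;
    return value;
}

// 2^24 + 1 = 16777217 has no exact float representation; the implicit
// int32 -> float conversion rounds it to 16777216 before clipping.
```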
brian2 has templated the `value` parameter and casts the others to `double`. brian2genn just used `float` only, which is not type safe.
Our implementation of the `clip` function currently casts all arguments to `double`. When implementing #37, we should at least add an option for casting to `float`. Maybe even overload the function for different types, as done with the modulo function (while taking care of type safety when comparing different data types).
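One possible overload set, mirroring the modulo approach (a sketch under assumptions; the exact set of types is hypothetical, and in CUDA each overload would need `__host__ __device__`):

```cpp
#include <cassert>
#include <cstdint>

// One overload per data type keeps all comparisons within a single
// type, so no argument is silently narrowed or widened.
static inline int32_t clip(int32_t value, int32_t a_min, int32_t a_max)
{
    if (value < a_min) return a_min;
    if (value > a_max) return a_max;
    return value;
}

static inline float clip(float value, float a_min, float a_max)
{
    if (value < a_min) return a_min;
    if (value > a_max) return a_max;
    return value;
}

static inline double clip(double value, double a_min, double a_max)
{
    if (value < a_min) return a_min;
    if (value > a_max) return a_max;
    return value;
}
```

With a dedicated `int32_t` overload, large integers like 2^24 + 1 survive clipping exactly instead of being rounded through `float`.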