brian-team / brian2genn

Brian 2 frontend to the GeNN simulator
http://brian2genn.readthedocs.io/
GNU General Public License v2.0

I exposed the blocksize related preferences in GeNN to Brian2Genn users. #73

Closed tnowotny closed 6 years ago

tnowotny commented 6 years ago

I did this to run a benchmark with fixed block sizes controlled from the Brian2GeNN script, overriding GeNN's usual block-size optimization. The change exposes preferences for whether to optimize block sizes, and for the individual block sizes of all GeNN CUDA kernels. Users can now set `devices.genn.optimise_blocksize = False` and then any block size, e.g. `devices.genn.neuron_blocksize = 128`, to control block sizes individually. I think it does not hurt to make this available to everybody in the next release.
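Roughly, the control flow these preferences enable can be sketched as follows. This is a simplified stand-in, not Brian2Genn's actual code: the preference keys follow the names above, but `optimizer` and the fallback value are hypothetical placeholders.

```python
DEFAULT_BLOCKSIZE = 32  # assumed safe fallback, not taken from the PR


def choose_blocksize(prefs, kernel, optimizer):
    """Pick the CUDA block size for a GeNN kernel.

    prefs     -- dict mimicking the Brian2Genn preference namespace
    kernel    -- kernel name, e.g. 'neuron', 'synapse', 'learning'
    optimizer -- callable standing in for GeNN's block-size optimization
    """
    if prefs.get('devices.genn.optimise_blocksize', True):
        # Default behaviour: let GeNN's optimizer decide
        return optimizer(kernel)
    # Optimization disabled: use the user-set fixed size, if any
    return prefs.get('devices.genn.%s_blocksize' % kernel, DEFAULT_BLOCKSIZE)
```

For example, with `optimise_blocksize` set to `False` and `neuron_blocksize` set to `128`, the fixed size wins; with no preferences set, the optimizer is consulted.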

mstimberg commented 6 years ago

This looks fine in general, just two questions:

  1. What does the default value of `0` for `learning_blocksize` etc. mean?
  2. I think it gets a bit hard to read with the `get_genn_preferences` function returning a giant tuple. How about just passing the `prefs` dictionary to the model template and taking the values directly from there? I can make the change if you want.

And a minor remark: there is no need to set a validator that checks whether the type is `bool` or `int`. If you don't specify a validator, the default validator automatically checks that the value has the same type as the default value.
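Roughly, the fallback behaviour described above can be sketched as below. This is a simplification for illustration, not Brian2's actual validator code:

```python
def default_validator(value, default_value):
    """Accept a preference value only if its type matches the default's.

    This mimics the implicit check applied when no explicit
    validator is given for a preference.
    """
    return type(value) is type(default_value)
```

So with a default of `0`, an integer like `128` passes, while a string like `'128'` is rejected, which is why the explicit `bool`/`int` checks are redundant.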

tnowotny commented 6 years ago

Thanks for the thorough review. You are right, it is safer to set a workable default block size even for kernels that are optional; if the kernels don't exist, those values are ignored anyway. I removed the redundant validators. I am happy for you to simplify how the preferences are passed into the template!