Closed tnowotny closed 6 years ago
This looks fine in general, just two questions: what do `learning_blocksize` etc. mean? And there is a `get_genn_preferences` function that returns a giant tuple. How about just passing the prefs dictionary to the model template and taking the values directly from there? I can make the change if you want.

And a minor remark: there is no need to set a validator that checks whether the type is bool or int. If you don't specify a validator, the default validator will automatically check that the value has the same type as the default value.
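The default-validator behaviour described above can be illustrated with a minimal sketch (this is not Brian2's actual implementation, just the idea: when no validator is supplied, fall back to comparing the value's type against the default's type):

```python
# Minimal sketch (hypothetical, not Brian2's actual code) of a preference
# whose fallback validator checks the value's type against the default's type.

class Preference:
    def __init__(self, default, validator=None):
        self.default = default
        # If no validator was given, accept only values of the default's type
        self.validator = validator or (lambda value: type(value) is type(self.default))

    def validate(self, value):
        if not self.validator(value):
            raise TypeError(f"Invalid value {value!r} for preference "
                            f"with default {self.default!r}")
        return value

# A bool-typed preference needs no explicit type-checking validator:
optimise = Preference(default=True)
optimise.validate(False)    # accepted: bool matches the default's type
# optimise.validate(128)    # would raise TypeError: int is not bool
```

This is why the explicit bool/int validators in the PR are redundant: the type check comes for free from the default value.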
Thanks for the thorough review - you are right, it is safer to set a workable default blocksize even for kernels that are optional. If the kernels don't exist, those values are ignored anyway. I removed the redundant validators. I am happy if you can simplify how the preferences are passed into the template!
I did this to run a benchmark with fixed block sizes controlled from the brian2genn script, overriding the usual GeNN blocksize optimisation. This adds preferences for enabling or disabling blocksize optimisation, as well as individual blocksizes for all GeNN CUDA kernels. Users can now set
devices.genn.optimise_blocksize = False
and any blocksize, e.g. devices.genn.neuron_blocksize = 128,
to control block sizes individually. I think it does not hurt to make this available to everybody in the next release.
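How the fixed block sizes described above could flow from preferences into kernel configuration can be sketched as follows (preference names are taken from the comment above; the helper function and the optimiser fallback value are hypothetical, not brian2genn's actual code):

```python
# Hypothetical sketch: choose a kernel's block size from user preferences
# when optimisation is disabled, otherwise defer to GeNN's optimiser.
prefs = {
    'devices.genn.optimise_blocksize': False,   # disable GeNN's optimiser
    'devices.genn.neuron_blocksize': 128,       # user-fixed block sizes
    'devices.genn.learning_blocksize': 32,
}

def kernel_blocksize(kernel, prefs, optimised=64):
    """Return the block size for `kernel`: the user-fixed value when
    optimisation is disabled, otherwise the optimiser's result
    (`optimised` stands in for GeNN's computed value here)."""
    if prefs['devices.genn.optimise_blocksize']:
        return optimised
    return prefs[f'devices.genn.{kernel}_blocksize']

print(kernel_blocksize('neuron', prefs))    # 128: optimisation disabled
print(kernel_blocksize('learning', prefs))  # 32
```

With `optimise_blocksize` set back to `True`, the per-kernel preferences are simply ignored, matching the remark above that unused values do no harm.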