Closed Zauze closed 1 year ago
You are probably right about the error diagnosis. Leaky ReLU is skipped because it is not compatible with the current interpretation of rate-encoded spiking neurons: a negative activation would require a neuron to fire at a negative rate. You can work around this issue by replacing the leaky ReLU with a plain ReLU, which will likely require retraining, or by coming up with a neuron encoding that can represent negative activation values.
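The incompatibility can be illustrated with a minimal sketch (a hypothetical helper, not the toolbox's actual code): under rate coding, an activation maps to a spike count over a time window, so negative values simply have no encoding.

```python
def rate_encode(activation, t_window=100):
    """Rate-code an activation as a spike count over a time window.

    Illustrative sketch only (not the SNN toolbox's implementation):
    the firing rate is proportional to the activation, so a negative
    activation would demand a negative spike count -- which is why
    layers whose outputs can be negative, like LeakyReLU, are skipped.
    """
    if activation < 0:
        raise ValueError("cannot rate-encode a negative activation")
    return round(activation * t_window)

# A plain ReLU clamps to zero, so its outputs are always encodable:
relu_out = max(0.0, -0.3)           # -> 0.0, fine
# A leaky ReLU lets negative values through (e.g. alpha * x for x < 0),
# which rate_encode cannot represent:
leaky_out = 0.1 * -0.3              # -> -0.03, would raise above
```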
Hi guys, I am currently trying to convert tiny-YOLOv3 to an SNN (without simulation etc.), but I am running into the same KeyError as in #82.
My converted architecture looks like this:
The error occurs when handling `max_pooling2d_1` inside the `parse` function of `snntoolbox/parsing/utils.py`. I suppose this is because the previous layer, `leaky_re_lu_1`, is skipped and not added to the `name_map` variable; then `self.get_inbound_names` is called with the pooling layer, whose only inbound layer is the leaky ReLU, which is not inside `name_map`. The only layer that is inside the `name_map` dict is `conv2d_1`.

I am using the development version of the toolbox (0.6.0) inside a conda environment (but installed with pip in the activated environment).
TensorFlow version: 2.9.1
Here is the full stack trace:
Config file is attached as txt below.
config.txt