synsense / rockpool

A machine learning library for spiking neural networks. Supports training with both torch and jax pipelines, and deployment to neuromorphic hardware.
https://rockpool.ai
GNU Affero General Public License v3.0

[DYNAP-SE2] Output layer's neurons seem to be missing in the generated hardware configuration #10

Open · MarcoBramini opened this issue 1 year ago

MarcoBramini commented 1 year ago

I'm opening this issue because I couldn't find a way to identify the output neurons on DYNAP-SE2 after hardware deployment:


# Imports assumed from the Rockpool DYNAP-SE2 deployment pipeline
import numpy as np
from rockpool.nn.modules import LinearTorch, LIFTorch
from rockpool.nn.combinators import Sequential
from rockpool.devices.dynapse import mapper, autoencoder_quantization, config_from_specification

n_input_channels = 12
n_population = 32
n_output_channels = 2
# neuron_parameters (LIF parameter dict) is defined elsewhere

net = Sequential(
    LinearTorch((n_input_channels, n_population)), # 12 input neurons
    LIFTorch(n_population, **neuron_parameters), # 32 neurons
    LinearTorch((n_population, n_population)),
    LIFTorch(n_population, has_rec=True, **neuron_parameters), # 32 neurons
    LinearTorch((n_population, n_output_channels)),
    LIFTorch(n_output_channels, **neuron_parameters), # 2 output neurons
) # Total neurons: 12+32+32+2 = 78

net_graph = net.as_graph()
spec = mapper(net_graph)

spec["Iscale"] *= 10

spec.update(autoencoder_quantization(**spec))
config = config_from_specification(**spec)

print(spec['n_neuron']) # Correctly prints 66 neurons (not counting the 12 input neurons)

# Print all synapses tags for every allocated neuron
tag = []
for core in config["config"].chips[0].cores:
    for neuron in core.neurons:
        for synapse in neuron.synapses:
            tag.append(synapse.tag)
print(np.unique(tag))
# Prints the tags of 76 neurons (but they should be 78): 
# [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
# 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
# 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
# 72 73 74 75]

# Print all destinations tags for every allocated neuron
tag = []
for core in config["config"].chips[0].cores:
    for neuron in core.neurons:
        for destination in neuron.destinations:
            if destination.x_hop != -7 and destination.tag != 0:
                tag.append(destination.tag)
print(np.unique(tag))
# Prints the tags of 64 neurons (but they should be 66):
# [12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
# 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59
# 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75]
ugurcancakal commented 1 year ago

@MarcoBramini thanks for pointing this out.

First of all, the number of tags does not necessarily need to equal the number of neurons, because tags are only used where there is a connection. In short, you can see that the last two columns of spec['weights_rec'] are fully zero. In that case, no CAM will be allocated for the last two neurons.
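As a quick check, here is a minimal sketch (not from the thread) that lists the neurons whose column in the quantized recurrent weights is entirely zero; it assumes the first two axes of spec['weights_rec'][0] are (pre, post), as the indexing in the workaround further below suggests:

import numpy as np

w_rec = np.asarray(spec['weights_rec'][0], dtype=float)

# Sum over every axis except the postsynaptic one, so any trailing
# gate/synapse axis is collapsed as well
col_sums = np.abs(w_rec).sum(axis=tuple(i for i in range(w_rec.ndim) if i != 1))

print(np.where(col_sums == 0)[0])  # with the network above: the last two (output) neurons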


Let's put a breakpoint in config_from_specification at line 186, sram = allocator.SRAM_content(. There you can access the allocator object.

You can see that

allocator.n_in = 12
allocator.n_neuron = 66

which matches the spec dictionary.

If you call allocator.tag_selector() there, it will return the tags allocated for input connections, recurrent connections, and output connections.

For input connections you'll see that [0..11] are allocated, and for recurrent connections [12..77] are allocated.

But the issue is that the last two tags 76 and 77 are not being used by any connection.
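One hedged way to see the unused tags directly from the config is to diff the expected recurrent tag range against the tags actually programmed into the CAMs, reusing the collection loop from the original post (n_in = 12 is taken from the example network, not from a spec field):

n_in = 12                     # input channels in the example above
n_neuron = spec['n_neuron']   # 66

expected = set(range(n_in, n_in + n_neuron))   # recurrent tags [12 .. 77]

used = set()
for core in config["config"].chips[0].cores:
    for neuron in core.neurons:
        for synapse in neuron.synapses:
            used.add(synapse.tag)

print(sorted(expected - used))   # here: [76, 77]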

In the allocator, at line 201, content_rec = self.matrix_to_synapse(, you can see that self.weights_rec is used as the reference matrix to create the CAM content. Since the last two rows of self.weights_rec consist only of 0s, no CAM will be allocated!

You can also see this in spec['weights_rec']: its last two columns are fully zero.

The reason for this is the autoencoder_quantization(**spec) step: the output weights do not survive quantization there and are unfortunately pruned.
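To confirm that the pruning happens at this step, a sketch like the following can snapshot the float weights before calling autoencoder_quantization and compare the all-zero columns before and after; it assumes the float and quantized weights share the same (pre, post) layout, and the [0] indexing follows the workaround in the next comment:

import numpy as np

def zero_columns(w):
    # Indices of neurons whose incoming column is entirely zero
    w = np.abs(np.asarray(w, dtype=float))
    return np.where(w.sum(axis=tuple(i for i in range(w.ndim) if i != 1)) == 0)[0]

w_float = np.squeeze(np.asarray(spec['weights_rec'], dtype=float))  # before quantization
spec.update(autoencoder_quantization(**spec))

print(zero_columns(w_float))                 # expected: no all-zero columns
print(zero_columns(spec['weights_rec'][0]))  # expected: the two output neurons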

ugurcancakal commented 1 year ago

You can force the quantized 'weights_rec' to have some connections like this:

...
spec.update(autoencoder_quantization(**spec))

# Manually force a connection for each of the last two (output) neurons,
# so that their tags end up being used
spec['weights_rec'][0][-1][0] = 1
spec['weights_rec'][0][-2][0] = 1

config = config_from_specification(**spec)
...

In that case, you'll see that 77 tags will be used.
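To verify, you can re-run the tag-collection loop from the original post on the new config (nothing new is assumed here, it is just the same loop again):

import numpy as np

tag = []
for core in config["config"].chips[0].cores:
    for neuron in core.neurons:
        for synapse in neuron.synapses:
            tag.append(synapse.tag)

print(len(np.unique(tag)))   # the previously unused tags should now appear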

DylanMuir commented 1 year ago

@ugurcancakal in Marco's original case, how can he identify the hardware neuron IDs of the output neurons? Can he use the mapped graph, for example?