TENNLab-UTK / fpga

FPGA neuromorphic elements, networks, processors, tooling, and software interfaces.
Mozilla Public License 2.0

Unimplemented RISP parameters #19

Closed: jimplank closed this issue 3 weeks ago

jimplank commented 3 weeks ago

Probably not a bad idea to enumerate the RISP parameters that will be unimplemented. Here's my thinking -- please let me know your comments:

  1. discrete = false. No floating point anywhere.
  2. leak - I assume both leak modes are implemented and configurable on a neuron-by-neuron basis.
  3. fire_like_ravens = true. You tell me. I don't think it is important for this one to be supported. The whole intent is to be able to use the faster RISP simulator to train networks that work on RAVENS.
  4. specific_weights = true. In case you haven't read up on this one, it's a bit of a mess. It lets you specify a weights vector that constrains the weights to specific values. Those values may be floating point, which is why I don't think this should be an FPGA parameter. What makes it a mess is that when you use weights, you set discrete to true, and the weight values stored in the neurons are numbers from 0 to weights.size()-1. The simulator uses each stored value as an index into the weights array (see the sketch after this list). The reason discrete is true is so that EONS constrains its setting of the weights to integers. I'm not sure what it does with the thresholds, ha ha, so I need to work on this (it was a quick add for our ARL project). I don't think it makes sense for this to be an FPGA-supported parameter unless we think it could be a useful feature when the weights are integers.
  5. noisy_weights. Ditto
  6. noisy_seed, weights, stds. Also ditto.
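
To make the indexing in item 4 concrete, here is a minimal sketch of the weights-array lookup as a constant table in SystemVerilog. It assumes integer weight entries (the only case that would make sense on the FPGA), and every name in it is illustrative rather than anything from this repo:

```systemverilog
// Hypothetical sketch: the value stored on a synapse is an index into a
// constant weights table, mirroring the simulator's specific_weights lookup.
// All names (weight_lut, TABLE, etc.) are illustrative, not from this repo.
module weight_lut #(
    parameter int NUM_WEIGHTS = 4,            // weights.size()
    parameter int WEIGHT_W    = 8             // width of a resolved integer weight
) (
    input  logic [$clog2(NUM_WEIGHTS)-1:0] weight_idx, // value stored on the synapse
    output logic signed [WEIGHT_W-1:0]     weight_val  // actual weight applied as charge
);
    // Example table, sized for the default parameters; integers only,
    // since the hardware has no floating point.
    localparam logic signed [WEIGHT_W-1:0] TABLE [NUM_WEIGHTS] =
        '{8'sd1, 8'sd2, 8'sd5, 8'sd11};

    // Pure combinational lookup: synthesizes to LUTs or a small ROM.
    assign weight_val = TABLE[weight_idx];
endmodule
```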

That's all I see -- Jim

keegandent commented 3 weeks ago
  1. I don't think implementing floating point in hardware will ever be on the roadmap, but I wouldn't be against attempting a fixed-point approximation at some future point (see the first sketch after this list). Definitely not a priority right now.
  2. All three leak modes should be functional as the repo stands.
  3. I actually think this might be one of the easier ones to implement; I just haven't taken the time to do it yet. My guess is that I need a one-cycle delay register on fire (see the second sketch after this list), but I need to double-check.
  4. In the FPGA implementation as it exists, the synapse weight will always need to be an integer value because of the separation of responsibilities between the neuron RTL module and the synapse RTL module. However, I can imagine a scenario where we integrate pre-synapses as a component of the neurons, in which case we could maybe support non-integer weight additions that resolve to integer charges. That speculation could be putting a lot of faith in the synthesizer to compute all arithmetic permutations of incoming fires, though. Regarding weights arrays and indexing, that should be easily achievable in hardware, because it's essentially a LUT or constant memory block (as sketched above).
  5. I'm actually very interested in the concept of pseudo-randomness on the FPGA, and we might even be able to replicate the simulator's PRNGs. Implementing a static network on an FPGA is cool, but some form of dynamic behavior could really make this intriguing. A CLR call could even include a seed instead of wasting bits (see the LFSR sketch after this list).
  6. Ditto
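
For item 1, here is a minimal sketch of what a fixed-point approximation of non-discrete charge could look like, assuming an integer accumulator that carries FRAC_W fractional bits. Nothing here is from the repo; the module, names, and widths are all hypothetical:

```systemverilog
// Hypothetical fixed-point sketch: approximate non-discrete potential with
// a Q(INT_W).(FRAC_W) accumulator. Illustrative only, not the repo's RTL.
module fixed_point_neuron #(
    parameter int INT_W  = 8,   // integer bits of potential
    parameter int FRAC_W = 4    // fractional bits (resolution 1/16)
) (
    input  logic                           clk,
    input  logic                           rst,
    input  logic signed [INT_W+FRAC_W-1:0] charge_in,  // incoming charge, same format
    input  logic signed [INT_W+FRAC_W-1:0] threshold,  // same fixed-point format
    output logic                           fire
);
    logic signed [INT_W+FRAC_W-1:0] potential;

    always_ff @(posedge clk) begin
        if (rst)
            potential <= '0;
        else if (potential + charge_in >= threshold)
            potential <= '0;                    // fire and reset to zero
        else
            potential <= potential + charge_in; // accumulate fractional charge
    end

    // Fire in the same cycle the accumulated potential crosses threshold.
    assign fire = !rst && (potential + charge_in >= threshold);
endmodule
```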
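For item 3, the guessed one-cycle delay register on fire would be tiny. A sketch, with hypothetical names and untested against actual RAVENS timing:

```systemverilog
// Hypothetical sketch for fire_like_ravens: delay the fire output by one
// cycle behind the RISP-timed fire. Names are illustrative only.
module fire_delay (
    input  logic clk,
    input  logic rst,
    input  logic fire_in,   // RISP-timed fire
    output logic fire_out   // fire delayed one cycle
);
    always_ff @(posedge clk)
        fire_out <= rst ? 1'b0 : fire_in;
endmodule
```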
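For items 5 and 6, one plausible on-FPGA PRNG is a maximal-length LFSR whose seed is loaded by the clear command, as suggested in item 5. This is a generic 16-bit Fibonacci LFSR (taps 16, 14, 13, 11), not anything the repo implements, and it would not replicate the simulator's PRNG sequence without further work:

```systemverilog
// Hypothetical sketch: 16-bit maximal-length LFSR, seeded via the CLR call.
// All names are illustrative; matching the simulator's PRNG is a separate problem.
module lfsr16 (
    input  logic        clk,
    input  logic        load_seed,   // e.g., asserted during a CLR call
    input  logic [15:0] seed,        // seed carried in the CLR payload
    output logic [15:0] prn          // pseudo-random state
);
    always_ff @(posedge clk) begin
        if (load_seed)
            prn <= (seed == '0) ? 16'h1 : seed;  // avoid the all-zero lockup state
        else
            // Shift left, feeding back taps 16, 14, 13, 11 into the LSB.
            prn <= {prn[14:0], prn[15] ^ prn[13] ^ prn[12] ^ prn[10]};
    end
endmodule
```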