TENNLab-UTK / fpga

FPGA neuromorphic elements, networks, processors, tooling, and software interfaces.
Mozilla Public License 2.0

"Binary" or "Fire" Source Modes #8

Open · keegandent opened this issue 2 months ago

keegandent commented 2 months ago

Problem

Currently, both Dispatch and Stream source modes require the processor to specify the charge applied to input neurons as part of the spike call. Transmitting charge at this resolution can consume a fair amount of input bandwidth and may not reflect the application's actual need, particularly for event-based or rate-encoded inputs.

Proposed Solution

It may be advantageous for a network to maintain high precision for internal charge but allow "binary" or "fire" overrides which inject input neurons with maximum charge or none at all.

Hurdles

Networks using this abbreviated input spiking method will need to be trained with spike encoders that operate in this binary fashion. To ensure there is no mismatch between user expectations and behavior, the fpga.Processor should check that apply_spikes calls do not specify anything less than a full charge when this source mode is active.
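The validation described above could look something like the following sketch. The names here (`SourceMode`, `Spike`, `check_spikes`, `max_charge`) are illustrative placeholders, not the actual fpga.Processor API:

```python
# Hypothetical sketch of full-charge validation for a binary/"fire"
# source mode. Names are assumptions, not the real fpga package API.
from dataclasses import dataclass
from enum import Enum, auto


class SourceMode(Enum):
    DISPATCH = auto()
    STREAM = auto()
    FIRE = auto()  # proposed binary mode


@dataclass
class Spike:
    neuron_id: int
    time: int
    charge: float


def check_spikes(mode: SourceMode, spikes: list[Spike], max_charge: float) -> None:
    """Reject any spike carrying less than full charge while FIRE mode is active."""
    if mode is not SourceMode.FIRE:
        return  # Dispatch/Stream modes accept arbitrary charges
    for s in spikes:
        if s.charge < max_charge:
            raise ValueError(
                f"FIRE source mode requires full charge {max_charge}, "
                f"got {s.charge} on neuron {s.neuron_id}"
            )
```

A call like `check_spikes(SourceMode.FIRE, spikes, max_charge)` at the top of `apply_spikes` would surface the mismatch immediately instead of silently truncating or accepting partial charges.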

Expected API Impact

This will create a non-breaking API change in fpga.Processor.__init__ and will change the underlying behavior of fpga.Processor.apply_spikes for processors using this new input mode.

jimplank commented 2 months ago

Agreed -- I think this is a good idea. I've been doing a ton of EONS runs with these encoders.

keegandent commented 2 months ago

@jimplank Are these networks "tagged" in any particular way to indicate their inputs expect full weight spikes? Or is there a particular pattern I can query for in the encoder?

I'm just trying to determine if I can somehow automate the selection of this input mode and potentially bake it into the network RTL to reduce LE usage (though I think the EDA optimizer might do this anyway).

jimplank commented 2 months ago

Not really -- you just need to check the encoder. If it is "spikes", "spike", "rate", or "temporal", then the default spike will have a value of 1, which should translate to the max. You can set the "min" to something other than 1, and then the last spike gets scaled. So I'd check the encoder: if it is not of type "val", and the "min" is not specified (or is 1), then the spikes are binary.
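The heuristic Jim describes could be sketched roughly as below. The dict-shaped encoder description and key names (`"type"`, `"min"`) are assumptions for illustration, not the framework's actual encoder schema:

```python
# Hypothetical sketch: decide whether a network's input spikes are
# binary (value 1 = full charge) from its encoder description.
BINARY_ENCODER_TYPES = {"spikes", "spike", "rate", "temporal"}


def spikes_are_binary(encoder: dict) -> bool:
    """Return True if every default spike carries a value of 1."""
    if encoder.get("type") not in BINARY_ENCODER_TYPES:
        return False  # e.g. "val" encoders carry analog values
    # When "min" is unspecified it defaults to 1, so the last spike
    # is not scaled down and all spike values are exactly 1.
    return encoder.get("min", 1) == 1
```

A check like this could drive automatic selection of the binary input mode when building the network RTL.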

keegandent commented 2 months ago

the default spike will have a value of 1, which should translate to the max. You can set the "min" to something other than 1

This actually brings up another question I have. These networks are all discrete, so (for normal sources, not the binary ones being proposed in this ticket) should a charge value of 1 passed to apply_spikes() be taken to mean an actual charge of 1 on the scale of min_weight to max_weight (like $[-128, 127]$ for a signed 8-bit integer)? That's the current behavior. Or, should a charge value of 1 be scaled up to max_weight? It seems a lot of the Framework examples revolve around floating-point charges in $[-1.0, 1.0]$. If the current API implementation isn't correct, I need to fix that first.

BGull00 commented 2 months ago

RAVENS in the framework has something called “spike value factor” (or something like that), which is a value that all input spike values are multiplied by for this very reason. This allows an input spike value of 1 to be scaled up to the max threshold any neuron can have, which should force a spike (though I believe inhibitory incoming synapses to input neurons can still change this behavior). All of this is off the top of my head though so take that with a grain of salt. Look at that processor parameter in the framework repo for more accurate information (or wait until Dr. Plank replies).On Jul 30, 2024, at 2:15 PM, Keegan Dent @.***> wrote:

jimplank commented 2 months ago

Hi Keegan -- how to interpret the charge from apply_spikes() is up to the processor. I thought a 0-1 value from apply_spikes() maps to 0 through max_weight on RISP/RAVENS, but I'd have to check the simulator code to be sure. I can't do that right now, but I can later; just let me know. -- Jim

keegandent commented 2 months ago

@jimplank You are right according to the RISP README and some tests I did in Python with the risp module. I have opened #10 to be addressed next and then I will come back to this.