**Describe the bug**
I was observing the synapse weight values stored in a netx model I had exported and they seemed scaled up rather than down. See example:
**To reproduce current behavior**
Steps to reproduce the behavior:
When I run this code ...
```
Original Weights: [-1 1]
Optimized Weights: [-128 128]
Number of Weight Bits: 2
Weight Exponent: -7
Sign Mode: 1
```
**Expected behavior**
- `optimized_weights` should match the original weights, I believe.
- `weight_exponent` should be zero.
- The rest are okay.
From what I understand, this function should try to compress the weights as much as possible; the currents are then scaled by `2**weight_exponent`.
In short:
```python
np.all(weight == optimized_weights * 2**weight_exponent)
```
Please tell me if this is not the case and I will close the issue.
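As a sanity check, the invariant above can be evaluated against the values reported in the first example (a minimal NumPy sketch; the variable names are mine, not Lava's):

```python
import numpy as np

# Values reported in the first example above
original = np.array([-1, 1])
optimized = np.array([-128, 128])
weight_exponent = -7

# The invariant I expect the optimizer to preserve:
# optimized * 2**weight_exponent should recover the original weights
recovered = optimized * 2.0**weight_exponent
print(recovered)                      # [-1.  1.]
print(np.all(original == recovered))  # True
```

If this check holds, the scaling is at least self-consistent, even though the weights are not stored in their most compressed form and the exponent is not zero.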
Additional potential failure example:
```
Original Weights: [-1 -2 -3 -4]
Optimized Weights: [ -64 -128 -192 -256]
Number of Weight Bits: 2
Weight Exponent: -6
Sign Mode: 3
```
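The same check can be applied to this second example (again a sketch with illustrative variable names):

```python
import numpy as np

# Values from the second reported example
original = np.array([-1, -2, -3, -4])
optimized = np.array([-64, -128, -192, -256])
weight_exponent = -6

# Does optimized * 2**weight_exponent recover the originals?
print(np.all(original == optimized * 2.0**weight_exponent))  # True
```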
**Environment (please complete the following information):**
- Device: Local
- OS: Windows
- Lava version: main branch