neuromorphs / NIR

Neuromorphic Intermediate Representation reference implementation
https://neuroir.org/docs
BSD 3-Clause "New" or "Revised" License

Neuron parameters shift in nir_to_lava script #111

Open orihane-psee opened 1 month ago

orihane-psee commented 1 month ago

Hello,

I am currently using the nir_to_lava.py script to deploy and test my snnTorch network on Loihi hardware via NIR graphs. So far, I have run the test with Loihi2SimCfg as done in the Lava example (https://neuroir.org/docs/examples/lava/nir-conversion.html#nir-to-lava-dl), and it worked fine. However, since I am trying to deploy the model on the actual hardware, I need to use the 'fixed_pt' configuration. I noticed that some parameters are shifted during the conversion, such as the LIF threshold and the current/voltage decays. Could you shed some light on these choices? How can I verify that the converted model parameters are coherent?
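For context, this is roughly how I switch between the two simulation configurations (a minimal sketch; Loihi2SimCfg and its select_tag argument come from the standard Lava API, and net stands in for my network):

from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi2SimCfg

# floating-point simulation (matches the docs example and works fine)
run_cfg = Loihi2SimCfg(select_tag='floating_pt')
# bit-accurate fixed-point simulation, needed before hardware deployment
run_cfg = Loihi2SimCfg(select_tag='fixed_pt')
# net.run(condition=RunSteps(num_steps=100), run_cfg=run_cfg)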

Thank you for your attention.

Best regards,

RIHANE Ossama

Computer Vision Intern at PROPHESEE

stevenabreu7 commented 1 month ago

Hi, thanks for looking into this. Can you show us what shift in parameter values you observe? Then we can look into it together. Example code and output would be very helpful, thanks!

orihane-psee commented 1 month ago

Hello,

Thanks for your response. In the nir_to_lava.py script (https://github.com/neuromorphs/NIR/blob/main/paper/nir_to_lava.py), there is a function that converts NIR nodes into Lava structures:

# Imports needed by this snippet; ImportConfig is defined earlier in the
# same script.
import nir
import numpy as np
from lava.proc.dense.process import Dense
from lava.proc.lif.process import LIF

def _nir_node_to_lava(node: nir.NIRNode, import_config: ImportConfig):
    """Convert a NIR node to a Lava node. May return a list of two Lava nodes, but only
    in the case of a LIF node, which is preceded by a Dense node."""

    if isinstance(node, nir.LIF):
        dt = import_config.dt
        # voltage leak: dv = dt / tau
        tau_mem = node.tau
        dv = dt / tau_mem
        vthr = node.v_threshold  # * 10
        # no current leak
        tau_syn = None  # 1/200
        du = 1.0  # no current leak
        # correction for input weights
        correction = dt / node.tau
        w = np.ones((1, 1))
        w *= correction

        if import_config.fixed_pt:
            dv = int(dv * 4095)
            du = int(du * 4095)
            vthr = int(vthr * 131071) >> 9
            w = (w * 256).astype(np.int32)

        lif = LIF(
            shape=(1,), # u=0., # v=0.,
            du=du,
            dv=dv,
            vth=vthr,
            bias_mant=0, bias_exp=0,  # no bias
            name='lif'
        )
        dense = Dense(weights=w)
        dense.a_out.connect(lif.a_in)
        return [dense, lif]

For example, the nir.LIF object is converted into a list of Lava Dense and LIF objects. However, when using the fixed_pt configuration, there is a shift in the LIF parameters (dv = int(dv * 4095), vthr = int(vthr * 131071) >> 9, ...). I am trying to understand the logic behind this, specifically the precision of the model (the number of bits in which each parameter is encoded).
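To make the scaling concrete, here is the arithmetic with illustrative values (the tau and dt below are my own example numbers, not values from the script):

dt = 1e-4           # simulation time step (illustrative)
tau_mem = 5e-3      # membrane time constant (illustrative)
dv = dt / tau_mem   # = 0.02, floating-point voltage decay

# du/dv look like they are mapped onto a 12-bit range (0..4095):
dv_fixed = int(dv * 4095)             # = 81
du_fixed = int(1.0 * 4095)            # = 4095, i.e. no current leak

# vthr looks like it uses a 17-bit range (2**17 - 1 = 131071),
# followed by a 9-bit right shift:
vthr_fixed = int(1.0 * 131071) >> 9   # = 255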

These are two graphs of different configs when deploying my network:

floating_pt (which is correct): [plot of u and v traces]

fixed_pt (u and v values are not as expected): [plot of u and v traces]

The black line in the membrane potential graphs marks the threshold.

stevenabreu7 commented 1 month ago

Hmm, there seems to be a scaling issue somewhere. Have you tried using the lava-dl configuration instead, and if so, does the issue persist there as well?

The parameter values were rescaled to adapt them to the Lava neuron models in fixed precision; see lava-nc/lava @ src/lava/proc/lif/models.py#L240 for details about the precision of the different parameters. There might be an exponent mismatch somewhere, but it's strange that this hasn't caused issues in our LIF conversion tests...
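As a quick sanity check, you could invert the scaling from nir_to_lava.py and compare against the original values (a sketch; the constants mirror the snippet you posted and are my assumption about the intended bit widths):

def roundtrip_error(dv_float, vthr_float):
    # forward scaling, as in nir_to_lava.py
    dv_fixed = int(dv_float * 4095)
    vthr_fixed = int(vthr_float * 131071) >> 9
    # inverse scaling
    dv_back = dv_fixed / 4095
    vthr_back = (vthr_fixed << 9) / 131071
    return abs(dv_back - dv_float), abs(vthr_back - vthr_float)

# small errors suggest the scaling itself is consistent; large ones
# point to an exponent mismatch
print(roundtrip_error(0.02, 1.0))  # -> (~2e-4, ~4e-3)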

If you have some time, would you be willing to write up a test case with your neuron parameters? Then we can make sure that our existing tests pass, as well as yours.
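Something along these lines would already be great (a sketch only; it assumes ImportConfig takes dt and fixed_pt fields as the snippet above suggests, and that the LIF parameters are reachable via Lava's Var.init, so please adjust to the actual script):

import numpy as np
import nir
from nir_to_lava import ImportConfig, _nir_node_to_lava  # adjust import path

def test_lif_fixed_pt_scaling():
    # neuron parameters from the NIR graph (replace with your own values)
    node = nir.LIF(
        tau=np.array([5e-3]), r=np.array([1.0]),
        v_leak=np.array([0.0]), v_threshold=np.array([1.0]),
    )
    _, lif_float = _nir_node_to_lava(node, ImportConfig(dt=1e-4, fixed_pt=False))
    _, lif_fixed = _nir_node_to_lava(node, ImportConfig(dt=1e-4, fixed_pt=True))
    # fixed-point decay should equal the floating-point decay on a 12-bit scale
    dv_float = float(lif_float.dv.init)
    assert int(lif_fixed.dv.init) == int(dv_float * 4095)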