Gavinator98 closed this pull request 1 year ago.
Thanks for taking a look over the code! I should be able to make edits next week.
I'm conflicted on whether the LCA neuron and the Accumulator neuron should be their own processes or LIF processes. Initially I had them as LIF processes, but I implemented them in microcode as well, which led me toward the current design. My main concern is that process models for the same process shouldn't differ in behavior.
The Accumulator neuron is the most reasonable candidate to be a LIF process; it would just be a non-leaky, graded TernaryLIF. None of the Lava LIF process models support graded spikes right now, so it would be a separate process model.
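Roughly, the accumulator behavior I mean looks like this (a NumPy sketch of the update rule, not the actual Lava model; accumulator_step is just an illustrative name, and vth_hi/vth_lo are the ternary thresholds):

import numpy as np

def accumulator_step(v, a_in, vth_hi, vth_lo):
    # Non-leaky: integrate the input, emit the full accumulated value as a
    # graded spike once either threshold is crossed, then reset to zero.
    v = v + a_in
    s_out = v * np.logical_or(v > vth_hi, v < vth_lo)
    v = np.where(s_out != 0, 0, v)
    return v, s_out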
The LCA neuron is graded with a soft-threshold activation, doesn't reset its voltage, and has a self-reinforcement term for the two-layer model. So while it shares parameters with a LIF neuron, its behavior is quite different.
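For reference, the soft-threshold activation is just the shrinkage function from sparse coding; a minimal NumPy sketch:

import numpy as np

def soft_threshold(v: np.ndarray, vth: float) -> np.ndarray:
    # Graded output: zero inside [-vth, vth], linear outside;
    # the voltage itself is left untouched.
    return np.sign(v) * np.maximum(np.abs(v) - vth, 0)

soft_threshold(np.array([-2.0, -0.5, 0.5, 2.0]), 1.0)  # -> [-1., 0., 0., 1.]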
In both cases there would be new process models with an "LCA" tag that would need to be selected in the run configuration. Also, if the models inherit most of their dynamics from the existing fixed- and floating-point LIF models, this likely requires separate subprocess models, since the fixed- and floating-point models define du differently (4095 vs. 1, a perfect example of what I want to avoid by making everything a LIF).
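To illustrate the du mismatch (this is my reading of the Lava LIF models; the 12-bit decay constant and the ds_offset of 1 are from the fixed-point implementation, so treat the exact arithmetic as an assumption):

import numpy as np

# Floating point: du = 1 means the current fully decays each step.
u_next_float = 1000.0 * (1.0 - 1.0)  # -> 0.0

# Fixed point: du is a 12-bit constant, so du = 4095 (plus the model's
# ds_offset of 1) likewise means full decay.
du, ds_offset = 4095, 1
u_next_fixed = np.right_shift(np.int64(1000) * (4096 - (du + ds_offset)), 12)  # -> 0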
As a reference, here's the version with LIF models (I used exception_proc_model_map instead of tags to select these in the run config):
# Copyright (C) 2023 Battelle Memorial Institute
# SPDX-License-Identifier: BSD-2-Clause
# See: https://spdx.org/licenses/
import numpy as np
from lava.magma.core.model.sub.model import AbstractSubProcessModel
from lava.magma.core.model.py.ports import PyOutPort
from lava.magma.core.process.process import AbstractProcess
from lava.magma.core.model.py.type import LavaPyType
from lava.magma.core.resources import CPU
from lava.magma.core.sync.protocols.loihi_protocol import LoihiProtocol
from lava.magma.core.decorator import implements, requires, tag
from lava.proc.lif.process import TernaryLIF, LIF
from lava.proc.dense.process import Dense
from lava.proc.lif.models import AbstractPyLifModelFloat, PyLifModelBitAcc, PyTernLifModelFixed
from lca.processes import LCA2Layer
@implements(proc=LIF, protocol=LoihiProtocol)
@requires(CPU)
@tag('floating_pt')
class PySoftThresholdFloat(AbstractPyLifModelFloat):
    # This model might spike too frequently. Implement an accumulator if so.
    s_out: PyOutPort = LavaPyType(PyOutPort.VEC_DENSE, float)
    vth: float = LavaPyType(float, float)

    def subthr_dynamics(self, activation_in: np.ndarray):
        # It's easier to add the self-reinforcement term here than to have a
        # recurrent connection.
        super().subthr_dynamics(activation_in
                                + self.spiking_activation() * self.dv)

    def reset_voltage(self, spike_vector: np.ndarray):
        # Don't reset the voltage.
        return

    def spiking_activation(self):
        return np.sign(self.v) * np.maximum(np.abs(self.v) - self.vth, 0)
@implements(proc=LIF, protocol=LoihiProtocol)
@requires(CPU)
@tag('fixed_pt')
class PySoftThresholdFixed(PyLifModelBitAcc):
    def subthr_dynamics(self, activation_in: np.ndarray):
        prev_spike_vector = self.spiking_activation()
        super().subthr_dynamics(activation_in)
        decay_const_v = self.dv + self.dm_offset
        neg_voltage_limit = -np.int32(self.max_uv_val) + 1
        pos_voltage_limit = np.int32(self.max_uv_val) - 1
        spike_feedback = np.int64(prev_spike_vector) * decay_const_v
        spike_feedback = np.sign(spike_feedback) * np.right_shift(
            np.abs(spike_feedback), self.decay_shift)
        spike_feedback = np.int32(spike_feedback)
        self.v[:] = np.clip(self.v + spike_feedback,
                            neg_voltage_limit, pos_voltage_limit)

    def reset_voltage(self, spike_vector: np.ndarray):
        # Don't reset the voltage.
        return

    def spiking_activation(self):
        return np.sign(self.v) * np.maximum(np.abs(self.v)
                                            - self.effective_vth, 0)
@implements(proc=TernaryLIF, protocol=LoihiProtocol)
@requires(CPU)
@tag('floating_pt')
class PyAccumulatorFloat(AbstractPyLifModelFloat):
    # The graded spike LIF neuron was removed in Lava 0.4,
    # so this reimplements the functionality.
    s_out: PyOutPort = LavaPyType(PyOutPort.VEC_DENSE, float)
    vth_hi: float = LavaPyType(float, float)
    vth_lo: float = LavaPyType(float, float)

    def reset_voltage(self, spike_vector: np.ndarray):
        self.v[spike_vector != 0] = 0

    def spiking_activation(self):
        return self.v * np.logical_or(self.v > self.vth_hi,
                                      self.v < self.vth_lo)
@implements(proc=TernaryLIF, protocol=LoihiProtocol)
@requires(CPU)
@tag('fixed_pt')
class PyAccumulatorFixed(PyTernLifModelFixed):
    def spiking_activation(self):
        return self.v * np.logical_or(self.v > self.vth_hi,
                                      self.v < self.vth_lo)
@implements(proc=LCA2Layer, protocol=LoihiProtocol)
@tag('floating_pt')
class LCA2LayerModelFloat(AbstractSubProcessModel):
    def __init__(self, proc: AbstractProcess):
        threshold = proc.threshold.get()
        dt = proc.dt.get()
        tau_rc = proc.tau_rc.get()
        T = dt / tau_rc
        weights = proc.weights.get()
        input_val = proc.input.get()

        self.v1 = LIF(shape=(weights.shape[0],), du=1, vth=threshold, dv=T,
                      v=0, u=0, bias_mant=0)
        self.weights_T = Dense(weights=-weights.T, num_message_bits=24)
        # LIF in place of Accumulator
        self.res = TernaryLIF(shape=(weights.shape[1],), du=1, dv=0,
                              vth_hi=0, vth_lo=0, bias_mant=input_val)
        self.weights = Dense(weights=(weights * T), num_message_bits=24)

        self.weights.a_out.connect(self.v1.a_in)
        self.res.s_out.connect(self.weights.s_in)
        self.weights_T.a_out.connect(self.res.a_in)
        self.v1.s_out.connect(self.weights_T.s_in)

        # Expose output and voltage
        self.v1.s_out.connect(proc.out_ports.v1)
        self.res.s_out.connect(proc.out_ports.res)
        proc.vars.voltage.alias(self.v1.vars.v)
@implements(proc=LCA2Layer, protocol=LoihiProtocol)
@tag('fixed_pt')
class LCA2LayerModelFixed(AbstractSubProcessModel):
    def __init__(self, proc: AbstractProcess):
        threshold = proc.threshold.get()
        dt = proc.dt.get()
        tau_rc = proc.tau_rc.get()
        T = dt / tau_rc
        T_int = int(T * 4096)
        weights = proc.weights.get() * 2**8
        input_val = proc.input.get()
        input_exp = proc.input_exp.get()

        self.v1 = LIF(shape=(weights.shape[0],), du=4095, dv=T_int,
                      vth=threshold, v=0, u=0, bias_mant=0)
        # weight_exp shifted 8 bits for the weights, 6 for the v1 output.
        self.weights_T = Dense(weights=-weights.T, num_message_bits=24,
                               weight_exp=-14)
        # LIF in place of Accumulator
        self.res = TernaryLIF(shape=(weights.shape[1],), du=4095, dv=0,
                              vth_hi=0, vth_lo=0,
                              bias_mant=input_val, bias_exp=input_exp)
        self.weights = Dense(weights=(weights * T), num_message_bits=24,
                             weight_exp=-14)

        self.weights.a_out.connect(self.v1.a_in)
        self.res.s_out.connect(self.weights.s_in)
        self.weights_T.a_out.connect(self.res.a_in)
        self.v1.s_out.connect(self.weights_T.s_in)

        # Expose output and voltage
        self.v1.s_out.connect(proc.out_ports.v1)
        self.res.s_out.connect(proc.out_ports.res)
        proc.vars.voltage.alias(self.v1.vars.v)
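And here is roughly how I select these models in the run configuration, continuing from the listing above. The LCA2Layer constructor arguments are illustrative, based on the Vars read in the subprocess models; the run config machinery (Loihi2SimCfg, exception_proc_model_map, RunSteps) is standard Lava:

import numpy as np
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi2SimCfg

weights = np.eye(10)  # illustrative dictionary
lca = LCA2Layer(weights=weights, input=np.ones(10, dtype=int),
                threshold=1.0, dt=1e-3, tau_rc=1e-2)
run_cfg = Loihi2SimCfg(
    select_sub_proc_model=True,
    exception_proc_model_map={LIF: PySoftThresholdFloat,
                              TernaryLIF: PyAccumulatorFloat})
lca.run(condition=RunSteps(num_steps=100), run_cfg=run_cfg)
lca.stop()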
Hello,
I have been attempting to get this pull request working on my end, and I'm fairly certain I have exhausted most options, so I decided to reach out. When attempting to run test_fixed.py, I receive import errors on lava.lib.optimization.process. However, when I import just lava.lib.optimization, there are no errors. I grabbed this PR using the GitHub CLI and am certain I am working with the most up-to-date files. I would love some advice on how to get test_fixed.py to execute.
Thanks in advance.
Unfortunately I can't reproduce the issue on macOS when cloning a fresh copy and installing as described in README.md. You could try manually adding the /src directory to your PYTHONPATH environment variable to see if that helps.
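For example, from Python (equivalent to setting PYTHONPATH; the path below is a placeholder for wherever you cloned the repo):

import sys
sys.path.insert(0, "/path/to/lava-optimization/src")  # placeholder path
import lava.lib.optimization  # should now resolve the PR's modules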
I want to confirm that my setup procedure is correct: would this be the appropriate set of steps to get this PR up and running?
I don't know the exact details of the lava-nc setup, but it might not be installing lava-optimization in a way that is updated when you check out the PR. The easiest option is to install directly from the lava-optimization fork (this will automatically install lava-nc as a requirement):
git clone https://github.com/Gavinator98/lava-optimization.git
cd lava-optimization
poetry config virtualenvs.in-project true
poetry install
source .venv/bin/activate
python ./tests/lava/lib/optimization/solvers/lca/test_fixed_pt.py
Additionally, there is now a tutorial, lava-optimization/tutorials/tutorial_04_lca.ipynb, which may be a better starting point than the unit tests.
Issue Number:
Objective of pull request: Add 1-Layer and 2-Layer LCA Implementations for CPU Backend

## Pull request checklist
Your PR fulfills the following requirements:
- (pyb) passes locally
- (pyb -E unit) or (python -m unittest) passes locally

## Pull request type
Please check your PR type:
- [ ] Bugfix
- [X] Feature
- [ ] Code style update (formatting, renaming)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] Documentation changes
- [ ] Other (please describe):

## What is the new behavior?

## Does this introduce a breaking change?

## Supplemental information