arvoelke / nengolib

Nengo library of additional extensions

Instability for LegendreDelay when theta and order are large? #189

Open · tcstewar opened this issue 3 years ago

tcstewar commented 3 years ago

When I try a LegendreDelay(theta=0.3, order=10) with a 1 Hz sine input, I get the orange line below:

[figure: LegendreDelay(theta=0.3, order=10) response to the 1 Hz sine input (orange line)]

With a larger theta or higher order, it goes horribly unstable and heads off to infinity.

[figures: the same setup with larger theta and with higher order, where the output diverges toward infinity]

However, if I try implementing the LegendreDelay myself, I get the blue line, which seems to work perfectly well. Here's my implementation:

import nengo
import numpy as np
import matplotlib.pyplot as plt

import scipy.linalg
from scipy.special import legendre


class LegendreDelay(nengo.synapses.Synapse):
    def __init__(self, theta, q):
        self.q = q              # number of internal state dimensions per input
        self.theta = theta      # size of time window (in seconds)

        # Do Aaron's math to generate the matrices
        #  https://github.com/arvoelke/nengolib/blob/master/nengolib/synapses/analog.py#L536
        A = np.zeros((q, q))
        B = np.zeros((q, 1))
        for i in range(q):
            B[i] = (-1.)**i * (2*i+1)
            for j in range(q):
                A[i,j] = (2*i+1)*(-1 if i<j else (-1.)**(i-j+1)) 
        self.A = A / theta
        self.B = B / theta        

        super().__init__(default_size_in=1, default_size_out=1)

    def make_step(self, shape_in, shape_out, dt, rng, state=None):
        state = np.zeros((self.q, shape_in[0]))
        w = self.get_weights_for_delays(1)

        # Handle the fact that we're discretizing the time step
        #  https://en.wikipedia.org/wiki/Discretization#Discretization_of_linear_state_space_models
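        #  (zero-order hold: Ad = exp(A*dt), Bd = A^-1 (Ad - I) B; assumes A is invertible)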
        Ad = scipy.linalg.expm(self.A*dt)
        Bd = np.dot(np.dot(np.linalg.inv(self.A), (Ad-np.eye(self.q))), self.B)

        # this code will be called every timestep
        def step_legendre(t, x, state=state):
            state[:] = np.dot(Ad, state) + np.dot(Bd, x[None, :])
            return w.dot(state).T.flatten()
        return step_legendre

    def get_weights_for_delays(self, r):
        # compute the weights needed to extract the value at time r
        # from the network (r=0 is right now, r=1 is theta seconds ago)
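        # (e.g. r=0.5 gives the weights for reading out the value theta/2 seconds ago)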
        r = np.asarray(r)
        m = np.asarray([legendre(i)(2*r - 1) for i in range(self.q)])
        return m.reshape(self.q, -1).T

import nengolib

model = nengo.Network()
with model:
    stim = nengo.Node(lambda t: np.sin(t*2*np.pi))

    delay = nengo.Node(None, size_in=2)

    theta = 0.3
    order = 10

    dt = 0.001
    nengo.Connection(stim, delay[0], synapse=LegendreDelay(theta, order))
    nengo.Connection(stim, delay[1], synapse=nengolib.synapses.LegendreDelay(theta, order))

    p = nengo.Probe(delay)

sim = nengo.Simulator(model)
with sim:
    sim.run(2.0)
plt.figure(figsize=(10,4))
plt.title(f'LegendreDelay(theta={theta}, order={order})')
plt.plot(sim.trange(), sim.data[p]);
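
As a sanity check on my implementation, the spectral radius of the discretized update matrix Ad can be inspected directly. The continuous A matrix should be Hurwitz (the Padé approximant to a pure delay is stable), so exp(A*dt) should have spectral radius below 1 for any theta and order, meaning the recurrence itself shouldn't be what blows up. A minimal sketch reusing the class above:

# check the spectral radius of Ad = exp(A*dt) for a few (theta, order) pairs;
# a value below 1 means the update x <- Ad x + Bd u cannot diverge on its own
for theta_chk, order_chk in [(0.3, 10), (1.0, 12), (2.0, 20)]:
    syn_chk = LegendreDelay(theta_chk, order_chk)
    Ad_chk = scipy.linalg.expm(syn_chk.A * dt)
    rho = np.max(np.abs(np.linalg.eigvals(Ad_chk)))
    print(f"theta={theta_chk}, order={order_chk}: spectral radius of Ad = {rho:.6f}")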

The implementations seem to behave identically for smaller values of theta and order (the orange and blue lines are right on top of each other):

[figures: plots for smaller theta and order, with the orange and blue lines overlapping]

Any ideas what the difference could be between these implementations?

arvoelke commented 3 years ago

Is the first part of your post using the Nengo implementation or the Nengolib implementation? If it's the latter, I think it is converting back and forth unnecessarily between transfer function and state-space forms. I had some issue or TODO for this. (Replying from phone.)
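
If it is the transfer-function round trip, that would explain why things get worse with larger theta and order: the coefficients of the delay's transfer function span many orders of magnitude at higher orders, and the companion-form realization returned by a tf-to-ss conversion (e.g. scipy.signal.tf2ss) is badly conditioned in that regime, so poles can end up in the right half-plane. A rough sketch of the effect, using only scipy and the LegendreDelay class from the first post (not necessarily the exact path nengolib takes):

import scipy.signal

# round-trip the hand-rolled state space through its transfer function and
# compare the largest real part of the poles before and after
for theta_rt, order_rt in [(0.3, 10), (1.0, 20)]:
    syn_rt = LegendreDelay(theta_rt, order_rt)
    C_rt = syn_rt.get_weights_for_delays(1)      # (1, order) readout of the delayed value
    num, den = scipy.signal.ss2tf(syn_rt.A, syn_rt.B, C_rt, np.zeros((1, 1)))
    A_rt, _, _, _ = scipy.signal.tf2ss(num, den)
    print(f"theta={theta_rt}, order={order_rt}:")
    print("  denominator coefficients span", np.abs(den).min(), "to", np.abs(den).max())
    print("  max Re(pole), original  :", np.linalg.eigvals(syn_rt.A).real.max())
    print("  max Re(pole), round trip:", np.linalg.eigvals(A_rt).real.max())

If the round trip pushes any pole's real part above zero, the reconstructed filter is unstable, which would match the divergence in the plots above.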

arvoelke commented 3 years ago

https://github.com/arvoelke/nengolib/issues/168