lava-nc / lava

A Software Framework for Neuromorphic Computing
https://lava-nc.org

Create a new Buffer class that can flexibly combine the functions of the current RingBuffer, Read, and Write. #771

Open · tim-shea opened 11 months ago

tim-shea commented 11 months ago

Issue Number: #770

Objective of pull request: Create a new Buffer class that will dynamically create ports and vars as needed, allowing the user to connect arbitrary inputs, outputs, and vars to pre-allocated numpy memory buffers. This will significantly simplify the I/O package and make it easier to stimulate or record from networks.
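As a sketch of the intended usage (the `Buffer` class, its import path, and the `connect` method are the proposal under review, not an existing Lava API; names and parameters are illustrative):

```python
import numpy as np
from lava.proc.lif.process import LIF
# Hypothetical import path for the proposed class:
from lava.proc.io.buffer import Buffer

num_steps = 100
lif = LIF(shape=(10,))

# One Buffer process would replace separate source/sink processes.
buffer = Buffer(length=num_steps)

# Connecting an InPort would create a matching OutPort on the Buffer,
# backed by a pre-allocated numpy array of input data.
buffer.connect(lif.a_in, init=np.zeros((10, num_steps)))

# Connecting a Var would create a per-timestep record of it, returned
# as a Buffer-owned Var.
v_record = buffer.connect(lif.v)
```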

tim-shea commented 11 months ago

> I may be misunderstanding this. At least for output from a network, this seems to be overlapping with the Monitor Process. Would it make sense to add any output-related functionality to the Monitor?

This is basically a refactor of the code from the current RingBuffer, Read, and Write processes. Read and Write definitely do overlap with Monitor, and it would be interesting to better understand how to eliminate that duplication. I started a refactor on Monitor a few months ago to turn it into a process with proper ports, but didn't make it far enough to commit anything. I'd be very interested to have a discussion to better understand Monitor architecture and try to reconcile these different I/O bits.
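For context, the overlapping Monitor functionality looks roughly like this (a minimal sketch of the current Monitor API, to the best of my understanding):

```python
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg
from lava.proc.lif.process import LIF
from lava.proc.monitor.process import Monitor

num_steps = 50
lif = LIF(shape=(1,))

# Monitor.probe allocates internal memory and records lif.v each timestep.
monitor = Monitor()
monitor.probe(lif.v, num_steps)

lif.run(condition=RunSteps(num_steps=num_steps), run_cfg=Loihi1SimCfg())
data = monitor.get_data()  # nested dict: {process_name: {var_name: array}}
lif.stop()
```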

> I am somewhat confused by the API. It would expose Lava Vars to the application scope, independent of the Process that it was generated from. This may break up the concept of a Process too much. Maybe there is a way to add the Vars to the Buffer Process? I'm guessing you did not do this on purpose - let's discuss why. :)

I don't know what you mean by 'application scope'. The goal here is not to alter the extent to which Vars are exposed, only to simplify the usage of the existing Read and Write processes, which act as input and output buffers for Vars. Are you familiar with Read and the PyRead models in source.RingBuffer? This should replace those.
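Concretely, the pattern Buffer is meant to subsume looks like this today with the existing io RingBuffer processes (a sketch based on the current lava.proc.io API, as I understand it):

```python
import numpy as np
from lava.proc.io.source import RingBuffer as SourceBuffer
from lava.proc.io.sink import RingBuffer as SinkBuffer
from lava.proc.lif.process import LIF

num_steps = 100
input_data = np.zeros((10, num_steps))

# Today: one process to play data into the network, another to record.
source = SourceBuffer(data=input_data)  # emits one column per timestep
lif = LIF(shape=(10,))
sink = SinkBuffer(shape=(10,), buffer=num_steps)

source.s_out.connect(lif.a_in)
lif.s_out.connect(sink.a_in)
# After a run, sink.data holds the recorded output, one column per timestep.
```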

tim-shea commented 11 months ago

> I am somewhat confused by the API. It would expose Lava Vars to the application scope, independent of the Process that it was generated from. This may break up the concept of a Process too much. Maybe there is a way to add the Vars to the Buffer Process? I'm guessing you did not do this on purpose - let's discuss why. :)

Ahh, I think I figured out what you mean by exposing Vars while responding to your other question. In the sense that my application code could end up with a Var called u, yes, I see how that looks like it breaks encapsulation. The thing is, that reference to u is not the same u from the LIF process; it's a buffer into which u (the one from the LIF) gets written on each timestep. We return that Var just to get around a bit of naming ugliness (explained above) by giving the user a handy reference to their buffered data.

Since the buffered version of u is always either input data specified by the user or output data to be manipulated by the user, I don't see any encapsulation issue there. That said, we could in theory not return the Var in the input case, since the initialization data goes in the method call (but then, what if I want to rerun with different data?), and return a Future in the output case, to make the semantics of what you should do with "u" clear.
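To illustrate the distinction (same hypothetical `Buffer` API and import path as in the sketch above):

```python
from lava.proc.lif.process import LIF
from lava.proc.io.buffer import Buffer  # hypothetical import path

num_steps = 100
lif = LIF(shape=(10,))
buffer = Buffer(length=num_steps)  # proposed class from this PR

# The returned Var is a handy alias, but it is NOT lif.u: it is a
# Buffer-owned (10, num_steps) record into which lif.u is written
# on each timestep of a run.
u = buffer.connect(lif.u)
assert u is not lif.u
```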