jardiamj opened this issue 2 years ago
I think we need to schedule an "architecture day" (or maybe "architecture cleanup day") for mid-summer 2022. The Graphitti architecture, at the top level, is greatly improved from BrainGrid. There are some parts of the envisioned architecture that aren't yet fully implemented or universally used (the new Recorder classes, for the former, and OperationManager for all major operation categories, for the latter), but the bones are there. Now we need two related things:

- Detailed cleanup. Some data members and methods are in the wrong classes.
- Improved abstraction and removal of near-duplicate (or unnecessary) members. We need to set things up so that we can add the second, ESCS, simulation domain, and there are still some core members that are only in the neuro classes that need to be generalized in concept and brought in at the top level for specialization within the domain. I think that summation points are an example of this.

We also need a documentation update that provides a page for each subsystem describing the top-level interface/implementation, so that work at the subclass level will be consistent.
I would imagine that @jardiamj, @stiber, @rashwini21, @PoojaPal2021, and Divya (not yet accepted repo invite) should be at that meeting.
Sure, this meeting is absolutely necessary.

Preferred timings:

- Weekdays: mornings 7 to 8:30 AM and 6 to 8 PM, OR
- Weekend: any Saturday; I am available all day, as I will not be at the office.

Thanks, Pooja Pal
Yes, definitely.

I am available mornings or evenings (weekdays) or Saturday (if everybody is okay with that).
We should have this conversation before resolving #136. That issue, as it's written, is quite small, but it connects to how we might design the replacement for dynamic arrays on the CPU, how that datatype will support the desired revision to the Recorder classes, whether we can encapsulate GPU data allocation/deallocation/copying within that datatype, etc. I'll send out some possible meeting times.
OK, here is a summary of our first architecture day whiteboarding. Probably not terribly understandable if you weren't there:
Following up on the previous comment, here is a summary of our meeting.
First of all, this meeting focused on the "middle level" data architecture for variables that:

- may be recorded by `Recorder` classes (whether they are or not being irrelevant; just that this is a possibility).

We have been using the [EventBuffer](https://github.com/UWB-Biocomputing/Graphitti/blob/master/Simulator/Vertices/EventBuffer.h) class to explore some ideas relevant to the above, and this has helped clarify our thoughts regarding some of these issues. In particular, we have been using it to consider architectures for replacing dynamically allocated arrays and for providing a pair of public interfaces: one simulation-facing (providing an array-like interface, plus additional, specialized event queuing methods in this particular case) and one `Recorder`-facing (providing a standard interface that allows an epoch of data to be retrieved and written to a file). This class works on the CPU, and we are close to getting it working with the existing GPU code, which brings us to the first sub-task in the image above (I will be creating and assigning issues for these):
- `EventBuffer` implementation. We are implementing this class first in a way that doesn't require any modification of GPU code. Instead, we are writing CPU-side code to copy to/from the GPU side that translates between the CPU and GPU data structure organizations. #354 #339

We currently view the above as the easiest way to demonstrate replacement of dynamic arrays on the CPU by `vector` (with those vectors' sizes being set at initialization time and never changing during simulation execution) and implementing a generic interface for the `Recorder` classes. This brings us to the next step:

- Moving the GPU code to the `EventBuffer` organization. While this means throwing away the translation-on-copy code previously implemented in #354, it allows us to separately validate code for the CPU and for the GPU, and does not require a detailed understanding of CUDA to accomplish that earlier task. #355

At this point, we will have a specific implementation of a replacement for dynamic arrays with vectors on the CPU, with copying to/from GPU arrays working. We will also have an example implementation of a generic interface for the `Recorder` classes. This brings us to the next step:

- Generalizing the `EventBuffer` class to create a template superclass that isn't specialized, i.e., one that just implements a generic replacement for a CPU dynamic array, with support for array access by the simulation code and a `Recorder`-facing interface. This is the `VB<T>` class in the diagram. #129 #24

We should be able to accomplish this without any additional changes to GPU code (I think), but with changes to the code that copies these variables to/from the GPU. That exercise will be beneficial, because we will then understand where all of the allocation/deallocation/copy code is for all of these variables. This brings us to the next two steps:
This has to do with the `GPUVar<T>` and `X<T>` classes in the diagram. The diagram is attempting to show a separation of code for CPU and GPU builds, with the `GPUVar<T>` class including just the members relevant to GPU simulations. The `X<T>` class then is a class that:
Unanswered questions here include:

- How the existing GPU-side structs relate to `GPUVar<T>` objects. We still need the struct on the GPU side, for that code to access the GPU arrays (otherwise, we would need to pass every GPU array pointer to a CUDA kernel, instead of passing a single struct pointer). We therefore still need to retain the GPU struct address on the CPU side (if only for allocation, initialization, and deallocation). Maybe we duplicate the GPU array addresses within the `GPUVar<T>` objects and the CPU-side struct? Or maybe we have the CPU-side struct be a container that is aware of the `GPUVar<T>` objects it holds, so that it automatically includes a set of GPU addresses corresponding to those objects? This is very preliminary and is just meant to capture such thoughts for future "real work".
I am wondering if GPUModel is the best place for the following device pointers:
https://github.com/UWB-Biocomputing/Graphitti/blob/f4e82884eb9f7273aca729fbb8024f03c350ec5a/Simulator/Core/GPUModel.h#L122-L128
The code for allocating and copying memory between the CPU and GPU is in subclasses of AllEdges and AllVertices, so I wonder if it would be better to place these pointers in their respective subclasses.