Closed: AbdulShabazz closed this issue 2 years ago
The current bindings try to stay as close to C as possible, but there will be a more "object oriented" API layer built on top of them. The goal of this new layer is to make the APIs much more Pythonic, so some memory management strategy would likely be used.
With regard to the side note, I couldn't find a comment on it and don't have an answer.
Closing.
Great project, guys! I have a single request: instead of exposing the cudaMalloc etc. APIs and then leaving it up to the programmer to free memory manually, could the memory management strategy happen internally?
I can already foresee a plethora of issues: programmers forgetting to account for kernels left running, along with memory leaks, access violations, segmentation faults, etc. These already happen on the CPU side of operating systems, and now we would introduce the same oversights to the GPU.
On a side note: do you have any plans with the foundries (AMD, Intel, TSMC, etc.) for GPU-based Processor-In-Memory (PIM) architectures, or are these experiments exclusive to the RAM developers (Samsung) only?
Regards