NVIDIA / cuda-python

CUDA Python Low-level Bindings
https://nvidia.github.io/cuda-python/

Internal memory allocation management and garbage collection #12

Closed AbdulShabazz closed 2 years ago

AbdulShabazz commented 2 years ago

Great project, guys! I have a single request: instead of exposing the cudaMalloc etc. APIs and then leaving it up to the programmer to perform garbage collection, could the memory management strategy happen internally?

I can already foresee a plethora of issues: programmers forgetting to account for threads left running on the kernel, along with memory leaks, access violations, segmentation faults, etc. These already happen at the operating-system level, but now we introduce the same oversights to the GPU.

On a side note: do you have any plans with the foundries (AMD, Intel, TSMC, etc.) for GPU-based Processor-in-Memory (PIM) architectures, or are these experiments exclusive to the RAM developers (Samsung)?

Regards

vzhurba01 commented 2 years ago

These current bindings try to stay as close to C as possible, but there will be a more "object oriented" API layer built on top of them. The goal of this new layer is to make the APIs much more Pythonic, so some memory management strategy would likely be used.

#9 talks a little about the "object oriented" layer, and the doc has been expanded to mention this too: https://nvidia.github.io/cuda-python/overview.html#future-of-cuda-python

With regard to the side note, I couldn't find a comment on it and don't have an answer.

Closing.