-
### 🚀 The feature, motivation and pitch
### Description:
I'm currently integrating the PyTorch allocator with the RMM allocator. I rely on `torch.cuda.max_memory_allocated` and `torch.cuda.reset_p…
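`torch.cuda.max_memory_allocated` and `torch.cuda.reset_peak_memory_stats` conceptually track a running peak of allocated bytes that resets to the *current* usage. A minimal CPU-only sketch of that bookkeeping (class and method names are illustrative, not the PyTorch internals):

```python
class PeakTracker:
    """Toy allocate/free bookkeeping with a resettable peak, mirroring
    what max_memory_allocated / reset_peak_memory_stats report."""

    def __init__(self):
        self.current = 0
        self.peak = 0

    def alloc(self, nbytes):
        self.current += nbytes
        self.peak = max(self.peak, self.current)

    def free(self, nbytes):
        self.current -= nbytes

    def max_allocated(self):
        return self.peak

    def reset_peak(self):
        # After a reset, the peak restarts from current usage,
        # as torch.cuda.reset_peak_memory_stats does.
        self.peak = self.current

t = PeakTracker()
t.alloc(100); t.alloc(50); t.free(120)
print(t.max_allocated())  # 150
t.reset_peak()
print(t.max_allocated())  # 30
```

A pluggable backend such as RMM has to maintain equivalent counters itself for these statistics to stay meaningful.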
-
A project-oriented memory-allocator engine:
* A set of configuration variables is defined before the build to match the expectations of the firmware, e.g.:
* a DMA-capable memory buffer for a TF…
-
While working on the new allocation scheme for structEntry, I've run into the need for a persistent dataGet function (i.e., one that does not clear its data).
Example code demonstrating the use case:
``…
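The distinction described above, a consuming read versus a persistent one, can be sketched as follows (all names here are hypothetical, not the actual structEntry API):

```python
class Entry:
    """Toy stand-in for an entry whose data can be read with or
    without clearing it."""

    def __init__(self, data):
        self._data = data

    def data_get(self):
        # Consuming read: returns the data and clears it.
        d, self._data = self._data, None
        return d

    def data_peek(self):
        # Persistent read: returns the data, leaving it in place.
        return self._data

e = Entry(b"payload")
print(e.data_peek())  # b'payload' -- still present afterwards
print(e.data_get())   # b'payload' -- cleared afterwards
print(e.data_peek())  # None
```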
-
The current memory allocator we use is very primitive: it performs poorly and is prone to fragmentation. Rather than designing our own allocator from scratch (which would be a waste of…
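The fragmentation problem with a primitive allocator can be shown concretely: after an alternating alloc/free pattern, enough bytes are free in total, yet no single hole is large enough. A toy first-fit allocator with no hole coalescing (a sketch, not the allocator discussed above):

```python
class FirstFitAllocator:
    """Toy first-fit allocator over a fixed arena, illustrating external
    fragmentation: free space exists but is not contiguous."""

    def __init__(self, size):
        self.free = [(0, size)]  # list of (offset, length) holes

    def alloc(self, n):
        for i, (off, length) in enumerate(self.free):
            if length >= n:
                if length == n:
                    self.free.pop(i)
                else:
                    self.free[i] = (off + n, length - n)
                return off
        return None  # no single hole large enough

    def release(self, off, n):
        # Naive free: no coalescing of adjacent holes.
        self.free.append((off, n))

a = FirstFitAllocator(100)
blocks = [a.alloc(20) for _ in range(5)]  # fill the arena
for b in blocks[::2]:
    a.release(b, 20)                      # free every other block
print(sum(n for _, n in a.free))          # 60 bytes free in total...
print(a.alloc(40))                        # ...but a 40-byte request fails: None
```

Mature allocators attack exactly this with coalescing, binning, and splitting policies, which is the argument for adopting one instead of rolling our own.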
-
In the aarch64 architecture, atomic instructions have alignment requirements for memory addresses. If these requirements are not met, a bus error may occur.
In C++ code, most objects are created in t…
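The requirement can be stated arithmetically: an address is suitably aligned for an N-byte atomic when it is a multiple of N, and a misaligned address is rounded up with the usual power-of-two mask trick. A minimal sketch of the arithmetic an allocator must apply (pure Python arithmetic, not actual aarch64 code):

```python
def is_aligned(addr, n):
    """True if addr is a multiple of n (n must be a power of two)."""
    return addr & (n - 1) == 0

def align_up(addr, n):
    """Round addr up to the next multiple of n -- the adjustment needed
    before placing, e.g., an 8-byte atomic at that address."""
    return (addr + n - 1) & ~(n - 1)

print(is_aligned(0x1004, 8))  # False: 0x1004 is only 4-byte aligned
print(align_up(0x1004, 8))    # 0x1008
```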
-
### 🐛 Describe the bug
I run the following code in a notebook:
```
import torch
import time
# Reserve 2G GPU RAM
mem_ptr = torch.cuda.caching_allocator_alloc(1024*1024*1024 * 2)
# Sleep 5…
-
### 🔎 Search before asking
- [X] I have searched the PaddleOCR [Docs](https://paddlepaddle.github.io/PaddleOCR/) and found no similar bug report.
- [X] I have searched the PaddleOCR [Issues](https://…
-
Currently our memory estimation works only with `torch`'s default memory allocator; we should make sure it also works with the RMM allocator plugged in.
E.g., ensure the below works for RMM-backed PyTorch a…
-
Is there a clean way to force Galois to use the glibc memory allocator (malloc/new) instead of the Galois memory allocators?
-
Useful for, say, Redox and CGC.