-
rclcpp/include/rclcpp/allocator/allocator_common.hpp has two severe correctness issues in memory allocation:
1. retyped_allocate takes the "size" parameter (which is the size in bytes as defined by rc…
-
**[Original report](https://bitbucket.org/chromiumembedded/cef/issues/3095) by Henri Beauchamp (Bitbucket: Henri Beauchamp).**
----------------------------------------
Greetings,
I recent CEF/Chrom…
-
```
I had the requirement to limit Lua VMs to a maximum memory usage for better
sandboxing. This is actually pretty easy thanks to Lua offering a way to supply
a custom allocator. I extended JNLua a…
```
-
### 🐛 Describe the bug
I am using a custom python based store to address the issue of torch's `TCPStore` running out of file handle resources when running on a large number of GPUs. However, torch …
-
**Describe the bug**
Starting with https://github.com/open-source-parsers/jsoncpp/commit/30170d651c108400b1b9ed626ba715a5d95c5fd2, the library uses memset_s for the secure string allocator.
This…
-
Hi! I met a question when I generated the engine for yolov7 in the GPU Tesla T4. The environment I configured is DeepStream 6.0, TensorRT 8.4.2, CUDA 11.4, and a version of cuDNN compatible with CUDA …
-
I have `rustc` installed via `rustup`. Additionally, I'm using [mimalloc](https://github.com/microsoft/mimalloc) as my allocator, loaded with the `LD_PRELOAD` environment variable. Taking a hello-…
-
### 🐛 Describe the bug
I compiled the Llama3.1 models to executorch: https://huggingface.co/l3utterfly/Meta-Llama-3.1-8B-Instruct-executorch
It seems they use a lot of extra memory during inferenc…