-
# TODO
- [ ] Thread safe memory pool
- [x] Rust FFI layer
- [x] .NET FFI layer
- [x] .NET API for interacting with memory through `Span`
- [x] Rust API for copying memory pool to owned `Vec`
-…
-
```
Optimize ADDA for shared memory hardware (e.g. multi-core processors),
using standard MPI functions and/or OpenMP directives.
```
Original issue reported on code.google.com by `yurkin` on 24 Dec…
-
Actually, rather than doing the whole dance of creating a shared memory mapping on the Wine side in `proto/wine`, we can create a `c:/opentrack-memory-mapping.dat` of the right size and map it from b…
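The file-backed approach above can be sketched with Python's stdlib `mmap`: create a file of the right size, then let each side open and map it independently. This is a minimal sketch, not opentrack's actual code; the path and size are placeholders.

```python
import mmap
import os
import tempfile

SIZE = 4096  # placeholder; the real file would match the shared struct's size

# Create a file of the right size (the analogue of the .dat file).
path = os.path.join(tempfile.mkdtemp(), "mapping.dat")
with open(path, "wb") as f:
    f.truncate(SIZE)

# "Writer" side: map the file and write into it.
with open(path, "r+b") as f:
    writer = mmap.mmap(f.fileno(), SIZE)
    writer[:5] = b"hello"
    writer.flush()

# "Reader" side: an independent mapping of the same file sees the data.
with open(path, "r+b") as f:
    reader = mmap.mmap(f.fileno(), SIZE)
    assert reader[:5] == b"hello"
    reader.close()
writer.close()
```

Because both mappings are backed by the same file, either side can be a different process (or, in the opentrack case, one side inside Wine and one outside).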
-
**Description**
The [docs](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/protocol/extension_shared_memory.html) state that:
`Using shared memory instead of sending t…
-
Since Jetson supports Triton Inference Server, I am considering adopting it.
So, I have a few questions.
1. In an environment where multiple AI models run on Jetson, is there any advantage to …
-
In nginx-module-vts, when using `vhost_traffic_status_filter_by_set_key`, shared memory grows continuously. Is it possible to periodically clean up the data in shared memory, for examp…
-
**Describe the feature**
When training on a COCO format dataset only one copy of the dataset annotations should be loaded into RAM by the primary process and all other GPU process dataloaders should …
-
I’m unable to use `num_workers > 0` in the PyTorch `DataLoader`. The reason is the low amount of shared memory available (64 MB).
Can we redirect or override the path used internally (**/dev/shm**) by PyTorch?
Exampl…
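On Linux, `/dev/shm` is typically a tmpfs whose size is fixed at mount time, so before changing anything it helps to check how much is actually available. A small stdlib sketch (the path is assumed; it returns `None` on platforms without `/dev/shm`):

```python
import os

def shm_free_bytes(path="/dev/shm"):
    """Return free bytes on the tmpfs at `path`, or None if it doesn't exist."""
    if not os.path.isdir(path):
        return None  # e.g. macOS/Windows have no /dev/shm
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

free = shm_free_bytes()
if free is not None:
    print(f"/dev/shm free: {free / 2**20:.1f} MiB")
```

As far as I know the path itself is not overridable from PyTorch; the usual workarounds are enlarging the mount (`mount -o remount,size=8G /dev/shm`, or Docker's `--shm-size` flag) or calling `torch.multiprocessing.set_sharing_strategy("file_system")`, which passes tensors between workers through ordinary temp files instead of `/dev/shm`.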
-
### 🐛 Describe the bug
Hello,
Is the following to be expected for Tensors? Note: I am not using `torch.multiprocessing` and I am not calling `share_memory_`:
outside.py:
```python
def train…
-
Thank you for this awesome library. I really like the idea of LRU caching! I would like to know whether it is possible to have a single (shared) in-memory cache when a Node.js app is running in cluster mode – ea…
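The usual answer to the question above is that cluster workers are separate processes with separate heaps, so an in-process cache is duplicated per worker; a single shared cache means moving it out of the workers into one place they all reach over IPC or the network (e.g. Redis). A minimal Python analogue of that "one cache owner, many workers" pattern, using a stdlib `Manager` as a stand-in for the shared store (the helper name is hypothetical):

```python
from multiprocessing import Manager

# The manager process owns the dict; workers reach it over IPC, so every
# worker sees the same single cache (a stand-in for Redis or memcached).
manager = Manager()
cache = manager.dict()

def get_or_compute(key, compute):
    # Hypothetical helper: fill the shared cache on a miss, serve hits from it.
    if key not in cache:
        cache[key] = compute(key)
    return cache[key]

first = get_or_compute("answer", lambda k: 42)   # miss: computed and stored
second = get_or_compute("answer", lambda k: -1)  # hit: served from the shared dict
manager.shutdown()
```

In Node.js specifically the equivalent choices are an external store, or holding the cache in the primary process and exposing it to workers via `process.send`/message passing; per-worker in-process LRU caches cannot simply be merged.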