-
### 🐛 Describe the bug
I found a bug when trying to run a model with vLLM across 2 GPU MIG partitions. Tracing it back wasn't too hard, but I don't know how the vLLM devs wo…
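For anyone reproducing this setup, a minimal sketch of how MIG instances are usually exposed to a process: the UUIDs below are placeholders, and the real ones come from `nvidia-smi -L`. As far as I know, MIG instances don't support peer-to-peer/NCCL traffic between them, so whether tensor parallelism across two MIG slices can work at all is a separate question from the bug itself.

```shell
# Placeholder UUIDs only -- list the real MIG instances with `nvidia-smi -L`.
nvidia-smi -L 2>/dev/null || true
# Pin a process to two specific MIG slices via CUDA_VISIBLE_DEVICES.
export CUDA_VISIBLE_DEVICES="MIG-00000000-0000-0000-0000-000000000000,MIG-11111111-1111-1111-1111-111111111111"
echo "$CUDA_VISIBLE_DEVICES"
```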
-
Dear ManiSkill PPO Developers,
I hope you are doing well. I am currently working on training a PPO agent using the ManiSkill environment and have encountered a CUDA Out of Memory (OOM) error. I wou…
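For OOM reports like this one, a back-of-the-envelope estimate of the PPO rollout buffer often explains the failure: its size scales linearly with the number of parallel environments. The shapes below (`obs_dim`, `act_dim`, the per-step scalar fields) are hypothetical illustrations, not ManiSkill's actual buffer layout.

```python
# Rough rollout-buffer size for PPO. The field layout here is an assumption
# (obs + actions + 4 scalar fields per env-step, all float32), meant only to
# show that memory scales linearly with num_envs.
def rollout_buffer_bytes(num_envs: int, num_steps: int, obs_dim: int,
                         act_dim: int, dtype_bytes: int = 4) -> int:
    per_step = obs_dim + act_dim + 4  # logprob, reward, done, value
    return num_envs * num_steps * per_step * dtype_bytes

# Halving num_envs halves the buffer, which is often enough to dodge an OOM.
big = rollout_buffer_bytes(num_envs=2048, num_steps=50, obs_dim=42, act_dim=8)
small = rollout_buffer_bytes(num_envs=1024, num_steps=50, obs_dim=42, act_dim=8)
print(big, small)
```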
-
### NVIDIA Open GPU Kernel Modules Version
525.85.05
### Does this happen with the proprietary driver (of the same version) as well?
Yes
### Operating System and Version
Linux Mint 21.1…
-
### Checklist
- [X] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)
- [X] The issue exists on a clean install…
-
Really useful extension. On dev nf4 (RTX 4070, max VRAM for the model set to 9500 MB) there's a great speedup when the model doesn't change between generations.
previously:
100%|████████████████████████████████████████████…
-
### What happened?
**Title**: Feature Request: Add option for explicit Metal device selection on macOS
---
**Description**:
Hi,
I'm using `llama.cpp` on a macOS system with multiple GPUs, …
-
Hello author:
Thank you very much for this amazing work. I'd like to ask about the following:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 44.00 MiB (GPU 0; 23.54 GiB total capacity; 21.31 GiB already allocated; 67.25 MiB free; 22.04 GiB rese…
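With so little headroom (67 MiB free out of 23.54 GiB, most of it already reserved), one mitigation the PyTorch OOM message itself usually points at is capping the allocator's split size to reduce fragmentation. A minimal sketch; the 128 MB value is just an illustrative starting point, not a recommendation:

```python
import os

# Must be set before the first CUDA allocation (ideally before `import torch`),
# otherwise the caching allocator has already been configured.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```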
-
When trying to start processing I get this error:
Updating 668e87f9..c3366a76
error: Your local changes to the following files would be overwritten by merge:
style.css
Please commit your cha…
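The error means the updater's `git pull` would clobber an uncommitted local edit to `style.css`. The usual fix is to stash the edit, update, then reapply it, sketched here in a throwaway repo so nothing in a real install is touched:

```shell
# Demo of the stash-update-pop workflow in a temporary repo.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you
echo "body { color: red; }" > style.css
git add style.css && git commit -q -m "add style.css"
echo "/* local tweak */" >> style.css   # the uncommitted local change

git stash push -q    # park the local change out of the merge's way
# ... `git pull` / the updater would run here without conflict ...
git stash pop -q     # bring the local change back
grep "local tweak" style.css
```

If the local change isn't worth keeping, `git checkout -- style.css` discards it instead of stashing.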
-
With #388 merged, LACT now has basic support for Nvidia GPUs through NVML (NVIDIA Management Library). This issue tracks the feature support for Nvidia.
- [X] Information reporting
- The U…
-
Need to evaluate jupyter hub resource allocation for GPU usage (type, size, etc.). Potentially use Elyra pipelines for configuration (memory, etc) - open ticket in Elyra as support is needed ; also l…