-
The current local implementation supports no sharding/tensor parallelism, etc., and refuses to work on my dual 4090 setup. How do I enable multi-GPU, or how do I enable a proper system like vLLM to run…
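For reference, vLLM shards a model across GPUs via a single flag. A minimal sketch for a dual-GPU box, assuming vLLM is installed (the model name and port below are illustrative placeholders, and exact flags can vary by vLLM version):

```shell
# Serve a model across both GPUs with vLLM's OpenAI-compatible server.
# --tensor-parallel-size 2 shards the weights across the two cards.
# Model name and port are placeholders, not from the original report.
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-3.1-8B-Instruct \
    --tensor-parallel-size 2 \
    --port 8000
```

Note that tensor parallelism requires the GPUs to be visible to the same process; a model too large for the combined VRAM will still fail to load.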
-
Running an _Arch Linux_ setup with dual GPUs
Arch Linux
Kernel 6.6.2-arch1-1
GNOME Version 45.1
NVIDIA driver version 545.29.6
**personal project**
When targeting the GTX 970:
`vkCreat…
-
### Checklist
- [X] The problem is not listed in the [hardware support matrix](https://github.com/ilya-zlobintsev/LACT?tab=readme-ov-file#hardware-support) as a known limitation
- [X] I've included a…
-
## Bug description
I've experienced the following inconsistency between GPU and CPU gradient computation for `sum(abs, _)`.
```julia-repl
julia> using Zygote, CUDA
julia> rl, cplx = [0.0f0],…
-
When using PRIME, the following command works for offloading Zwift onto the NVIDIA GPU in Docker.
```
docker run -d \
-e __NV_PRIME_RENDER_OFFLOAD=1 \
-e __GLX_VENDOR_LIBRARY_NAME=nvidia \
…
-
### Homepage
https://github.com/bayasdev/envycontrol/releases
### Why should this be included in the repository?
Solus is a great Linux distro and I use it on almost all my computers, but the…
-
**Is your feature request related to a problem? Please describe.**
As a researcher, I rely heavily on Windows due to its rich ecosystem of software that's invaluable for my work. However, I find Linux…
-
I'm looking to set up memories on a Raspberry Pi 5. The VideoCore VII GPU supports OpenGL ES 3.1 and Vulkan 1.2, and it has a dual 4Kp60 HEVC decoder. If it's being used headless with no HDMI output, …
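As a quick sanity check for the headless case (a sketch, assuming the `vulkan-tools` package is installed on the Pi), it can help to confirm the VideoCore VII driver enumerates without a display attached:

```shell
# List the Vulkan devices the driver exposes; vulkaninfo is part of
# the vulkan-tools package. Run over SSH with no HDMI connected.
vulkaninfo --summary | grep -i deviceName
```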
-
### System Info
TGI Docker Image: ghcr.io/huggingface/text-generation-inference:sha-11d7af7-rocm
MODEL: meta-llama/Llama-3.1-405B-Instruct-FP8
Hardware used:
Intel® Xeon® Platinum 8…
-
Hi! Great tool!
I attempted to use the autoeval feature on a dual RTX 3090 setup in RunPod, but it appeared that only the first GPU was utilized throughout the evaluation process.
I'm uncertain …
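As a first diagnostic (a hedged sketch; `run_autoeval.py` below is a placeholder for the tool's actual entrypoint, which the report doesn't name), it can help to confirm the pod exposes both GPUs and to pin them explicitly:

```shell
# Confirm the RunPod container actually sees both 3090s.
nvidia-smi --query-gpu=index,name,memory.total --format=csv

# Pin both devices explicitly; many tools default to GPU 0 otherwise.
# "run_autoeval.py" is a hypothetical entrypoint name, not from the report.
CUDA_VISIBLE_DEVICES=0,1 python run_autoeval.py
```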