-
Hi,
I have a multi-node setup with multiple GPUs. I was able to get the cluster up, but I don't see the remaining GPUs from each node. How do I do that? I also observed the error below while using llama…
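If this is llama.cpp's RPC backend, a useful first sanity check is confirming that every node actually enumerates all of its GPUs (e.g. via `nvidia-smi -L`) before wiring them into the cluster. The sketch below is illustrative only: the `parse_gpu_list` helper and the sample output are assumptions for the example, not part of llama.cpp or the NVIDIA tooling.

```python
def parse_gpu_list(output: str) -> list[str]:
    """Extract GPU names from `nvidia-smi -L`-style output.

    Each line looks like:
        GPU 0: NVIDIA GeForce RTX 3080 (UUID: GPU-abc...)
    """
    gpus = []
    for line in output.splitlines():
        if line.startswith("GPU "):
            # keep the name between the first ":" and the "(UUID:" part
            name = line.split(":", 1)[1].split("(")[0].strip()
            gpus.append(name)
    return gpus


# Sample output as it might appear on one node (hypothetical data):
sample = (
    "GPU 0: NVIDIA GeForce RTX 3080 (UUID: GPU-abc)\n"
    "GPU 1: NVIDIA GeForce RTX 3080 (UUID: GPU-def)"
)
print(parse_gpu_list(sample))
```

Running this check per node (for example over SSH) quickly shows whether a GPU is missing at the driver level, as opposed to being visible to the OS but not registered with the cluster.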
-
### What happened?
When using llama.cpp models (e.g., granite-code and llama3) with Nvidia GPU acceleration (nvidia/cuda:12.6.1-devel-ubi9 and RTX 3080 10GB VRAM), the models occasionally return nons…
-
Hi,
I built the source code using the `Makefile` without any changes. I pulled the `1.15.0` tag. The build script completed successfully and produced all the binary files.
But when I try to …
-
A warning upon first running the `whisper` model clued me in to it not using hardware acceleration:
> UserWarning: FP16 is not supported on CPU; using FP32 instead
All I had to do in order to en…
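For context on that warning: Whisper falls back to FP32 whenever the model runs on CPU, because FP16 is only supported on GPU. A minimal sketch of that decision follows; the `choose_fp16` helper is hypothetical (not part of the whisper API), and the commented usage assumes `torch` and openai-whisper are installed.

```python
def choose_fp16(device: str) -> bool:
    """Return True when half precision is safe (CUDA); False on CPU,
    mirroring Whisper's 'FP16 is not supported on CPU' fallback."""
    return device == "cuda"


# Hypothetical usage with openai-whisper:
#   import torch, whisper
#   device = "cuda" if torch.cuda.is_available() else "cpu"
#   model = whisper.load_model("base", device=device)
#   model.transcribe("audio.wav", fp16=choose_fp16(device))

print(choose_fp16("cpu"), choose_fp16("cuda"))
```

If the warning still fires with a GPU present, the usual cause is a CPU-only PyTorch build, so `torch.cuda.is_available()` returns `False`.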
-
### Description
Running simple JAX code resulted in a segmentation fault on a GCP VM.
To reproduce:
- Create an instance with GCP's [cuda12 image](https://console.cloud.google.com/compute/images…
-
### Is there an existing issue for this problem?
- [X] I have searched the existing issues
### Operating system
Windows
### GPU vendor
Nvidia (CUDA)
### GPU model
_No response_
### GPU VRAM
_…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTor…
-
### Bug Description
When following the guide over [here](https://github.com/cyberus-technology/virtualbox-kvm/blob/dev/README.intel-sriov-graphics.md), adapting it to passthrough a dedicated GPU, a…
-
There have been a number of issues and PRs to date related to this, but we now need to get this in order and bring all those efforts up to date. Here is the updated task list for supporting NVIDIA's GP…
-
## Environment
### k8s
k8s 1.23.7
### docker
```bash
root@g007:/var/lib/kubelet# docker -v
Docker version 20.10.24, build 297e128
```
### containerd
```bash
root@g007:/var/lib/kubelet# containerd …