-
There appears to be an inconsistency in the documentation regarding GPU usage configuration in AutoGluon. The documentation suggests that setting num_gpus=1 within predictor.fit() should enable GPU us…
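As a minimal sketch of the two configuration shapes in question (assuming `autogluon.tabular` is installed; the kwargs are built as plain dicts here so they can be inspected without a GPU):

```python
# Hedged sketch: two common ways to request one GPU for AutoGluon training.
# NOTE: assumes autogluon.tabular is available in the target environment;
# the dicts below only illustrate the argument shapes.

# Option 1: top-level resource argument on fit()
fit_kwargs_global = {"num_gpus": 1}

# Option 2: per-model resource hint via ag_args_fit
fit_kwargs_per_model = {"ag_args_fit": {"num_gpus": 1}}

# With AutoGluon installed, either would be used like:
#   from autogluon.tabular import TabularPredictor
#   predictor = TabularPredictor(label="target")
#   predictor.fit(train_data, **fit_kwargs_global)
print(fit_kwargs_global, fit_kwargs_per_model)
```

Whether both forms actually route the GPU to every model type is exactly the documented behavior the report questions.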
-
### 1. Issue or feature description
Whenever a pod is started with an nvidia.com/gpu-related resource configuration,
entering bash and running nvidia-smi reports a Segmentation fault.
### 2. Steps to reproduce the issue
1. Install HAMi with the following command:
helm install hami hami-charts/h…
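A small helper can make the crash easier to report: a segfault from `nvidia-smi` surfaces as a negative return code in a subprocess call (SIGSEGV is signal 11). This is a hedged diagnostic sketch, not part of the original repro steps:

```python
# Hedged diagnostic: run `nvidia-smi` inside the pod and report whether it
# crashed. A process killed by a signal yields a negative returncode in
# subprocess.run (e.g. -11 for SIGSEGV).
import shutil
import subprocess

def check_nvidia_smi() -> str:
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found (not inside a GPU pod?)"
    proc = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    if proc.returncode < 0:
        return f"nvidia-smi killed by signal {-proc.returncode} (segfault is signal 11)"
    return f"nvidia-smi exited with code {proc.returncode}"

print(check_nvidia_smi())
```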
-
### Your current environment
```text
The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch…
-
### What is the issue?
When calling llava models from a REST client, setting the temperature causes the Ollama server to hang until the process is killed.
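A hedged repro sketch of such a REST call, with a client-side timeout so the request fails fast instead of blocking forever (the model name `llava`, prompt, and default host/port are assumptions based on a stock local install):

```python
# Hedged repro sketch: POST to Ollama's /api/generate with a temperature
# option. The client-side timeout prevents the caller from hanging even if
# the server does.
import json
import urllib.request

def build_payload(temperature: float) -> dict:
    return {
        "model": "llava",                # assumed model name from the report
        "prompt": "Describe a sunset.",  # placeholder prompt
        "stream": False,
        "options": {"temperature": temperature},  # the option that triggers the hang
    }

def call_ollama(temperature: float, host: str = "http://localhost:11434") -> str:
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(temperature)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())["response"]

# call_ollama(0.7)  # on the affected Windows setup this hangs server-side
```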
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ol…
-
### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
Whatever I type in, if I click the run button, it runs but it doesn't show any outp…
-
Here is the setup of my AKS cluster:
AKS version: 1.29.2
Node pool types: 3 (system pool, general node pool, and GPU pool)
NVIDIA driver plugins tried: NVIDIA device plugin and GPU Operator
OS I…
-
### What Operating System(s) are you seeing this problem on?
Windows
### Which Wayland compositor or X11 Window manager(s) are you using?
_No response_
### WezTerm version
20240812-215703-30345b3…
-
### Your current environment
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS:…
-
_Originally written by **ttvKingDarkTurtle | 76561198296125427**_
Game Version: 1.2.0364
===== System Specs =====
CPU Brand: AMD Ryzen 9 7950X 16-Core Processor
Vendor: AuthenticAMD
GPU …
-
There is a related discussion for Docker in #211.
But for Docker it is expected that the root daemon has access to all GPUs.
In my case, I run Podman within SLURM, which uses cgroups to control access to…