-
I have an RTX 4060 graphics card. How do I deploy a GPU version of the model with this project?
-
# Description
Dockerfiles for SmartSim using GPUs with cuDNN are missing
# Justification
Everyone who wants to use SmartSim with GPUs and Docker
# Implementation Strategy
The Dockerfiles in …
-
Hello,
We're testing gpu-feature-discovery on our DGX machine.
The DGX machine has two types of GPU: one is "NVIDIA-DGX-Display", and the other is "NVIDIA A100-SXM4-80GB"
Currently, `gpu.produc…
-
Hi All,
I have read through a lot of tickets here. I understand the software is optimised for A100s/H100s; however, they are very costly. I am looking to build a single server for a research team and…
-
### Quick summary
RPCS3 instantly crashes with a segfault on Wayland while using a discrete GPU.
### Details
If `QT_QPA_PLATFORM` is set to `wayland` or `wayland-egl` and any other GPU but th…
-
Whenever I try to boot AnotterKiosk on any device with an NVIDIA GPU, it just hangs on a blinking white cursor on a blank, black background.
It boots just fine on non-NVIDIA GPUs, though.
-
I am experiencing limited GPU utilization with an NVIDIA RTX 4000 Ada Generation card while running on Windows 10 (1809).
CPU: AMD EPYC 3251 8-Core Processor 2.5 GHz
RAM: 32GB
GPU: NVIDIA RTX 4000 Ada Gen 2…
-
Hello all,
Thanks for your great work here! When I run using `cudarc`, I get the error:
```
called `Result::unwrap()` on an `Err` value: Cuda(Cuda(DriverError(CUDA_ERROR_NO_DEVICE, "no CUDA-capab…
-
**Describe the bug**
I'm unable to compile with `--features flash-attn` for use with my 1080Ti (on an x86_64 arch linux host).
```console
$ nvidia-smi --query-gpu=compute_cap --format=csv,nohea…
```
-
### What happened?
I have a 3060 12GB and an RX 6800 16GB, with 128GB of RAM.
I can run llama3:70b fine with only the 3060 plugged in. If I insert both GPUs, then llama.cpp attempts to use 24GB of VRAM …
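When several cards are visible, llama.cpp's GPU backends will by default try to spread the model across all of them. The standard way to pin a CUDA application to one card is device masking via environment variables; a hedged sketch, assuming the 3060 enumerates as CUDA device 0 (the RX 6800 is an AMD card, so on a HIP/ROCm build it is governed by the analogous `HIP_VISIBLE_DEVICES` instead):

```shell
# Expose only CUDA device 0 (assumed here to be the 3060) to llama.cpp.
# This is standard CUDA device masking, not a llama.cpp-specific mechanism:
export CUDA_VISIBLE_DEVICES=0
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
# llama.cpp itself also exposes --main-gpu and --tensor-split to control
# how layers are distributed when several devices remain visible.
```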