-
When running u2netp, the GPU is not used and the CPU runs at full load.
```
import torch
from PIL import Image
from config.conf import PROJECT_ROOT
from server.rembg.rembg import new_session, remove
rembg…
```
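A minimal check that often explains this symptom, assuming the vendored `server.rembg.rembg` wraps onnxruntime the same way upstream rembg does: if `onnxruntime-gpu` is not installed (or CUDA/cuDNN cannot be loaded), the session silently falls back to the CPU provider. The `providers` argument and the file names below are illustrative assumptions, not taken from the original report.

```
import onnxruntime as ort
from PIL import Image
from server.rembg.rembg import new_session, remove

# If this prints only ["CPUExecutionProvider"], the GPU build of onnxruntime
# is not in use and u2netp will run entirely on the CPU.
print(ort.get_available_providers())

# Assumes new_session() forwards providers to onnxruntime, as upstream rembg does.
session = new_session("u2netp", providers=["CUDAExecutionProvider", "CPUExecutionProvider"])

with Image.open("input.png") as img:      # illustrative file name
    remove(img, session=session).save("output.png")
```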
-
Can infini run on multiple cards? When the request pressure is high, the bra1 memory is not enough.
![image](https://github.com/user-attachments/assets/7d5f817e-8656-4324-9546-e5c088505c1e)
![image…
-
Hello, I will also be adding a T4 today/tomorrow:
```
root@akash-1:~/bin# ./provider-services tools psutil list gpu
{
  "cards": [
    {
      "address": "0000:00:01.0",
      "index": 0,
      "…
```
-
I had an issue when trying to perform a training run on the GPU, which appeared to be caused by the reference and predicted data being stored on different devices, leading to errors like `RuntimeError: ind…
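The usual fix is to move both tensors onto the same device before they meet in the loss or metric computation. A minimal sketch, assuming the predictions come out of the model on the GPU while the reference data was loaded on the CPU (the names and loss function are placeholders, not from the original report):

```
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

predicted = torch.randn(8, 3, device=device)   # e.g. model output already on the GPU
reference = torch.randn(8, 3)                  # e.g. targets loaded on the CPU

# Moving the reference onto the prediction's device avoids the
# cross-device RuntimeError described above.
loss = torch.nn.functional.mse_loss(predicted, reference.to(device))
```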
-
### Project URL
https://pypi.org/project/onnxruntime-gpu/
### Does this project already exist?
- [x] Yes
### New limit
40 GiB
### Update issue title
- [x] I have updated the title.
### Which i…
-
Right now GPU implements Vulkan, Metal, D3D12, and D3D11 backends. Of these APIs, D3D11 is the odd one out because its support for command buffers is awkward. Why do we support it?
The main reaso…
-
Currently, gpu-tracker assumes an NVIDIA GPU and calls nvidia-smi. There should be an additional `gpu_branch` parameter that accepts 'nvidia', 'amd', or `None` (the default). If `None`, no …
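A hypothetical sketch of how such a `gpu_branch` parameter could dispatch; the function name and the exact CLI arguments are assumptions for illustration, not part of gpu-tracker today:

```
import subprocess
from typing import Optional

def query_gpu_memory(gpu_branch: Optional[str] = None) -> Optional[str]:
    """Return raw per-card memory output from the vendor CLI, or None to skip GPU tracking."""
    if gpu_branch is None:
        return None                               # default: no GPU tracking at all
    if gpu_branch == "nvidia":
        cmd = ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader"]
    elif gpu_branch == "amd":
        cmd = ["rocm-smi", "--showmeminfo", "vram"]
    else:
        raise ValueError(f"unsupported gpu_branch: {gpu_branch!r}")
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
```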
-
**Describe the bug**
Cannot disable the PCI GPU after it has been enabled.
**To Reproduce**
Steps to reproduce the behavior:
1. Enable the PCI device addon.
2. Enable the PCI GPU.
3. Disable the PCI GPU.
**Expe…
-
I tested on a server with an A30 GPU and a laptop with an RTX 3060.
I believe I followed all steps in the setup guide.
```
docker run -t --rm --gpus all -v /home/koris/BERTax/in:/in/ fkre/bertax…
-
Details:
NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4
Support for Kepler GPUs was removed from CUDA 12.x. How can I compile torch_xla for GPU with CUDA Version 12.x? (The GPU guide uses CUDA 1…