-
The library is mostly constexpr, but maybe we can make it better by using cudaMalloc or something like that to allocate heap memory on the GPU?
-
### Describe the feature you'd like to request
It would be nice if recognize could support AMD GPUs, not only NVIDIA CUDA.
### Describe the solution you'd like
It would be nice if recognize could s…
-
### What is the issue?
## Description:
I am using Ollama in a Docker setup with GPU support, configured to use all available GPUs on my system. However, when using the NemoTron model with a simp…
-
How can tequila implement support for CUDA-Q?
CUDA-Q (NVIDIA/cuda-quantum):
- src: https://github.com/NVIDIA/cuda-quantum
- docs: https://nvidia.github.io/cuda-quantum/latest/
- docs: https://nv…
-
### Describe the bug
Hey! I am learning to use SYCL but I encountered a little issue when using `sycl::atomic_ref::exchange`. Things work fine on CPU, but when I switched to GPU even a very simple te…
-
Hi!
I have built the docker image from the provided [Dockerfile with cuda11.6](https://github.com/open-mmlab/OpenPCDet/blob/master/docker/cu116.Dockerfile). Now I am trying to run models on an H10…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch…
-
so I think we're close to supporting CUDA (some people have used screenpipe with CUDA); we have the CUDA libs in the ollama dir
**describe the definition of done**
- [ ] cuda support on windows
- [ ] cuda sup…
-
### Issue type
Support
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.15.0
### Custom code
No
### OS platform and distribution
Ubant…
-
### 🐛 Describe the bug
Currently, torch.cuda doesn't support changing `CUDA_VISIBLE_DEVICES` on the fly; a demo:
```python
import os
import torch
os.environ["CUDA_VISIBLE_DEVICES"] = ""
prin…
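# The demo above is truncated; below is a hedged, GPU-free sketch of the
# usual workaround. The CUDA runtime reads CUDA_VISIBLE_DEVICES once, when
# it initializes, so mutating os.environ afterwards has no effect in the
# current process. Instead, set the variable in a *child* process's
# environment before that process ever touches CUDA.
import os
import subprocess
import sys

env = dict(os.environ, CUDA_VISIBLE_DEVICES="")  # hide all GPUs for the child
out = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('CUDA_VISIBLE_DEVICES'))"],
    env=env, capture_output=True, text=True,
).stdout.strip()
print(repr(out))  # the child sees the masked value from process start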