-
Hi, I'm very interested to see this happening. Is there any chance this will support CUDA.jl?
-
How can tequila implement support for CUDA-Q? (A rough sketch of the CUDA-Q Python API follows the links below.)
CUDA-Q (NVIDIA/cuda-quantum):
- src: https://github.com/NVIDIA/cuda-quantum
- docs: https://nvidia.github.io/cuda-quantum/latest/
- docs: https://nv…
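I'm not sure what the cleanest integration path is, but for context here is a minimal sketch of the CUDA-Q Python API as I understand it (assuming the `cudaq` pip package; the Bell-state kernel and shot count are just illustrative). Presumably a tequila backend would translate its circuit objects into kernels like this and call `cudaq.sample` / `cudaq.observe`:
```python
import cudaq

@cudaq.kernel
def bell():
    # Allocate two qubits and prepare a Bell state.
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    # Measure all qubits in the computational basis.
    mz(qubits)

# Sampling dispatches to whatever target is currently selected
# (CPU/GPU simulators or hardware backends).
counts = cudaq.sample(bell, shots_count=1000)
print(counts)
```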
-
### First, confirm
- [X] I have read the [instruction](https://github.com/Gourieff/comfyui-reactor-node/blob/main/README.md) carefully
- [X] I have searched the existing issues
- [X] I have updated t…
-
Was this supposed to have a CUDA version too?
-
I tried to run the 'test_net.py' file and got this error:
return self._aggregate(person_boxes, query, person_key, object_key, hand_key, mem_key)
File "/media/data4/home/HIT/hit/modeling/roi_heads/ac…
-
Hi,
First, I want to thank you for creating this repository -- it looks extremely useful.
I am currently trying to run turbozero on a LambdaLabs GPU instance running CUDA 12. I followed your i…
-
Hi,
Thank you for this amazing work. I wanted to set up a conda environment to run inference with the model you provided, but I keep running into issues. I've followed the steps in the Dockerfile a…
-
Fix the warning:
```
FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
```
by making the following replacement everywhere in the codebase:
```di…
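For anyone picking this up, a minimal before/after sketch (with a placeholder `model` and input `x`; the module and shapes are arbitrary):
```python
import torch
import torch.nn as nn

model = nn.Linear(8, 4).cuda()
x = torch.randn(2, 8, device="cuda")

# Old form, deprecated in recent PyTorch releases (triggers the FutureWarning above):
# with torch.cuda.amp.autocast():
#     y = model(x)

# New device-agnostic form:
with torch.amp.autocast("cuda"):
    y = model(x)
```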
-
Hello author,
I hope you're doing well. I'm encountering an issue that seems to be related to KNN, but it's peculiar in that the error only occurs when I run the program in debug mode; it doesn't h…
-
Following the instructions to clone and build the whisper docker image, I get this error on my Ubuntu 24.04 laptop with an NVIDIA RTX A2000 GPU:
```bash
$ docker run --gpus all -it -v ${PWD}/mode…