-
When CUDA is not available at runtime but `cv-rs` is built with the "cuda" feature enabled, the program emits this message: OpenCV(3.4.1) Error: Gpu API call (no CUDA-capable device is detected)…
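A defensive pattern for this situation is to probe for a CUDA device at startup and fall back to a CPU path instead of aborting. A minimal sketch of that dispatch logic, with the probe passed in as a callable (the names here are illustrative, not the actual cv-rs API; in practice the probe would wrap something like OpenCV's `getCudaEnabledDeviceCount`):

```python
def gpu_available(probe_device_count):
    """Return True only if the CUDA probe succeeds and reports >= 1 device.

    `probe_device_count` is any zero-argument callable that returns the
    number of usable CUDA devices (or raises if the driver is missing).
    """
    try:
        return probe_device_count() > 0
    except Exception:
        # Missing driver / no device: treat as "no GPU" rather than crashing.
        return False


def run(probe_device_count, gpu_path, cpu_path):
    """Dispatch to the GPU implementation only when a device is present."""
    if gpu_available(probe_device_count):
        return gpu_path()
    return cpu_path()
```

For example, `run(lambda: 0, gpu_fn, cpu_fn)` selects the CPU path on a machine without a CUDA-capable device, while a probe that raises (as the driver call above does) is treated the same as zero devices.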
-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-mmlab/mmdetection3d/issues) and [Discussions](https://github.com/open-mmlab/mmdetection3d/discussions) but cannot get the expec…
-
### Checklist
- [X] I added a descriptive title
- [X] I searched open reports and couldn't find a duplicate
### What happened?
Hello! After using conda to install CUDA into my environment (via `con…
-
I was following the NeRF Studio documentation for installation and I came across an error while executing this line:
tiny-cuda-nn
After pytorch and ninja, install the torch bindings for tiny-cuda-…
-
### Describe the issue
How can I release all of the GPU memory that an onnxruntime session allocates at creation?
I tried to release the memory with the two code snippets below, but both behave the same.
Setting three breakpoints at con…
-
Has anyone successfully run a qdrant binary on a runpod container? I'm trying to run qdrant on my runpod so that I can have it store my embeddings. Here's my Dockerfile for the build; I install the …
-
Hi nerfstudio guys, thanks for your excellent library!
I have a minor question: it is usually required that the system CUDA toolkit version and the PyTorch runtime CUDA version be consistent to comp…
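The usual compatibility check is to compare the CUDA version PyTorch was built against (`torch.version.cuda`) with the version reported by the system's `nvcc`. A minimal sketch of just the comparison logic, with the two version strings passed in explicitly (the helper names are made up for illustration):

```python
def parse_major_minor(version: str) -> tuple[int, int]:
    """Parse 'major.minor' out of a CUDA version string like '11.8' or '12'."""
    parts = version.split(".")
    major = int(parts[0])
    minor = int(parts[1]) if len(parts) > 1 else 0
    return major, minor


def cuda_versions_match(toolkit: str, torch_cuda: str,
                        strict_minor: bool = True) -> bool:
    """Check whether the system toolkit and the PyTorch runtime CUDA agree.

    With strict_minor=True, both major and minor must match (the safest
    assumption when compiling extensions); otherwise only the major
    version is compared.
    """
    tk = parse_major_minor(toolkit)
    pt = parse_major_minor(torch_cuda)
    if strict_minor:
        return tk == pt
    return tk[0] == pt[0]
```

In practice, `torch_cuda` would come from `torch.version.cuda` and `toolkit` from parsing the `release` line of `nvcc --version` output; a mismatch (e.g. toolkit 12.1 vs. runtime 11.8) is a common cause of extension build failures.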
-
### Describe the issue
How can I pass the obtained Ort::Value to CUDA code for post-processing, such as a .cu file? I tried to replace "Ort::Value &scores = output_tensors.at(0)" with "float* scores …
lmx9 updated 9 months ago
-
### Describe the issue
"MIOpen Error: No invoker was registered for convolution forward." happens when trying to use any model for inference with convolution codes. This is because the caching system…
-
OS is Ubuntu 16.04. Nvidia drivers installed and working fine. Nvidia drivers and CUDA work fine in nvidia-docker. Using driver 384.111 and CUDA 9.0 for testing. Slurm+shifter working fine.
But, u…