-
**Describe the bug**
When the miner starts and the video cards are initialized, the amount of RAM is reported incorrectly for the RTX3080 card.
**To Reproduce**
Steps to reproduce the behavior:
…
-
I use efficientdet_test.py and get the result. It is very slow, but I don't know the reason.
The code is below; I haven't changed much of it. My timing shows most of the time is spent in the backbonenet extracting the…
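Before profiling the backbone, it is worth ruling out a measurement artifact: CUDA launches kernels asynchronously, so a wall-clock timer without `torch.cuda.synchronize()` can charge all the GPU work to whichever line forces a sync. A minimal sketch (the `backbone` module and input `x` below are hypothetical stand-ins, not the repo's actual model):

```python
import time
import torch

# Hypothetical stand-ins: any nn.Module and input tensor on the GPU.
backbone = torch.nn.Conv2d(3, 64, 3, padding=1).cuda().eval()
x = torch.randn(1, 3, 512, 512, device="cuda")

with torch.no_grad():
    # Warm-up: the first calls pay one-time CUDA/cuDNN init costs.
    for _ in range(5):
        backbone(x)
    torch.cuda.synchronize()  # drain queued kernels before starting the clock
    start = time.perf_counter()
    for _ in range(20):
        backbone(x)
    torch.cuda.synchronize()  # wait again, or the timer stops too early
    print(f"{(time.perf_counter() - start) / 20 * 1000:.1f} ms per forward pass")
```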
-
**Describe the bug**
In a system with two RTX3080 cards, Depthmap stops at 95% with a red bar.
If one RTX3080 is disabled in Device Manager, it runs without problems.
This behaviour is not present in 2019.2…
-
Hi there, will you make it possible to dual mine RVN/CFX + Eth on Windows for the RTX 3080 when it is not the primary display adapter, e.g. when it is used solely for mining and integrated graphics is …
-
Time per batch iteration for the same experiment (a Siamese network) setup:

| GPU | Time per batch |
| --- | -------------- |
| RTX2080Ti | 0.50 sec |
| TitanX | 0.69 sec |
| RTX3080 | 0.72 sec |
| RTX 2070 Super | 0.77 sec |

Is it because PyTorch/CUDA e…
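A plausible culprit (an assumption, since the question is truncated) is that the PyTorch build predates Ampere: wheels compiled without sm_86 kernels fall back to slower JIT-compiled PTX on an RTX3080. A minimal sketch to check what the installed build supports and to normalize Ampere-specific benchmarking knobs:

```python
import torch

# Quick check of whether this PyTorch build ships kernels for the card.
# An RTX3080 is compute capability (8, 6); builds predating the CUDA 11.x
# wheels lack sm_86 and fall back to slower JIT-compiled PTX.
print("PyTorch:", torch.__version__, "| CUDA:", torch.version.cuda)
print("Device:", torch.cuda.get_device_name(0),
      "| capability:", torch.cuda.get_device_capability(0))
print("Compiled architectures:", torch.cuda.get_arch_list())

# Ampere-relevant knobs worth pinning when comparing cards (assumption:
# their defaults differ across PyTorch versions, which can skew timings).
torch.backends.cudnn.benchmark = True         # autotune conv algorithms
torch.backends.cuda.matmul.allow_tf32 = True  # TF32 matmuls on Ampere
```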
-
**Describe the bug**
When I ran colmap feature extraction using GPU, I encountered an error as below:
```
...
CuTexImage::BindTexture: the provided PTX was compiled with an unsupported toolchain.
…
```
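This error typically indicates that the PTX embedded in the binary was built with a CUDA toolkit newer than the installed driver supports. As a rough check (an assumption about the cause; the `cuda_versions` helper below is hypothetical and only shells out to the real `nvidia-smi` and `nvcc` tools):

```python
import re
import subprocess

def cuda_versions():
    """Compare the driver's supported CUDA version with the toolkit's."""
    smi = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
    nvcc = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
    driver = re.search(r"CUDA Version:\s*([\d.]+)", smi)
    toolkit = re.search(r"release\s*([\d.]+)", nvcc)
    return (driver.group(1) if driver else None,
            toolkit.group(1) if toolkit else None)

driver, toolkit = cuda_versions()
print(f"driver supports CUDA {driver}, toolkit is CUDA {toolkit}")
# Naive numeric comparison; good enough for major.minor versions.
if driver and toolkit and float(toolkit) > float(driver):
    print("Toolkit is newer than the driver: PTX built with it may fail to load.")
```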
-
My GPU is RTX3080, but when I use the command `sudo sh ./scripts/inference_cogvideo_pipeline.sh`, the following error occurs:
```shell
RuntimeError: CUDA out of memory. Tried to allocate 54.00 MiB (G…
```
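Since the failure is on a 54 MiB allocation, fragmentation and inference-time overhead are the usual suspects. A minimal sketch of generic PyTorch OOM mitigations (not CogVideo's actual API; the tiny `torch.nn.Linear` model is a placeholder for the real pipeline):

```python
import os
import torch

# Limit allocator fragmentation; must be set before CUDA is initialized.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

with torch.no_grad():  # skip autograd buffers at inference time
    # Placeholder model; fp16 roughly halves weight and activation memory.
    model = torch.nn.Linear(8, 8).cuda().half()
    x = torch.randn(1, 8, device="cuda", dtype=torch.float16)
    y = model(x)

torch.cuda.empty_cache()  # return cached blocks to the driver
print(torch.cuda.memory_summary(abbreviated=True))
```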
-
Installation works perfectly. I load into the game and I'm getting 120 fps at 1440p where before I was just scraping 60. Brilliant! But after 15-20 seconds the game crashes, or freezes my PC so I have…
-
Hi,
Using:
- nanominer CUDA 3.3.12
- Nvidia driver 472.12
- Windows up to date: Build 19042.1237, Windows Feature Experience Pack 120.2212.3530.0

Getting errors using the Nvidia GPU without the Nvidi…