banjaminicc opened this issue 1 year ago
Sorry for the late reply.
On my side (RTX 4090, use_fp16=True), a 960*540 input uses 5230 MB (with TensorRT 10.0) or 5426 MB (with TensorRT 8.6). Similar memory usage is expected on RTX 20-series or newer cards.
Can you check whether the error still occurs on the latest version?
The input resolution is 540p, and this error appears: Error: Failed to retrieve frame 0 with error: NNVISR: failed copy 8294400 bytes of CUDA memory: cudaErrorInvalidValue (invalid argument).
For cycmunet, I'm assuming this means it requires more than 12 GB of VRAM to run and there's nothing I can do about it?
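For what it's worth, `cudaErrorInvalidValue` signals an invalid argument passed to the copy call, not an out-of-memory condition (running out of VRAM would normally surface as `cudaErrorMemoryAllocation`). The byte count in the message also happens to match exactly one 960x540 frame laid out as 4 planes of 32-bit floats, which hints the failure is on a single per-frame transfer rather than a model-sized allocation. The plane count and layout below are my assumption, not confirmed from the NNVISR source:

```python
# Back-of-the-envelope check: does 8294400 bytes equal one 960x540 frame?
# Assumed layout: 4 planes (e.g. RGB + one extra plane), fp32 samples.
width, height = 960, 540
planes = 4            # assumption, not taken from NNVISR
bytes_per_sample = 4  # 32-bit float

frame_bytes = width * height * planes * bytes_per_sample
print(frame_bytes)  # 8294400, matching the size in the error message
```

If that reading is right, the problem is more likely a bad pointer or pitch in the copy (or a driver/TensorRT version mismatch) than insufficient VRAM, so updating to the latest version as suggested above is a reasonable first step.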