Closed veyorokon closed 3 weeks ago
Oh, it could be that you're using an RTX 6000 (non-Ada), which is different from the RTX 6000 Ada. They have similar names, but one is Ada generation and the other is from the previous generation, which has compute capability 8.6.
gotcha - was wondering about that possibility - ty
Description
When using the flux-fp8-api with the configuration configs/config-dev-1-RTX6000ADA.json on an RTX 6000, I receive a RuntimeError about torch._scaled_mm being unsupported due to compute capability requirements. My environment uses the Docker image runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04.

Docker Image: runpod/pytorch:2.4.0-py3.11-cuda12.4.1-devel-ubuntu22.04
Error Details
Relevant Configuration Path
Has anyone encountered this before?
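For anyone else hitting this: FP8 matmul via torch._scaled_mm requires compute capability 8.9 or higher (Ada Lovelace / Hopper), and the non-Ada RTX 6000 is sm_86, so the Ada config fails on it. Below is a minimal sketch of a capability check you could run before loading the Ada config; the helper name is hypothetical and not part of flux-fp8-api.

```python
def supports_fp8_scaled_mm(capability=None):
    """Return True if the (current or given) GPU can run FP8 torch._scaled_mm.

    FP8 matmul requires compute capability >= 8.9 (Ada / Hopper). The
    non-Ada RTX 6000 reports sm_86, which is why the Ada config raises
    a RuntimeError on that card. `capability` may be passed explicitly
    as a (major, minor) tuple for testing without a GPU.
    """
    if capability is None:
        import torch  # lazy import: only needed when querying the live device
        if not torch.cuda.is_available():
            return False
        capability = torch.cuda.get_device_capability()
    return tuple(capability) >= (8, 9)

# RTX 6000 Ada (sm_89) qualifies; last-gen RTX 6000 (sm_86) does not.
print(supports_fp8_scaled_mm((8, 9)))  # True
print(supports_fp8_scaled_mm((8, 6)))  # False
```

With no argument, the helper queries the current device via torch.cuda.get_device_capability(), so it can gate which config file (Ada vs. non-Ada) gets loaded.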