Closed · petergro-hub closed this issue 7 months ago
Hi petergro-hub, logs have been sent to the email address registered to your account.
Thank you. Is it possible there is some other background process running that is using GPU memory/CPUs? I tested the image on an exact replica of the provided specs and hardware on GitHub and it seemed to run without issue. Looking at the logs, however, I see the models couldn't be loaded.
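A quick way to check for stray processes holding GPU memory or CPU time (a sketch; the GPU queries assume NVIDIA drivers with `nvidia-smi` available on the machine):

```shell
# List every process currently holding GPU memory (NVIDIA driver required).
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv

# Overall GPU memory usage and utilization summary.
nvidia-smi --query-gpu=memory.used,memory.total,utilization.gpu --format=csv

# CPU-side check: top ten processes by CPU usage (GNU/Linux ps).
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 10
```

If the first query shows processes other than the benchmark container holding GPU memory, that could explain both the load failure and the slowdown.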
Actually, same question here: inference is about 2x slower than on our own V100 PCIe 16GB (limited to 6 CPUs and 60GB RAM), with exactly the expected outputs, since our model involves no randomness.
It's possible there was an issue with the models being loaded, leading to a spike in GPU RAM. I'm still not sure whether that would be enough to cause all models to crash, but I'll close this issue now. Regarding speed: yes, the servers are slower, but I believe that is accounted for.
I'd like the stderr logs for run 422a377c-f7dc-4d16-a483-8239b28c6cbe please
thank you