Lucas-Hardy opened 4 months ago
I ran into the same problem with Pangu on my device (4060, 32 GB GPU) today, but yesterday the model worked. So weird.
I tried creating a new Python environment: after `pip install ai-models`, I installed onnxruntime via conda, suspecting that the issue might be related to the numpy version (the version that runs smoothly for me is 2.0.0).
I also ran into insufficient GPU memory. I'm wondering if there is any way to decrease the batch size during prediction.
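As far as I know, ai-models does not expose a batch-size option, but since Pangu runs through ONNX Runtime you can cap how much VRAM the CUDA execution provider is allowed to allocate via its `gpu_mem_limit` option. A minimal sketch (the model filename is a placeholder, and the 10 GB limit is just an example value):

```python
# Sketch: cap ONNX Runtime's CUDA memory arena.
# Assumes onnxruntime-gpu is installed; "pangu_weather_24.onnx" is a placeholder path.
cuda_options = {
    "gpu_mem_limit": 10 * 1024 ** 3,          # limit the CUDA arena to ~10 GB
    "arena_extend_strategy": "kSameAsRequested",  # grow only by what is requested
}
providers = [
    ("CUDAExecutionProvider", cuda_options),
    "CPUExecutionProvider",  # fall back to CPU if CUDA allocation fails
]
# session = onnxruntime.InferenceSession("pangu_weather_24.onnx", providers=providers)
```

Whether this helps depends on the model's peak working set: if a single forward pass genuinely needs more VRAM than the card has, a lower arena limit will just fail earlier.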
Hi
I'm trying to set up GraphCast and Pangu to run on a 3060 12GB GPU and am getting memory allocation errors for both models.
Pangu:
GraphCast:
I am using CUDA 12.4 with Pangu and 12.3 with GraphCast; I have tried CUDA 11 and it does not recognise my GPU. I am using cudnn=8.9.7.29. I have also tried setting XLA_PYTHON_CLIENT_PREALLOCATE=false, setting XLA_PYTHON_CLIENT_MEM_FRACTION to smaller values, and setting XLA_PYTHON_CLIENT_ALLOCATOR=platform. The model also runs fine on the CPU, just very slowly. Is there a fix for this, or does my GPU simply not have enough VRAM?
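For reference, the XLA settings mentioned above can also be applied from inside Python, as long as they are set before JAX is imported (a sketch; the values shown are examples, not known-good settings for a 12 GB card):

```python
import os

# XLA reads these variables at import time, so set them before `import jax`.
os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"   # allocate on demand
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.8"    # cap at 80% of VRAM
os.environ["XLA_PYTHON_CLIENT_ALLOCATOR"] = "platform"  # free memory when unused

# import jax  # would pick up the settings above
```

Note that these flags only change how JAX manages the memory pool; they cannot shrink the model's actual peak memory requirement, which is why they may not be enough on a 12 GB card.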
Thanks.