PasqualePuzio opened this issue 1 month ago (status: Open)
I'm not sure about running the ONNX models on a Mac (M-series chips), but they work very well on my MacBook (M2, 16GB memory) with the original PyTorch weights (all versions), either by running my HF demo locally or by using the notebooks in tutorials.
I'll try ONNX later when I have free time (in two days).
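For reference, the PyTorch path can target the M-series GPU through the MPS backend. A minimal device-selection sketch (the helper name is my own; it falls back to CPU if torch or MPS is unavailable):

```python
def pick_device():
    """Prefer Apple's MPS backend on M-series Macs, then CUDA, then CPU."""
    try:
        import torch
        if torch.backends.mps.is_available():
            return "mps"
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

# usage (assuming `model` is the loaded PyTorch model):
# model = model.to(pick_device())
```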
Thank you. In the meantime I'll try to run the model without ONNX. I'll keep you posted.
Quick update: I've tried starting from one of your tutorials and it works just fine, so the problem seems to be caused by onnxruntime.
Great! When I first tested the ONNX conversion Colab script (see the ONNX conversion part in the model zoo section of the README), it consumed quite a lot of CPU memory, and the 12 GB of Colab memory couldn't hold up for more than a single inference (I was also confused by that).
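When memory use is the suspect, logging peak RSS around each inference call narrows it down quickly. A stdlib-only sketch (the helper name is my own; note `ru_maxrss` is reported in kilobytes on Linux/Colab but in bytes on macOS):

```python
import resource
import sys

def peak_rss_mb():
    """Peak resident set size of this process, in MB."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is bytes on macOS, kilobytes elsewhere
    scale = 1024 * 1024 if sys.platform == "darwin" else 1024
    return rss / scale

print(f"peak RSS before inference: {peak_rss_mb():.1f} MB")
# ... run one ONNX inference here, then log peak_rss_mb() again ...
```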
Hi,
I'm struggling to run the portrait model on my system (Mac Studio, M2 Max chip, 32GB of memory): the ANECompilerService keeps running forever, and I have to kill it manually several times in order to get the expected result.
I've also tried to run the general model, but it crashes due to insufficient memory.
The general-lite model has the same issues as the portrait model.
How much memory is needed in order to run these models?
I'm using onnxruntime.
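The endless ANECompilerService runs suggest onnxruntime is routing the graph through the CoreML execution provider, which compiles the model for the Neural Engine. Forcing the CPU provider sidesteps that compile step. A sketch (the helper function is my own; the provider names are onnxruntime's):

```python
def onnx_providers(use_coreml=False):
    """Execution providers to pass to ort.InferenceSession.

    CoreMLExecutionProvider hands the graph to ANECompilerService for
    Neural Engine compilation, which is what appears to hang here;
    CPUExecutionProvider avoids that step entirely.
    """
    if use_coreml:
        return ["CoreMLExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

# usage (assuming onnxruntime is installed and a model file exists):
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=onnx_providers())
```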