Enuriru opened 1 month ago
Does this happen with the fp32 model too? The fallback seems to error out on receiving an fp16 tensor, which shouldn't happen if you use the fp32 model.
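For reference, the op named in the error can be reproduced without the node at all. This is a minimal sketch (illustrative shapes, not the node's actual tensors): `F.interpolate(mode="bicubic")` dispatches to `aten::upsample_bicubic2d`, the op the MPS backend is missing.

```python
import torch
import torch.nn.functional as F

# Minimal reproduction of the op in question. Shapes are illustrative only.
x = torch.randn(1, 3, 32, 32, dtype=torch.float32)  # fp32 input works
y = F.interpolate(x, scale_factor=2, mode="bicubic", align_corners=False)
print(tuple(y.shape))  # (1, 3, 64, 64)
```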
Yes, it works with fp32, thanks! I'm pretty new to this stuff; is there a way to run the fp16 model? Trying to save a bit more VRAM :)
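One possible middle ground, sketched below. This is not the node's actual code, just an illustration: keep tensors in fp16 to save VRAM, but upcast to fp32 only around the bicubic upsample, since the fallback reportedly errors out when it receives an fp16 tensor.

```python
import torch
import torch.nn.functional as F

# Sketch: fp16 everywhere except the op that lacks fp16 support.
x = torch.randn(1, 3, 8, 8).half()  # fp16 elsewhere in the pipeline
y = F.interpolate(
    x.float(),  # upcast only for this op
    scale_factor=2, mode="bicubic", align_corners=False,
).half()  # back to fp16 afterwards
print(y.dtype, tuple(y.shape))  # torch.float16 (1, 3, 16, 16)
```

Doing this would require patching the node's code where it calls the upsample, so running the fp32 model remains the simpler workaround.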
I saw Apple released a model trained for Core ML; will this be supported in a future version of this custom node pack? Model: https://huggingface.co/apple/coreml-depth-anything-v2-small
I don't have any Apple devices, so probably not by me.
I'm trying to run the node on an M1 MacBook Pro and getting this error:
Error occurred when executing DepthAnything_V2: The operator 'aten::upsample_bicubic2d.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable
PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
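The flag has to be set before torch initializes, so either export it in the shell before launching ComfyUI or set it at the very top of the launch script. A minimal sketch (assuming `main.py` is the entry point; that name is an assumption):

```python
import os

# Must run before `import torch` anywhere in the process, otherwise the
# MPS backend is already initialized and the flag has no effect.
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")
print(os.environ["PYTORCH_ENABLE_MPS_FALLBACK"])  # 1
```

Equivalently from the shell: `PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py`.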
With the fallback activated, Comfy crashes with this error:
Any ideas? Thanks in advance!