Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
MIT License
MiDaS 2.1 TFLite fp16 with Core ML Delegate gets wrong results #264
I have converted the MiDaS 2.1 ONNX model to TFLite float32 and float16 using onnx2tf.
I tried both models with the GPU (Metal delegate) and the NPU (Core ML delegate) on an iPhone 15 Pro Max. Everything works except the NPU (Core ML delegate) with the float16 model, which produces a strange result, shown below:
The original image
Result using the Core ML delegate with the float16 model.
Result using the Metal delegate with the float16 model.
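For context, the conversion step was essentially the following (a sketch, with a placeholder model filename; onnx2tf writes float32 and float16 TFLite files by default):

```shell
pip install onnx2tf

# Converts the ONNX model to a TensorFlow SavedModel and emits
# float32 and float16 .tflite files into the output directory.
onnx2tf -i midas_v21.onnx -o midas_tflite
```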
Since the official TensorFlow Core ML delegate documentation says it supports both float16 and float32 models: has anyone run an fp16 model on the iOS NPU (Core ML delegate) with correct results, or hit the same issue? The model I converted to fp16 is here
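One thing worth ruling out before blaming the delegate: IEEE float16 overflows above 65504, so if MiDaS's inverse-depth outputs or intermediate activations exceed that range, a backend that executes natively in fp16 can produce inf/garbage while a backend that keeps fp32 accumulators looks fine. A quick numpy check on the fp32 model's output tensors (values here are illustrative, not from the model):

```python
import numpy as np

FP16_MAX = np.finfo(np.float16).max  # 65504.0

def fp16_safe(x: np.ndarray) -> bool:
    """True if every value of x fits in float16 without overflowing to inf."""
    x = np.asarray(x, dtype=np.float32)
    return bool(np.all(np.abs(x) <= FP16_MAX))

# Hypothetical activation magnitudes: the first array is representable
# in fp16, the second overflows to inf when cast down.
print(fp16_safe(np.array([0.0, 1500.0, 65000.0])))  # True
print(fp16_safe(np.array([70000.0])))               # False
```

If the fp32 outputs are safely in range, the problem is more likely in how the Core ML delegate lowers specific ops than in fp16 range itself.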