dwofk / fast-depth

ICRA 2019 "FastDepth: Fast Monocular Depth Estimation on Embedded Systems"
MIT License

Performance of the model on mobile/browser devices #46

Open 10dimensions opened 3 years ago

10dimensions commented 3 years ago

I tried converting the .pt (PyTorch) model to both .onnx and tfjs formats, in order to deploy it in the browser as well as on a Node server (on CPU).

However, the inference speeds average around 1500-1700 ms per frame.

At the same time, I found an iOS example on fastdepth.github.io that averages an excellent 40 fps.

Am I missing anything in my browser/CPU implementations? Is any additional processing needed? Thanks
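
For reference, a minimal sketch of the PyTorch → ONNX export path described above. The checkpoint filename and the `'model'` key are assumptions based on the repo's conventions (see `main.py`) and may need adjusting for your setup; the 224x224 input matches the paper's NYU Depth v2 configuration.

```python
# Minimal PyTorch -> ONNX export sketch for a FastDepth checkpoint.
# Run from the repo root so the pickled model class can be resolved.
import torch

# Assumed checkpoint name; the repo's .pth.tar files store the full
# nn.Module under the 'model' key (as loaded in main.py).
checkpoint = torch.load("mobilenet-nnconv5dw-skipadd-pruned.pth.tar",
                        map_location="cpu")
model = checkpoint["model"]
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # NCHW, 224x224 RGB input
torch.onnx.export(model, dummy_input, "fastdepth.onnx",
                  input_names=["input"], output_names=["depth"],
                  opset_version=11)
```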

martinjuhasz commented 3 years ago

@10dimensions how were you able to deploy this on something other than the TX2? How do you recompile the models for other platforms? I'd love to hear how this is done, as I'm currently failing at it.

niharsalunke commented 3 years ago

Hi @martinjuhasz, I have set up the model on my PC without a TX2. Are you still interested in knowing more about it?

10dimensions commented 3 years ago

Hi @martinjuhasz

Yes. I did try converting the .pth model to ONNX, and also to a tfjs graph (+ bins).

With tensorflowjs (Node runtime), I was able to compile and run it from the CLI, but the frame rate was still very low, around 0.5 fps, as opposed to the expected 20+ fps.
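
For anyone trying to reproduce these numbers, here is a rough sketch for timing the exported model on CPU with ONNX Runtime, so the ms-per-frame figure can be compared directly. The `fastdepth.onnx` filename and the `input` tensor name are assumptions carried over from the export sketch above.

```python
# Rough CPU latency benchmark for the exported ONNX model.
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("fastdepth.onnx",
                            providers=["CPUExecutionProvider"])
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Warm up so one-time initialization is excluded from the average.
for _ in range(5):
    sess.run(None, {"input": x})

n_runs = 50
start = time.perf_counter()
for _ in range(n_runs):
    sess.run(None, {"input": x})
elapsed_ms = (time.perf_counter() - start) / n_runs * 1000
print(f"avg latency: {elapsed_ms:.1f} ms (~{1000 / elapsed_ms:.1f} fps)")
```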

martinjuhasz commented 3 years ago

@10dimensions thanks for the info

@niharsalunke yeah, still interested!

dpredie commented 2 years ago

@10dimensions can you share the onnx model?

10dimensions commented 2 years ago

@dpredie No, I don't have one at the moment. But I believe PyTorch has a built-in converter: https://deci.ai/resources/blog/how-to-convert-a-pytorch-model-to-onnx/
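
If you export one yourself, a quick sanity check on the result, assuming the `fastdepth.onnx` filename from the sketch earlier in the thread:

```python
# Validate an exported ONNX graph and inspect its expected input shape.
import onnx

model = onnx.load("fastdepth.onnx")
onnx.checker.check_model(model)  # raises if the graph is malformed
print(model.graph.input[0].type.tensor_type.shape)
```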