Open 10dimensions opened 3 years ago
@10dimensions how were you able to deploy this on something other than the TX2? How do you recompile the models for other platforms? Would love to hear how this is done, as I'm currently failing at that.
Hi @martinjuhasz, I have set up the model on my PC without the TX2. Are you still interested in knowing more about it?
Hi @martinjuhasz
Yes. I did try converting the .pth model to ONNX, and also to a TF.js graph (+ bins).
On TensorFlow.js (Node runtime), I was able to compile and run it from the CLI. But the frame rate was still very low, around 0.5 fps, as opposed to the expected 20+ fps.
@10dimensions thanks for the info
@niharsalunke yeah, still interested!
@10dimensions can you share the onnx model?
@dpredie Nope, I don't have it at the moment. But I believe PyTorch has a converter built in: https://deci.ai/resources/blog/how-to-convert-a-pytorch-model-to-onnx/
I tried converting the .pt (Torch) model to both .onnx and TF.js formats, to deploy them in the browser as well as on a Node server (on CPU). The inference times average around 1500-1700 ms.
At the same time, I found an iOS example on fastdepth.github.io which averages an excellent 40 fps.
Am I missing anything in my browser/CPU implementations? Is there any additional processing to be done? Thanks.
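One thing worth ruling out before blaming the runtime: benchmarking hygiene. Common causes of misleadingly slow CPU numbers are timing the first (warm-up) call, forgetting `model.eval()`, and running with gradients enabled. A minimal sketch of a fairer measurement, again using a hypothetical stand-in module and an assumed 224x224 input:

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in; substitute the real FastDepth model here.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1).eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    # Warm-up runs: the first calls pay one-time allocation/JIT costs
    # and should be excluded from the timing.
    for _ in range(3):
        model(x)

    n = 10
    start = time.perf_counter()
    for _ in range(n):
        model(x)
    elapsed = time.perf_counter() - start

print(f"avg {1000 * elapsed / n:.1f} ms/frame, {n / elapsed:.1f} fps")
```

The same principle applies in the browser/Node setups: discard the first few `predict` calls and average over many, and in TF.js also check which backend is actually active (e.g. `tf.getBackend()`), since falling back to the plain CPU backend instead of WASM/WebGL or the native Node bindings can easily explain a 1500+ ms frame time.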