ozett opened this issue 4 years ago
It seems that the Nano has some limitations in the long run... but even 20 fps seems to be possible:
https://www.dlology.com/blog/how-to-run-ssd-mobilenet-v2-object-detection-on-jetson-nano-at-20-fps/
--- edit 20200630: I had trouble testing TensorFlow object detection on the Nano, but it should work with TensorFlow 1.x ->
I looked a little at AI_dev.py and TPU.py and cannot clearly see whether you ran inference on the Nano itself for your Nano benchmarks. Maybe with TensorRT? And with what model?
Could you point me to the code where you load the specific model on the Nano?
I want to overcome my problems with converting the general TensorFlow model to the Nano platform... thx
edit 20200630: this seems to be the way to go with the Nano, TensorFlow 1.x -> https://github.com/NVIDIA-AI-IOT/tf_trt_models
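For reference, the tf_trt_models workflow boils down to optimizing a frozen TF 1.x graph with TF-TRT before running it on the Nano. A minimal sketch, assuming TensorFlow 1.x built with TensorRT support on the Jetson; the file name `frozen_inference_graph.pb` and the output node names are the usual ones from the TF object detection API, but treat them as placeholders for your own model:

```python
# Hedged sketch: TF-TRT optimization of a frozen TF 1.x detection graph.
# Only runs on a machine with TensorFlow 1.x + TensorRT (e.g. the Nano's
# JetPack TF build); paths and node names are assumptions, not verified here.
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt  # TF 1.x contrib API

# Load the frozen graph produced by the object detection export script
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Replace TRT-compatible subgraphs with optimized TRT engine nodes
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=['detection_boxes', 'detection_scores',
             'detection_classes', 'num_detections'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',     # the Nano's GPU benefits from FP16
    minimum_segment_size=50)

# Save the optimized graph for later inference on the Nano
with tf.gfile.GFile('trt_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```

This is conversion only; the saved `trt_graph.pb` is then loaded like any other frozen graph at inference time.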
No, I've so far only used the Nano with the TPU. It's the best of the IoT-class machines I've tried so far in terms of RTSP decoding. But I've not used any of the Nano GPU for AI.
I'm looking to try YOLO v3 as another add-on to do a final verification before sending an alert, but I have some medical issues to deal with that prevent me from putting much effort into it at the moment.
Hi, thanks for clearing up this matter.
Hope you get well soon! 💐 Then you could try investigating YOLO 😄
I adapted things from this flow for my Node-RED setup and it's working flawlessly: https://github.com/thebigpotatoe/Node-Red-Yolo-Pets-And-People
Hi, I wanted to use the Jetson Nano for inference. While simply testing with the object detection tutorial I discovered that the normal TensorFlow models don't work. Do you have any experience or hints on how to use them on the Nano?
I only found this: https://github.com/AastaNV/TRT_object_detection
But maybe I won't use the Nano if I have to convert the normal models first... any suggestions really appreciated. thx
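If the conversion step is the only blocker: once a TF-TRT-optimized graph exists, running it on the Nano looks like ordinary TF 1.x frozen-graph inference. A sketch, assuming a file `trt_graph.pb` produced by a TF-TRT conversion and the standard object detection API tensor names (`image_tensor`, `detection_boxes`, etc.), which may differ for your model:

```python
# Hedged sketch: inference with a TF-TRT-optimized graph on the Nano.
# Requires TensorFlow 1.x on the device; 'trt_graph.pb' and the tensor
# names are assumptions based on the TF object detection API conventions.
import numpy as np
import tensorflow as tf

# Load the previously optimized graph
graph_def = tf.GraphDef()
with tf.gfile.GFile('trt_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Session() as sess:
    tf.import_graph_def(graph_def, name='')

    # Dummy 300x300 RGB frame; replace with a decoded camera/RTSP frame
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)

    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image})
```

The point is that after the one-time conversion, the inference code stays the same as on a desktop, so the extra effort is mostly front-loaded.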