Hi all, I have a question to discuss with anyone interested in mobile applications. I have trained a caffemodel and the results are good (~35 FPS) when I test it on my desktop, which is equipped with an Nvidia Titan Xp. But when I move all of this work to an Nvidia TX2, it runs, but detection is slow (~2 FPS). To solve this problem, I have an idea: since I already have the trained model, I could reimplement the forward pass as standalone C++/Python code and get rid of the Caffe/Caffe2 framework entirely. Has anyone done this kind of thing before? Do you think it's workable, and could it improve detection speed? Any discussion is appreciated.
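To make the idea concrete, here is a minimal sketch of what a framework-free forward pass might look like for a single conv + ReLU layer, assuming the weights have already been exported from the caffemodel into plain numpy arrays (the function names and the export step are my own assumptions, not anything Caffe provides):

```python
import numpy as np

def conv2d_forward(x, w, b, stride=1, pad=0):
    """Naive NCHW convolution forward pass with no framework dependency.

    x: input  (N, C, H, W)
    w: filters (F, C, KH, KW) -- e.g. exported from a trained caffemodel
    b: biases (F,)
    """
    n, c, h, wdt = x.shape
    f, _, kh, kw = w.shape
    x_p = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    oh = (h + 2 * pad - kh) // stride + 1
    ow = (wdt + 2 * pad - kw) // stride + 1
    out = np.zeros((n, f, oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            # Correlate each (C, KH, KW) patch with all F filters at once.
            patch = x_p[:, :, i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[:, :, i, j] = np.tensordot(patch, w, axes=([1, 2, 3], [1, 2, 3])) + b
    return out

def relu(x):
    return np.maximum(x, 0)

# Dummy shapes standing in for one layer of a detection net.
x = np.random.rand(1, 3, 8, 8).astype(np.float32)
w = np.random.rand(4, 3, 3, 3).astype(np.float32)
b = np.zeros(4, dtype=np.float32)
y = relu(conv2d_forward(x, w, b, pad=1))
print(y.shape)  # (1, 4, 8, 8)
```

This only shows that the math itself is framework-independent; whether a hand-rolled version like this actually runs faster than Caffe's GPU kernels on the TX2 is exactly the question I'm asking.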
Thx~