kcINual opened this issue 8 years ago
@kcINual I haven't seen anything related on other websites, but I'd like to know this too. I've also been trying to get py-faster-rcnn to work on the TX1.
Have any of you (@kcINual or @WardBenjamin) been able to test Faster R-CNN on the Jetson TX1? I followed all the build instructions at https://github.com/rbgirshick/py-faster-rcnn, but when I run demo.py (with VGG16_faster_rcnn_final.caffemodel) I get an error. I think it is due to the amount of memory on the Jetson TX1.

error: F0525 19:51:40.571739 15622 math_functions.cu:79] Check failed: error == cudaSuccess (4 vs. 0) unspecified launch failure
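In case anyone else hits the same `unspecified launch failure`: one cheap thing to try before switching models is shrinking the test-time input size so the blobs fit in the TX1's 4 GB. A minimal sketch, assuming the stock `lib/fast_rcnn/config.py` from py-faster-rcnn; the exact values are only starting points to experiment with:

```python
# Sketch: shrink py-faster-rcnn's test-time input size before loading the model,
# so the VGG16 blobs need less GPU memory on the 4 GB TX1. Assumes the stock
# lib/fast_rcnn/config.py is importable (lib/ on PYTHONPATH); the values below
# are only illustrative.
import sys
sys.path.insert(0, 'lib')  # path to your py-faster-rcnn checkout's lib/ directory

from fast_rcnn.config import cfg

cfg.TEST.SCALES = (400,)            # default (600,): target size of the image's shorter side
cfg.TEST.MAX_SIZE = 666             # default 1000: cap on the longer side
cfg.TEST.RPN_POST_NMS_TOP_N = 150   # default 300: proposals kept after NMS

print('TEST.SCALES=%s TEST.MAX_SIZE=%d' % (cfg.TEST.SCALES, cfg.TEST.MAX_SIZE))
# ...then build the net and run im_detect() exactly as tools/demo.py does.
```

These settings trade detection quality for memory, so treat them as a debugging aid rather than a fix.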
@fernandoFernandeSantos Sorry, I can't help with your issue, but it sounds like you've managed (for the most part) to install py-faster-rcnn on the TX1. I'm trying to do the same at the moment; perhaps once I do, I'll be able to help with your problem!
For the installation process, I'm wondering whether you used Anaconda or the default Python. Also, did you install NVIDIA's JetPack 2.2 before installing py-faster-rcnn? Thanks in advance.
Thanks @hengck23, I did what you proposed and everything went fine. @CWOA sorry for the delay, I'm using the default Python. I was actually using JetPack 2.2, but I'm going to update to version 2.3; I'll post here if anything goes wrong with py-faster-rcnn.
@fernandoFernandeSantos You should try a smaller model like ZF, because you only have 4 GB of GPU memory on the Jetson TX1 (see the sketch below).
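For reference, a minimal sketch of loading the ZF model the way `tools/demo.py --net zf` does. The paths follow the stock py-faster-rcnn repo layout and assume the pretrained models were fetched with `data/scripts/fetch_faster_rcnn_models.sh`, so adjust them if your checkout differs:

```python
# Sketch: run detection with the smaller ZF net instead of VGG16, roughly what
# `tools/demo.py --net zf` does. Requires py-faster-rcnn's lib/ on PYTHONPATH
# (demo.py arranges this via _init_paths) and the fetched pretrained models.
import os
import cv2
import caffe
from fast_rcnn.config import cfg
from fast_rcnn.test import im_detect

cfg.TEST.HAS_RPN = True  # use the RPN for proposals, as in demo.py

prototxt = os.path.join(cfg.MODELS_DIR, 'ZF',
                        'faster_rcnn_alt_opt', 'faster_rcnn_test.pt')
caffemodel = os.path.join(cfg.DATA_DIR, 'faster_rcnn_models',
                          'ZF_faster_rcnn_final.caffemodel')

caffe.set_mode_gpu()
caffe.set_device(0)
net = caffe.Net(prototxt, caffemodel, caffe.TEST)

im = cv2.imread(os.path.join(cfg.DATA_DIR, 'demo', '000456.jpg'))
scores, boxes = im_detect(net, im)  # per-box class scores and regressed boxes
print('scores shape: %s, boxes shape: %s' % (scores.shape, boxes.shape))
```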
@fernandoFernandeSantos Old thread, I know. I am working on implementing Faster R-CNN on the TX2, but it seems it cannot run due to some memory-related issues (which is weird). Did you manage to get this running on the TX1?
@JesperChristensen89, I was able to run Faster R-CNN on both boards (TX1 and TX2), using both the VGG16 and ZF models. I do not remember in which forum I found the solution for the memory problem, since I tried lots of options, but you can start from these:
https://devtalk.nvidia.com/default/topic/1004976/faster-r-cnn-on-jetson-tx1/
https://devtalk.nvidia.com/default/topic/974063/jetson-tx1/caffe-failed-with-py-faster-rcnn-demo-py-on-tx1/
I also updated cuDNN to v5.0; I think that could have made a difference (there is a quick version check sketched after the links below).
https://github.com/rbgirshick/caffe-fast-rcnn/issues/14
https://github.com/rbgirshick/py-faster-rcnn/issues/237
https://github.com/rbgirshick/py-faster-rcnn/issues/383
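A quick way to confirm which cuDNN the board actually has after the upgrade is to read the version macros from the header. A small sketch; note that the header path is an assumption (on some JetPack installs it lives under `/usr/include/aarch64-linux-gnu/` instead):

```python
# Sketch: read the cuDNN version macros from the header to confirm the upgrade
# to v5.0 actually took effect. The header path is an assumption and may differ
# between JetPack versions.
import re

CUDNN_HEADER = '/usr/include/cudnn.h'

version = {}
with open(CUDNN_HEADER) as f:
    for line in f:
        m = re.match(r'#define\s+CUDNN_(MAJOR|MINOR|PATCHLEVEL)\s+(\d+)', line)
        if m:
            version[m.group(1)] = m.group(2)

print('cuDNN %s.%s.%s' % (version.get('MAJOR', '?'),
                          version.get('MINOR', '?'),
                          version.get('PATCHLEVEL', '?')))
```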
@fernandoFernandeSantos Hi Fernando, I just wanted to clarify a few points. 1. Did you run Faster R-CNN on the TX1/TX2 board using rbgirshick's pycaffe (py-faster-rcnn)?
Just to see if anyone has tried to port Faster R-CNN to run on the Jetson TX1?
It would be very interesting to see how it performs on this little Tegra X1 powerhouse :)
However, the latest toolkit on the TX1 only supports cuDNN 4.0 RC1 and CUDA 7.0 (7.0.71). I'm wondering whether py-faster-rcnn works with that.
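To double-check what the image actually ships, here is a small sketch that queries the CUDA runtime version through `ctypes`; it only assumes `libcudart` can be found by the loader (the soname may be versioned, e.g. `libcudart.so.7.0`):

```python
# Sketch: ask the CUDA runtime for its version via ctypes. The value is encoded
# as major*1000 + minor*10, so CUDA 7.0 reports 7000. Assumes libcudart is
# visible to the dynamic loader; the library name is a best-effort guess.
import ctypes
import ctypes.util

libname = ctypes.util.find_library('cudart') or 'libcudart.so'
cudart = ctypes.CDLL(libname)

version = ctypes.c_int(0)
status = cudart.cudaRuntimeGetVersion(ctypes.byref(version))
if status != 0:  # cudaError_t: 0 means cudaSuccess
    raise RuntimeError('cudaRuntimeGetVersion failed with error %d' % status)

print('CUDA runtime %d.%d' % (version.value // 1000, (version.value % 1000) // 10))
```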