PRBonn / bonnet

Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics.
GNU General Public License v3.0
325 stars 89 forks

Question about deployment #27

Closed haopo2005 closed 6 years ago

haopo2005 commented 6 years ago

Hi, I have run the train and inference scripts of bonnet successfully on an x86 server and PC. I also saw that you have tested on the Jetson TX2. I'd like to know the easiest way to deploy your code on an ARM platform (NVIDIA PX2, or other GPU devices). Do you have more deployment details about nvidia-docker? And if I make some small changes to your code, how do I wrap them into a new docker image?

tano297 commented 6 years ago

Hi there,

In order to infer on the Jetson, no special setup or docker container is needed. Just install ROS, put the code in your catkin workspace, build, and voilà! TensorRT is the fastest backend and is already installed on the Jetson if you flashed it with the Jetpack. You still need to freeze the model on your host computer and copy it over to the Jetson; this is documented in the deploy_cpp documentation.
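The steps above could look roughly like the following sketch. The script name, paths, hostname, and workspace location are all illustrative placeholders, not the exact commands; check the deploy_cpp documentation for the authoritative workflow:

```shell
# On the host: freeze the trained model into a deployable graph
# (script name and paths are placeholders; see the deploy_cpp docs).
./train_py/cnn_freeze.py -p /path/to/pretrained -l /tmp/frozen_model

# Copy the frozen model over to the Jetson (user/host are placeholders).
scp -r /tmp/frozen_model nvidia@jetson.local:/home/nvidia/models/

# On the Jetson: clone into a catkin workspace and build.
cd ~/catkin_ws/src
git clone https://github.com/PRBonn/bonnet.git
cd ~/catkin_ws
catkin_make
```

Since TensorRT ships with the Jetpack, no extra backend installation should be needed on the Jetson itself.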

As for making changes to the code and then using the docker container: the whole working directory gets copied into the docker container when you run the image, so your changes are reflected automatically, without building a new image.
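As a sketch of that workflow, assuming the image name and mount point below (both illustrative, not confirmed from the repo): running the image with the repository mounted as a volume makes local edits visible inside the container, and only a change to the Dockerfile itself (e.g. new dependencies) would require building a new image:

```shell
# Run the container with the local checkout mounted inside it
# (image name and mount path are assumptions; adjust to the repo's docs).
nvidia-docker run -it --rm \
  -v "$(pwd)":/bonnet \
  tano297/bonnet /bin/bash

# Only if you change the Dockerfile (e.g. add dependencies) do you need
# to build and tag a new image of your own:
docker build -t myuser/bonnet .
```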