berkeleyauv / robosub_ws

All the code used to compete in the Robosub competition

Figure out if Gazebo can be run headless, and if images can be rendered #35

Closed: AVSurfer123 closed this issue 3 years ago

AVSurfer123 commented 4 years ago

This would be good so that we could run some form of integration testing with Travis/GitLab CI, and also test without a GPU. Some links that might help:
http://wiki.ros.org/simulator_gazebo/Tutorials/RunningSimulatorHeadless
https://answers.gazebosim.org//question/22030/run-gazebo-headless-on-aws-and-render-locally-with-the-gzclient/

marcogelle commented 4 years ago

So far, to launch Gazebo without the GUI, setting `default="false"` on the `<arg name="gui" .../>` line near the top of descriptions/vortex_descriptions/launch/robosub_world.launch (or whichever launch file is relevant) works. However, the windows for the camera inputs still pop up; I am currently looking for a way to fix this. I am also looking for an easier way to run Gazebo headless without editing the launch file itself, such as command line options.

marcogelle commented 3 years ago

Update: run `roslaunch vortex_descriptions robosub_world_sub.launch gui:=false camerafront:=0 cameraunder:=0` to run the simulator headless. The `gui` argument toggles the main window, while `camerafront` and `cameraunder` toggle the camera windows. This currently only works with robosub_world_sub.launch on the simulator_data branch. On other branches, try testing this with robosub_world.launch via `roslaunch vortex_descriptions robosub_world.launch gui:=false`. http://gazebosim.org/tutorials?tut=ros_roslaunch is where I referenced how to pass command line arguments through roslaunch.
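For the CI use case from the original post, here is a rough sketch of how an integration test could start the simulator headless with these arguments. The package, launch file, and argument names are taken from the command above; the script structure itself (subprocess wrapper, timeout handling, etc.) is just an assumption, not something that exists in the repo:

```python
#!/usr/bin/env python
# Rough sketch: start the simulator headless from a test/CI script.
import subprocess


def start_headless_sim():
    # Launch robosub_world_sub.launch with the main GUI and both camera
    # windows disabled, as described in the comment above.
    return subprocess.Popen([
        "roslaunch", "vortex_descriptions", "robosub_world_sub.launch",
        "gui:=false", "camerafront:=0", "cameraunder:=0",
    ])


if __name__ == "__main__":
    sim = start_headless_sim()
    try:
        sim.wait()
    except KeyboardInterrupt:
        sim.terminate()
```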

marcogelle commented 3 years ago

I verified that the camera images can still be rendered and read through OpenCV with the show_images.py script, even with Gazebo running headless.
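For anyone reproducing this check, here is a minimal sketch of the kind of thing show_images.py does: subscribe to a simulated camera topic and hand the frames to OpenCV. The topic name `/vortex/camera_front/image_raw` is a guess, and the sketch writes frames to disk instead of opening a window so it also works on a machine with no display:

```python
#!/usr/bin/env python
# Minimal sketch: grab frames from a simulated camera while Gazebo runs headless.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()


def callback(msg):
    # Convert the ROS Image message into an OpenCV BGR array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    # Save to disk rather than cv2.imshow so no display is required.
    cv2.imwrite("frame.png", frame)


if __name__ == "__main__":
    rospy.init_node("headless_image_check")
    rospy.Subscriber("/vortex/camera_front/image_raw", Image, callback, queue_size=1)
    rospy.spin()
```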