jkulhanek / robot-visual-navigation

Official implementation of "Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning"
MIT License

ROS1 or ROS2 #9

Closed: GongYanfu closed this issue 2 years ago

GongYanfu commented 2 years ago

I want to know which version of ROS your TurtleBot used. ROS1 is installed on my robot, and ROS1 supports Python 2.7 by default, but the README says the environment requires Python 3.6+. How did you solve this problem when porting the trained model to the real robot?

jkulhanek commented 2 years ago

I used ROS1 with Python 3, I believe.
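For context (a hedged note, not part of the original reply): rospy is pure Python, so ROS1 nodes can usually be launched under Python 3 explicitly even on distributions whose system default is Python 2.7, and ROS1 Noetic targets Python 3 natively. A minimal smoke test, assuming a sourced ROS1 environment where `rospy` is importable:

```python
#!/usr/bin/env python3
# Minimal ROS1 node run explicitly under Python 3 (a sketch; assumes a sourced
# ROS1 environment so that rospy and std_msgs are on the Python path).
import rospy
from std_msgs.msg import String

rospy.init_node("py3_smoke_test")
pub = rospy.Publisher("chatter", String, queue_size=1)
rospy.sleep(1.0)  # let the publisher register with the master
pub.publish(String(data="hello from python3"))
```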

GongYanfu commented 2 years ago

Did you not change the default Python on the robot? On my robot I created a virtual environment using venv with Python 3.9 and successfully ran: python evaluate_turtlebot.py turtlebot --num-episodes 100. I also successfully created and compiled a workspace based on your ROS package in the robot's home directory. However, python src/ros_agent_service/src/main.py cannot be executed inside the virtual environment, because then some ROS packages cannot be found; and if I run it outside the venv under the current Ubuntu user, packages such as deep-rl, which I installed in the virtual environment, cannot be found. [screenshots attached]
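A common workaround for this split (a hedged suggestion, not from the repo docs): either recreate the venv with `python3 -m venv --system-site-packages ...` so the system ROS packages stay importable next to the venv's deep-rl install, or bridge the paths at runtime as sketched below. The ROS prefix path is an assumption (Melodic is what the traceback later in this thread shows):

```python
# Hypothetical runtime bridge (assumption: ROS1 Melodic at its default prefix).
# Run main.py from inside the venv, but prepend the system ROS package path so
# rospy and the generated message modules resolve alongside the venv packages.
import sys
sys.path.insert(0, "/opt/ros/melodic/lib/python2.7/dist-packages")
import rospy  # rospy is pure Python, so it typically imports under Python 3 as well
```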

GongYanfu commented 2 years ago

Did it execute successfully or not? It looks like this: [screenshot attached]

GongYanfu commented 2 years ago

I found a solution.

jiacheng-yao123 commented 6 months ago

> I found a solution.

Hello. I am having some problems running the real-robot experiments on a TurtleBot2 and hope I can get your help and guidance. I am using a TurtleBot2 with a Kinect v1 camera and run the following commands:

```
roslaunch robot_controller start.launch
python src/ros_agent_service/src/main.py
python goal_picker/src/setgoal.py --image /home/zwz/robot-visual-navigation-master/goal/door.jpg
```

The following error was encountered:

```
[ERROR] [1704981429.247813]: bad callback: <function create_send_response.<locals>.send_response at 0x7f0fe4461040>
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "ros_agent_service/src/main.py", line 33, in send_response
    action = server.compute_step(request)
  File "ros_agent_service/src/main.py", line 22, in compute_step
    action = self.agent.act((observation, goal,))
  File "/home/zwz/robot-visual-navigation-master/ros/src/ros_agent_service/src/agent.py", line 57, in act
    obs = (_transform_image(img), _transform_image(goal))
  File "/home/zwz/robot-visual-navigation-master/ros/src/ros_agent_service/src/agent.py", line 13, in _transform_image
    return np.expand_dims(((img.astype(np.float32) - mean) / std).transpose([2, 0, 1]), 0)
ValueError: axes don't match array
```

According to the printouts below, the goal image is a 3-channel RGB image while the observation image is a single-channel grayscale image. I found the gmapping_demo.launch and 3dsensor.launch files that start.launch uses to bring up the sensor. How do I start the RGB image stream?

```
img.shape (84, 84)
[[ 46  51  47 ... 110 107 108]
 [ 53  52  26 ... 125  98 106]
 [ 58  42  46 ... 124 117 116]
 ...
 [101  95  98 ... 128 129 128]
 [ 96 106  86 ... 123 129 126]
 [ 91  95  72 ... 123 129 125]]
```

```
goal.shape (84, 84, 3)
[[[139 158 154]
  [139 158 154]
  [144 162 161]
  ...
  [150 170 169]
  [147 167 166]
  [145 165 166]]

 [[139 158 154]
  [141 160 156]
  [144 162 161]
  ...
  [152 170 172]
  [149 167 169]
  [147 165 165]]

 [[139 158 154]
  [142 161 157]
  [143 162 160]
  ...
  [155 170 176]
  [152 167 170]
  [149 165 163]]
```
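The shapes above explain the traceback: `_transform_image` ends with `transpose([2, 0, 1])`, which needs a channel axis, but the observation arrives as a 2-D (84, 84) grayscale array while the goal is (84, 84, 3). A minimal defensive sketch (an assumption, not a patch from the authors; the real `mean`/`std` values live in `agent.py`) that promotes grayscale frames to three channels before normalizing:

```python
import numpy as np

def _transform_image(img, mean=0.0, std=255.0):
    # Guard for 2-D grayscale frames: transpose([2, 0, 1]) requires a channel
    # axis, so replicate the single channel to get an (H, W, 3) image first.
    # The mean/std defaults here are placeholders, not the repository's values.
    if img.ndim == 2:
        img = np.stack([img] * 3, axis=-1)
    return np.expand_dims(((img.astype(np.float32) - mean) / std).transpose([2, 0, 1]), 0)
```

The cleaner fix is on the sensor side: make sure the camera driver actually publishes the RGB stream (with a Kinect v1 under the OpenNI stack this is typically a `/camera/rgb/...` image topic) and that the agent subscribes to it rather than to a mono or depth topic.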