reiniscimurs / GDAE

Goal-driven autonomous exploration through deep reinforcement learning (ICRA 2022): a system that combines reactive and planned robot navigation in unknown environments.

execute GDAM.py #7

Open chih7715 opened 1 year ago

chih7715 commented 1 year ago

Sorry, I'm here to ask a question again. When I try to execute GDAM.py, it can't find a file:

OSError: File /home/wenzhi/GDAE/Code/assets/launch/multi_robot_scenario.launch does not exist

[Screenshot: 2023-02-01 11-17-37]

I'm not sure what's wrong. I haven't connected the device yet; I'm just trying to execute the script. Another question: can the tensorflow errors be ignored? And what is the /r1/cmd_vel node?

reiniscimurs commented 1 year ago

You would require a launch file that either launches a robot in simulation or connects to the real robot. Since the robot launch is entirely dependent on your robot setup, it is not included here. It is a file that automatically connects to and launches the sensors on your robot. For reference, the same file call is included in https://github.com/reiniscimurs/DRL-robot-navigation, where a robot is launched in simulation.

/r1/cmd_vel is a topic to which you publish the control commands, such as linear and angular velocities, for the robot to execute.
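For reference, a minimal rospy sketch of publishing to it (the topic name follows this thread; the namespace and rates depend on your robot):

```python
# Minimal sketch: publishing velocity commands to /r1/cmd_vel (ROS1, rospy).
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("cmd_vel_example")
pub = rospy.Publisher("/r1/cmd_vel", Twist, queue_size=1)
rate = rospy.Rate(10)  # 10 Hz control loop

cmd = Twist()
cmd.linear.x = 0.25   # linear velocity in m/s
cmd.angular.z = 0.1   # angular velocity in rad/s

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```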

chih7715 commented 1 year ago

Can I refer to your original launch file for the real robot? Let me try to figure it out.

reiniscimurs commented 1 year ago

Sorry, I do not have access to those files anymore.

You could try using the launch file from the repo mentioned and just exclude the launching of the robot model for testing.

chih7715 commented 1 year ago

[Screenshot: 2023-02-11 16-40-24]

I tried to use slam_toolbox, but it always shows a "no map received" warning. If I don't use slam_toolbox, can I use Hector SLAM instead, or will this code not run with it?

reiniscimurs commented 1 year ago

You should be able to use any SLAM package, such as Hector SLAM or Gmapping, instead of SLAM_Toolbox. However, SLAM_Toolbox has superior mapping quality compared to the other packages.

For SLAM_Toolbox issues please follow the guides on their repository: https://github.com/SteveMacenski/slam_toolbox

chih7715 commented 1 year ago

[Screenshot: 2023-02-18 17-33-55]

I ran into these problems when running the program. Do the tensorflow warnings need to be ignored?

[Screenshot: 2023-02-18 17-38-13]

Regarding the model path: I am using a model trained with TD3, and the path is as shown in my screenshot. What changes need to be made?

reiniscimurs commented 1 year ago

This GDAM repository is made with a tensorflow implementation in mind and loads a tensorflow-trained model. The TD3 repository uses PyTorch; you will not be able to directly load a PyTorch model into tensorflow. Moreover, the parameters do not match between the two methods: GDAM's input is a 23-value vector, while TD3's is a 24-value vector.

You will have to change the GDAM codebase to use the PyTorch model. What you can do is swap out the tensorflow calls in GDAM for the TD3 PyTorch calls and use the PyTorch model instead. This should not significantly change the behavior.
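A rough sketch of what that swap could look like, assuming the Actor class and checkpoint naming from the DRL-robot-navigation repo (the import path here is a placeholder):

```python
# Hedged sketch: replacing GDAM's tensorflow forward pass with a PyTorch
# TD3 actor. Actor and the .pth naming follow the DRL-robot-navigation
# repo; td3_network is a placeholder module name.
import numpy as np
import torch

from td3_network import Actor  # placeholder import for the TD3 Actor class

state_dim = 24   # TD3 state size; GDAM's 23-value state would need extending
action_dim = 2   # linear and angular velocity

actor = Actor(state_dim, action_dim)
actor.load_state_dict(torch.load("TD3_velodyne_actor.pth", map_location="cpu"))
actor.eval()

def get_action(state):
    """Stand-in for the tensorflow session call in GDAM."""
    with torch.no_grad():
        s = torch.FloatTensor(np.asarray(state).reshape(1, -1))
        return actor(s).cpu().numpy().flatten()  # tanh outputs in [-1, 1]
```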

chih7715 commented 1 year ago

[Screenshot: 2023-03-10 18-17-51]

I'm not familiar with this part. Where do I need to change the settings?

reiniscimurs commented 1 year ago

I would guess your tf tree probably does not have a connection between the map and odom frames. You should check that in rqt (for example with rqt_tf_tree, or by running rosrun tf tf_echo map odom). In this implementation, the connection was made using slam_toolbox, pointing to base_link as the source of the robot's odometry.

chih7715 commented 1 year ago

[Screenshot: 2023-03-21 22-09-32]

Why do this here: aIn[0,0] = (aIn[0,0]+1)/4?

reiniscimurs commented 1 year ago

The output of the neural network is a tanh, meaning it is in the range -1 to 1. But for the linear velocity action we need it to be in the range 0 to 0.5, so we change the range by adding one and dividing by four.
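As a quick sketch of that rescaling:

```python
# Rescale a tanh output a in [-1, 1] to a linear velocity in [0, 0.5]:
# a + 1 maps to [0, 2]; dividing by 4 maps to [0, 0.5].
def rescale_linear(a):
    return (a + 1.0) / 4.0

assert rescale_linear(-1.0) == 0.0  # full reverse tanh output -> standstill
assert rescale_linear(1.0) == 0.5   # full forward tanh output -> max speed
```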

chih7715 commented 1 year ago

[Screenshot: 2023-03-30 21-32-05]

I encountered a strange situation: my goal is x=4.416 and y=-1.75, and my movement distance is x=2.12 and y=0.06. When the distance between them is less than 1.5, I confirm that I have arrived and change the goal. However, in the RViz display, my red and green dots do not seem to be in the positions I described.

chih7715 commented 1 year ago

[Screenshot: 2023-03-30 23-08-50]

I modified the line (trans, rot) = self.listener.lookupTransform('/map', '/odom', rospy.Time(0)) because Hector SLAM does not use odom as the reference frame. Instead, I used the slam_out_pose from Hector SLAM, which estimates the current position of the robot with respect to the map frame. As there is no odom frame in this case, I changed the frame_id to base_frame.

reiniscimurs commented 1 year ago

Over time the odom and map frames begin to drift apart. That is why you have to look up the drift using the lookupTransform function and then update the node locations taking this drift into consideration. If you comment out this stage and keep static trans and rot values, you will not be able to account for the drift and your nodes will be positioned wrongly. If the specified method does not work, you should find some other way to look up the transform between the map frame and the robot's odom frame.
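A minimal sketch of that lookup with ROS1 tf (frame names as used in this implementation):

```python
# Hedged sketch: looking up the map -> odom drift with tf (ROS1).
import rospy
import tf

rospy.init_node("drift_lookup_example")
listener = tf.TransformListener()
listener.waitForTransform("/map", "/odom", rospy.Time(0), rospy.Duration(4.0))
(trans, rot) = listener.lookupTransform("/map", "/odom", rospy.Time(0))
# trans is the [x, y, z] offset and rot the quaternion between the frames;
# apply them to node positions to account for the accumulated drift.
```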

chih7715 commented 1 year ago

I would like to ask: in your experimental video, it seems that a specific target location has been set, and the robot keeps exploring until it reaches that designated target. Where in the code does this need to be set? I only see the parameter setting of x=50 as the initial configuration for starting the robot.

reiniscimurs commented 1 year ago

The goal position is set in the GDAM_args.py file: https://github.com/reiniscimurs/GDAE/blob/fc793eda8de23bed98ba3acd32908843c535510f/Code/GDAM_args.py#L47

You can see that there is an argument for setting the X and Y goal coordinates. The arguments are then passed when creating the environment (https://github.com/reiniscimurs/GDAE/blob/fc793eda8de23bed98ba3acd32908843c535510f/Code/GDAM.py#L75) and set for the environment (https://github.com/reiniscimurs/GDAE/blob/fc793eda8de23bed98ba3acd32908843c535510f/Code/GDAM_env.py#L67).
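For illustration, a sketch of that pattern (the real argument names are in GDAM_args.py at the link above; --goal_x and --goal_y here are placeholders):

```python
# Illustrative sketch of the goal-argument pattern described above.
# The actual definitions live in GDAM_args.py; these names are assumptions.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--goal_x", type=float, default=50.0, help="global goal X")
parser.add_argument("--goal_y", type=float, default=50.0, help="global goal Y")
args = parser.parse_args()

# env = Env(args)  # placeholder name; see GDAM.py#L75 for the actual call
```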

chih7715 commented 1 year ago

I'm not sure why my mobile robot keeps moving forward continuously. I printed the value of "linear" and it shows 0.35. It does avoid obstacles, but is this normal? I set the goal position nearby, but the mobile robot did not move towards it.

chih7715 commented 1 year ago

I encountered such an error, and I don't know why it happened.

min_d = math.sqrt(math.pow(self.nodes[0][2] - self.odomX, 2) + math.pow(self.nodes[0][3] - self.odomY, 2))
IndexError: deque index out of range

reiniscimurs commented 1 year ago

Hi,

Looks like you do not have any nodes to evaluate. Either all of the nodes were reached or no nodes were generated in your implementation.
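A minimal standalone sketch of guarding against that case (the node layout and names follow your traceback):

```python
# Hedged sketch: guard against an empty node deque before indexing it.
# The node layout nodes[i] = [.., .., x, y] follows the traceback above.
import math

def min_node_distance(nodes, odom_x, odom_y):
    """Distance from the robot to the first candidate node, or None if empty."""
    if not nodes:  # avoids IndexError: deque index out of range
        return None
    return math.sqrt((nodes[0][2] - odom_x) ** 2 + (nodes[0][3] - odom_y) ** 2)
```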

As for the robot moving forward, I cannot say why that is; there is not enough information to go on. You should check the position of the currently selected node. All of the waypoints and the selected goal node should be visible in RViz. If not, you can print out the self.goalX and self.goalY values to see if they make sense.

chih7715 commented 1 year ago

I think it's an issue with my model. It seems a freeze was triggered because the robot was not moving forward, causing the output of linear = 0.35 when evaluating the self.last_states status in the recovery process. May I ask to what extent your model training can reach?

reiniscimurs commented 1 year ago

What do you mean by "extent that the model can reach"?

chih7715 commented 1 year ago

"I trained a TD3 model and tested it in a simulated environment, which worked very well. However, when I applied it to the real world, its performance was very poor. It didn't move towards the target position and seemed to be wandering randomly. Is there anything I can adjust to improve its performance?

I would also like to confirm what the 24 inputs of the TD3 model are: 20 lidar scans and what else?

reiniscimurs commented 1 year ago

The state representation is explained here: https://medium.com/@reinis_86651/deep-reinforcement-learning-in-mobile-robot-navigation-tutorial-part3-training-13b2875c7b51

You can also see the state info here: https://github.com/reiniscimurs/DRL-robot-navigation/blob/main/TD3/velodyne_env.py#L229
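Following that file, the 24 values are 20 binned laser readings plus the distance to the goal, the angle to the goal, and the previous linear and angular actions. A minimal sketch of assembling the state:

```python
# Sketch of the 24-value TD3 state, following the linked velodyne_env.py:
# 20 binned laser readings + [dist_to_goal, angle_to_goal, last_lin, last_ang].
import numpy as np

def build_state(laser_bins, distance, theta, last_action):
    assert len(laser_bins) == 20
    robot_state = [distance, theta, last_action[0], last_action[1]]
    return np.append(laser_bins, robot_state)  # shape (24,)
```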

For sim2real transfer there are a lot of things that can go wrong, so you would have to be very specific about what your setup looks like and how exactly you implemented it. Only then can I give a guess at what is happening there.