lukovicaleksa / autonomous-driving-turtlebot-with-reinforcement-learning

Implementation of Q-learning algorithm and Feedback control for the mobile robot (turtlebot3_burger) in ROS.

How to run this code? #3

Closed · kim-lux closed this issue 3 years ago

kim-lux commented 3 years ago

I am interested in this video and just want to follow it, but I'm a beginner with ROS, so I can't get started with this Git project. When I run this project, the Gazebo simulation fails (the turtlebot crashes). I think this is because I just run each Python file on its own (e.g., `$ python scan_node.py` to run scan_node.py). Can you explain how to run the whole project (like in the rqt graph)? I think I have to use the roslaunch command, but I don't know how to use it.

Could you share the commands you use to start this simulation in Gazebo?

lukovicaleksa commented 3 years ago

In order to run ROS nodes you need to use the `rosrun` command; `roslaunch` is for launching ROS packages such as the Gazebo simulator. I suggest you follow the tutorials on the ROS Wiki if you want to learn the basics of ROS: http://wiki.ros.org/ROS/Tutorials. Only after you learn the basics of the ROS architecture can you start building projects. A sketch of a typical session is shown below.
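For reference, a typical session might look like the following. This is a sketch assuming the standard `turtlebot3_gazebo` package is installed and that this repository's scripts live in a catkin package named `master_rad` (the package name used later in this thread); adapt the launch file and node name to your setup.

```bash
# Terminal 1: start Gazebo with a TurtleBot3 world
# (turtlebot3_world.launch is one of the standard turtlebot3_gazebo launch files)
export TURTLEBOT3_MODEL=burger
roslaunch turtlebot3_gazebo turtlebot3_world.launch

# Terminal 2: run one of the repository's nodes
rosrun master_rad control_node.py
```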

kim-lux commented 3 years ago

We have already studied the basics of the ROS architecture, and we run this project with the command `rosrun master_rad control_node.py`. When we use paths 0 and 2 we succeed, but the other paths fail. When you worked on this project, did you succeed on all the paths? Thank you for your response; I am honored to study this project.

lukovicaleksa commented 3 years ago

I can't really tell you from here what exactly the problem is. That node does the basic feedback control, with no Q-learning included. Can you tell me what the exact problem is? Does the robot spawn at the proper position? Does the robot reach the goal position? Note this: you can't expect the robot to avoid obstacles using this node; it is pure feedback control without using the Lidar information. I used it in a blank RViz world just to test the feedback control (see the sketch below for what that means in practice).
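To make the distinction concrete, here is a minimal sketch of a pure go-to-goal feedback controller of this kind: it steers toward the goal using odometry only, with no Lidar input. The gains, goal coordinates, and threshold are illustrative assumptions, not the repository's actual parameters.

```python
#!/usr/bin/env python
# Minimal go-to-goal feedback controller sketch (odometry only, no Lidar).
# Gains, goal pose, and threshold below are illustrative, not the repo's values.
import math
import rospy
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry
from tf.transformations import euler_from_quaternion

GOAL_X, GOAL_Y = 2.0, 1.0    # example goal position [m]
K_RHO, K_ALPHA = 0.3, 0.8    # example feedback gains
GOAL_DIST_THRESHOLD = 0.05   # example "goal reached" tolerance [m]

def odom_callback(msg):
    x = msg.pose.pose.position.x
    y = msg.pose.pose.position.y
    q = msg.pose.pose.orientation
    _, _, theta = euler_from_quaternion([q.x, q.y, q.z, q.w])

    rho = math.hypot(GOAL_X - x, GOAL_Y - y)               # distance to goal
    alpha = math.atan2(GOAL_Y - y, GOAL_X - x) - theta     # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))   # wrap to [-pi, pi]

    cmd = Twist()
    if rho > GOAL_DIST_THRESHOLD:
        cmd.linear.x = K_RHO * rho
        cmd.angular.z = K_ALPHA * alpha
    pub.publish(cmd)  # a zero Twist stops the robot once the goal is reached

rospy.init_node('feedback_control_sketch')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
rospy.Subscriber('/odom', Odometry, odom_callback)
rospy.spin()
```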

lukovicaleksa commented 3 years ago

Oh, I'm sorry, I read it wrong: the feedback_control_node does what I just described, while the control_node includes the Q-learning and should avoid obstacles. Yes, of course I succeeded on all paths. So can you tell me what the exact problem is for you? Is the robot not avoiding obstacles properly?

kim-lux commented 3 years ago

Sorry for the late reply. We run control_node.py via rosrun, but everything fails except paths 0 and 2, as pictured. I'm not sure what is going wrong. Is there other important code needed to run this program? We just use `rosrun master_rad control_node.py`. Please let us know if we are doing anything wrong. Thanks for the reply.

[Screenshots: path4, path1]

lukovicaleksa commented 3 years ago

Yes, that's how you run the code; nothing else needs to be done. Just choose the path at the top of the .py file and it should work. Which versions of ROS and Linux are you using? By the look of your screen I guess you are using newer versions of Linux and ROS. I used Ubuntu 16.04 and ROS Kinetic Kame on a virtual machine. Everything should work fine; the version of the code posted here is the final version that worked for me.
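For readers unfamiliar with this pattern, "choosing the path at the top of the .py file" means editing a module-level setting before running the node. A purely hypothetical illustration of what that could look like; the variable names and coordinates below are assumptions, not the repository's actual identifiers:

```python
# Hypothetical illustration only - names and values are assumptions,
# not the repository's actual identifiers.
PATH_IND = 0  # index of the predefined path to run

# each entry: (x_start, y_start, x_goal, y_goal)
PATHS = [
    (-0.4, -0.4, 0.5,  0.5),   # path 0
    (-0.4,  0.4, 2.0, -1.0),   # path 1
]
X_START, Y_START, X_GOAL, Y_GOAL = PATHS[PATH_IND]
```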

kim-lux commented 3 years ago

Thanks for your kind reply. I'll check it out and post back.

kim-lux commented 3 years ago

I ran it with 16.04 and Kinetic and still got the same result. Could running it in a VM be the source of the trouble? Thanks for the response.

[Screenshot: Path4]

lukovicaleksa commented 3 years ago

Then I'm afraid I don't know from here what the problem is. I will try to debug it in the coming days when I get some time. I hope to solve the problem and answer you as soon as possible.

lukovicaleksa commented 3 years ago

Fixed everything; the scripts had a few bugs that I had left in while testing. I did one more learning phase and acquired a new Q-table, which is stored in the Log_learning_FINAL folder. Everything works fine now. One note: beware of the feedback controller parameters and the goal distance and angle thresholds; reaching the goal depends a lot on these. Please let me know if all paths work for you now, so I can close the issue.
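For context, here is a sketch of how a learned Q-table is typically loaded and exploited at run time. The filename, delimiter, and indexing are assumptions; the thread only states that the new table is stored in the Log_learning_FINAL folder.

```python
import numpy as np

# Assumed filename and CSV format - not confirmed by the thread.
Q_TABLE_PATH = 'Log_learning_FINAL/Qtable.csv'
Q_table = np.genfromtxt(Q_TABLE_PATH, delimiter=',')  # shape: (n_states, n_actions)

def get_best_action(state_index):
    """Greedy (exploitation-only) action selection at test time:
    pick the action with the highest learned Q-value for this state."""
    return int(np.argmax(Q_table[state_index, :]))
```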

kim-lux commented 3 years ago

Thanks for your hard work. But unfortunately I can't seem to get Q-learning to work properly with this code. The terminal says Q-learning is being applied, but there are many times when the direction change does not occur. Could you please let me know what the problem is? Now it crashes on all paths. Thank you always.

lukovicaleksa commented 3 years ago

I have migrated my PC fully to Linux in the meantime. Now I am using Ubuntu 20.04 with ROS Noetic.

[Screenshot: PARAMETERS]

kim-lux commented 3 years ago

Finally succeeded, all thanks to you. It worked once I matched the versions. Thank you very much.

lukovicaleksa commented 3 years ago

You're welcome! I'm glad it works now.