lukovicaleksa / autonomous-driving-turtlebot-with-reinforcement-learning

Implementation of Q-learning algorithm and Feedback control for the mobile robot (turtlebot3_burger) in ROS.

Real world? #5

Closed: kim-lux closed this issue 2 years ago

kim-lux commented 3 years ago

Hello, I'm glad the simulation worked fine before. But now, when I try to run it on the TurtleBot in a real environment, it keeps crashing. The wheels don't stop and keep spinning even after it bumps into something. Do I need Gazebo or RViz even when running in a real environment? Do I need to upload a map? It's an honor to work with your project.

lukovicaleksa commented 3 years ago

For the real world you only have to run control_node. Did you set this flag (REAL_ROBOT = False by default) to True?

There is no need for the map.
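For readers following along, a minimal sketch of that flag flip; its exact location within the control node script is an assumption here:

```python
# Near the top of the control node script (assumed location of the flag):
REAL_ROBOT = True   # True -> physical TurtleBot3; False -> Gazebo simulation
```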

kim-lux commented 3 years ago

Yes, I changed REAL_ROBOT = False to True. Feedback and Q-learning also seem to be working, but I'm not sure. And how do I set the destination? Is it measured arbitrarily from the origin reference point? Which Raspberry Pi version did you use? I even adjusted the angle and distance in control_node! Thank you for your answer.

lukovicaleksa commented 3 years ago

The initial position is always (0, 0, 0); the odometry measurement starts from this position when you restart the robot, but continues from the last value if you don't. The initial and goal positions are set in this IF statement:

```python
if REAL_ROBOT:
    X_INIT = 0.0
    Y_INIT = 0.0
    THETA_INIT = 0.0
    X_GOAL = 1.7
    Y_GOAL = 1.1
    THETA_GOAL = 90
```

The Raspberry Pi used is, I think, a Pi 3 Model B, according to the TurtleBot specifications used at my school.

lukovicaleksa commented 3 years ago

Have you checked whether you are getting valid information from the Lidar and Odometry topics?
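A quick way to check is a small throwaway node; this sketch assumes the standard TurtleBot3 topic names /scan and /odom:

```python
#!/usr/bin/env python
# Sanity-check node: print lidar and odometry readings as they arrive.
import rospy
from sensor_msgs.msg import LaserScan
from nav_msgs.msg import Odometry

def scan_cb(msg):
    # Keep only returns inside the sensor's valid range before taking the min
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo('min lidar range: %.2f m', min(valid))

def odom_cb(msg):
    p = msg.pose.pose.position
    rospy.loginfo('odom position: x=%.2f y=%.2f', p.x, p.y)

if __name__ == '__main__':
    rospy.init_node('topic_check')
    rospy.Subscriber('/scan', LaserScan, scan_cb)
    rospy.Subscriber('/odom', Odometry, odom_cb)
    rospy.spin()
```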

kim-lux commented 3 years ago

Yes, I found that the Lidar works fine using scan_node.py, and the Odometry is showing the right values. The Raspberry Pi version is the same. The Turtlebot avoids the first obstacle, but runs straight into the second obstacle and hits it. I'm not sure what the problem is. Thank you for your kind reply as always.

lukovicaleksa commented 3 years ago

You have to understand that Q-learning with a Q-table is not that powerful and will fail in situations it has not seen during training. You may consider re-training the robot in different environments and using the new Q-table for your particular problem.
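For context, this is the generic tabular update underlying such an approach (a sketch with illustrative hyperparameters, not the repository's exact code). The key limitation: a state index never visited during training keeps its initial, uninformative value.

```python
import numpy as np

ALPHA, GAMMA = 0.5, 0.9           # learning rate and discount factor (illustrative)
N_STATES, N_ACTIONS = 64, 3       # e.g. discretized lidar states x {left, forward, right}
Q = np.zeros((N_STATES, N_ACTIONS))

def q_update(s, a, reward, s_next):
    # Standard tabular Q-learning step: move Q(s, a) toward the reward
    # plus the discounted best value of the next state. States that never
    # occur in training are never updated, so an unseen obstacle layout
    # can defeat the table.
    Q[s, a] += ALPHA * (reward + GAMMA * np.max(Q[s_next]) - Q[s, a])
```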

kim-lux commented 3 years ago

I understand that part, but I don't know why the Turtlebot's wheels keep moving after it crashes. I'll try training it a bit more in different environments. I wonder if there were many failures when shooting the YouTube video. Thank you!

lukovicaleksa commented 3 years ago

The robot should definitely stop when it approaches an obstacle closer than the preset distance, but only if the obstacle is within ±75 degrees of the robot's moving direction. Is that the case for you? The shooting for YouTube went smoothly :D but with carefully designed obstacles. You're welcome, I hope you will make it work!
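A sketch of that kind of safety check under the stated ±75° window; the stop distance and the way the scan is indexed here are assumptions, not the repository's actual values:

```python
import numpy as np

STOP_DISTANCE = 0.25   # illustrative stop threshold in meters
WINDOW_DEG = 75.0      # obstacle must lie within +-75 deg of the heading

def obstacle_ahead(ranges, angles_deg):
    """ranges: lidar distances in meters; angles_deg: matching beam angles,
    with 0 deg pointing along the robot's moving direction."""
    ahead = np.abs(angles_deg) <= WINDOW_DEG
    valid = ranges > 0.0                    # drop dead/invalid returns
    return bool(np.any(ranges[ahead & valid] < STOP_DISTANCE))

# If obstacle_ahead(...) returns True, the control node should publish
# zero velocity before doing anything else.
```

Note that an obstacle hit from outside the window (e.g. a side collision while turning) would not trigger this check, which may explain wheels that keep spinning after a crash.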

kim-lux commented 3 years ago

It's still not running, but your advice has been very helpful. Thank you. When you ported the code from 16.04 to 20.04 last time, did you run it on the real robot?

lukovicaleksa commented 3 years ago

Yes, but I have not tested it on the real robot because I no longer have access to one. Did you pull the latest version of the code from the repo?

kim-lux commented 3 years ago

Yes, I am using the most recent version. Early versions failed even in simulation, but the latest version works perfectly in simulation.

lukovicaleksa commented 3 years ago

Then I have no idea what could be going wrong with the real robot. Try logging/printing everything relevant (odometry, lidar, Q-values, actions) to see what might be wrong.
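A sketch of that debug logging as a helper you could call once per control-loop iteration; all the argument names here are hypothetical and should be wired to whatever the control node actually computes:

```python
import rospy

def log_step(x, y, theta, min_range, state, q_row, action):
    """Log one control-loop iteration (argument names are hypothetical)."""
    rospy.loginfo('odom: x=%.2f y=%.2f theta=%.1f deg | min lidar: %.2f m | '
                  'state: %d | Q-row: %s | action: %d',
                  x, y, theta, min_range, state, q_row, action)
```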

kim-lux commented 3 years ago

In my opinion, the only remaining difference is between 20.04 and 16.04. The current code crashes in the 16.04 simulation. Can I get the data from the 16.04 version used in the YouTube video?
Feedback learning is not running in 16.04 simulations with the current code.

This is probably my last attempt.

Thank you as always.

lukovicaleksa commented 3 years ago

Oh, I'm sorry, but I have overwritten it with the new one. Can you perform the learning phase again for 16.04?

kim-lux commented 3 years ago

There seems to be a problem with the script code you uploaded in July, but I can't figure out which script is the problem. We combined the data uploaded in July with the scripts uploaded in March. As a result, the Ubuntu 16.04 simulation was successful. I don't know if this will work in the real world, but I'll give it a try this week and let you know.

lukovicaleksa commented 3 years ago

Hope you succeed!