reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in the ROS Gazebo simulator. Using a Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.
MIT License
571 stars 119 forks

Four random box #44

Closed Darkhan92 closed 1 year ago

Darkhan92 commented 1 year ago

Hello, dear Reinis Cimurs,

First of all, thank you for publishing such great work. I want to use this model for our own custom robot as a base.

Here are some problems I have encountered while running the simulation. When I run it, it works fairly well; at first the robot just circled in place, but I solved that by changing the seed number. Another issue is that it shows an error that it cannot load the four cardboard boxes, saying the box does not exist. So RViz and Gazebo load a plain empty space, without the boxes they should have. What could be the problem? In the Python code I did not find where this happens, although I did see where the random box function is defined.

Here is where the random boxes are defined, but I still did not see where the walls and boxes shown in the screenshots are defined.

reiniscimurs commented 1 year ago

Hi,

Which repo version (branch) did you use? The issue is that the correct world environment is not loaded. The environment should not be empty and should have obstacles and boxes in it. This could happen if something is not sourced properly.

Let me know the full terminal output when you start the training script. Also did you run the full sourcing commands as per the tutorial?

Darkhan92 commented 1 year ago

How do I find the correct repo version? I could not run the simulation in my base Python environment. Python did not see the ROS packages module. Then, if I added the ROS packages to the Python path, the terminal did not see the numpy module.

So I could only run it in a dedicated Python environment. I exported everything except the `source ~/.bashrc` command, as running it exits from the dedicated Python environment back to the base environment. I will show you tomorrow all the commands that I run before starting the code, and the terminal output.

reiniscimurs commented 1 year ago

This repo currently has 3 branches, and all are slightly different. When I asked about the version, I meant which branch was used, as sometimes there can be discrepancies there.

Check if the answers in this issue solve your problem: https://github.com/reiniscimurs/DRL-robot-navigation/issues/30 Usually it is enough to just install all the missing packages in your base environment. If ROS packages are missing, install them with pip3, and do the same for numpy (or any other relevant packages that might be missing). If you are using a virtual environment, you will have to make sure you are sourcing the locations of the repo, so it knows where to look for the files.
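A quick way to see which modules the current interpreter can actually import is a sketch like the following (the module names listed are just typical ones for this setup and are assumptions; adjust them for your environment):

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names the current interpreter cannot import."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Modules this repo typically needs; edit the list to match your setup.
for name in missing_modules(["numpy", "rospy", "tf"]):
    print(f"{name} is missing -> try: pip3 install {name}")
```

Running this in both the base and the virtual environment quickly shows which one is actually missing packages.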

Darkhan92 commented 1 year ago

Here are the commands. I have installed ROS Noetic, Ubuntu 20.04, Python 3.8.

These are the commands that I am running (shown in the attached screenshot).

reiniscimurs commented 1 year ago

Looks like you are exporting the wrong Gazebo resource path. These are absolute paths. It looks like your repo is stored in `~/Desktop/MOB_robot_RL`, so your Gazebo resource path export command should be: `export GAZEBO_RESOURCE_PATH=~/Desktop/MOB_robot_RL/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch`

Try it out and see if it works.

This issue is explained in https://github.com/reiniscimurs/DRL-robot-navigation/issues/30 as well as setting up the paths is explained in the tutorial: https://medium.com/p/d62715722303

Darkhan92 commented 1 year ago

Here I corrected the path in GAZEBO_RESOURCE_PATH, but it still cannot find it.

reiniscimurs commented 1 year ago

Did you try with `export GAZEBO_RESOURCE_PATH=~/Desktop/MOB_robot_RL/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch`?

Darkhan92 commented 1 year ago

Yes, I have tried, but it does not help. When I try `export GAZEBO_RESOURCE_PATH=~/Desktop/MOB_robot_RL/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch`, there is a model state error in RViz. But when I try `export GAZEBO_RESOURCE_PATH=~/home/issai/Desktop/MOB_robot_RL/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch`, it works, but the problem remains that cardboard_box does not exist.

reiniscimurs commented 1 year ago

I can easily reproduce this issue by not setting, or setting the wrong, GAZEBO_RESOURCE_PATH, which leads me to believe that the path to your repo is not set up quite right.

What you can do is check the GAZEBO_RESOURCE_PATH value before and after you execute the export command:

  1. Open a new terminal, execute `echo $GAZEBO_RESOURCE_PATH`, and note the result.

  2. Execute the other export commands:
     `export ROS_HOSTNAME=localhost`
     `export ROS_MASTER_URI=http://localhost:11311`
     `export ROS_PORT_SIM=11311`
     `export GAZEBO_RESOURCE_PATH=~/Desktop/MOB_robot_RL/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch`
     `source ~/.bashrc`
     `source devel/setup.bash`

  3. Then execute `echo $GAZEBO_RESOURCE_PATH` again and note the output.
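To double-check what the simulator will actually see, you can also inspect the exported value from inside Python. This is only a sketch for diagnosing the problem, not part of the repo:

```python
import os

def check_resource_paths(value):
    """Map each colon-separated path entry to whether it is a real directory."""
    return {p: os.path.isdir(p) for p in value.split(":") if p}

# Inspect whatever is currently exported in this shell session
current = os.environ.get("GAZEBO_RESOURCE_PATH", "")
for path, exists in check_resource_paths(current).items():
    print(path, "OK" if exists else "MISSING")
```

If any entry prints MISSING, Gazebo cannot find the launch and world files there, which matches the empty-world symptom.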

Darkhan92 commented 1 year ago

This source path was obtained through the folder properties (as in the attached screenshots) and works only if I specify it that way. Otherwise, if I do as you suggest, a problem occurs while running and the RViz simulation freezes. Sorry for the long questions, and thank you for your patience.

Darkhan92 commented 1 year ago

When I run `echo $GAZEBO_RESOURCE_PATH` the first time, it shows an empty string; after exporting everything, it shows the source path above. I am also not very experienced with ROS and Gazebo, so it may be a little troublesome for me to identify the problem.

reiniscimurs commented 1 year ago

In the Gazebo resource path you can see a repeating `/home/issai` part. That is what seems to be causing the issue. It happens because you are exporting `~/home/issai/Desktop/MOB_robot_RL/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch`. The `~` sign already specifies the home directory and user, and is equal to `/home/issai/`, so what you actually end up exporting is `/home/issai/home/issai/Desktop/MOB_robot_RL/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch`.

So you should export either `/home/issai/Desktop/MOB_robot_RL/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch` or `~/Desktop/MOB_robot_RL/DRL-robot-navigation/catkin_ws/src/multi_robot_scenario/launch` with the `~` included.

See what `echo $GAZEBO_RESOURCE_PATH` says about these exported paths and whether it is equal to the path you need.
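The duplication can be sketched in a few lines. `expand_tilde` below is a simplified stand-in for what the shell does with a leading `~`, with the home directory hard-coded to `/home/issai` purely for illustration:

```python
def expand_tilde(path, home="/home/issai"):
    """Simplified shell-style tilde expansion: a leading '~' becomes the home dir."""
    if path == "~" or path.startswith("~/"):
        return home + path[1:]
    return path

good = expand_tilde("~/Desktop/MOB_robot_RL")
bad = expand_tilde("~/home/issai/Desktop/MOB_robot_RL")
print(good)  # /home/issai/Desktop/MOB_robot_RL
print(bad)   # /home/issai/home/issai/Desktop/MOB_robot_RL  (home duplicated)
```

The second call shows exactly how prefixing `~` to a path that already contains `home/issai` produces the doubled path seen in the echo output.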

Darkhan92 commented 1 year ago

Thank you very much, you helped a lot. Finally, it worked for me. Thank you!!!

reiniscimurs commented 1 year ago

Great that you managed to get it working!

Darkhan92 commented 1 year ago

Hello, I have a question about training. I also found a paper of yours that used a depth camera for goal-based obstacle avoidance. How is it related to the DRL GitHub repo? Because DRL-robot-navigation uses a Velodyne lidar sensor.

reiniscimurs commented 1 year ago

The basic architecture of that paper is similar to this GitHub repo, but it is generally unrelated. It was implemented in TensorFlow with sequential depth image inputs and a DDPG model training architecture.

Learning directly from distances is easier and a lot more lightweight. The paper you mention ended up with a very large network that was quite hard to deploy on a real robot.

Darkhan92 commented 1 year ago

With the hyperparameters set in the GitHub repository, how many epochs are required to get reliable goal-reaching and acceptable obstacle avoidance? I did not change any parameters except setting gamma = 0.9, and after running the training process for 20 epochs it still does not learn to avoid obstacles and reach goals. So what are the optimal learning rate and the other important hyperparameters, and their optimal values? Thank you!

Darkhan92 commented 1 year ago

In the end, I am going to change the robot's kinematics (it will not be a TurtleBot but a two-wheeled, Segway-like robot) and also use a depth camera instead of the Velodyne lidar sensor. So I assume I will also need to use convolutional neural networks to recognize obstacles.

reiniscimurs commented 1 year ago

I would assume a larger gamma might work better, as you would have a longer future window impacting each decision. For me, the default hyperparameters in this repo work quite well, but there is some randomness involved, so you might want to try different seed values. Generally, you should start seeing collision avoidance behavior between 20 and 40 epochs.
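One back-of-the-envelope way to see the effect of gamma (this heuristic is my addition, not something from the repo) is the "effective horizon" of the discount, roughly 1 / (1 - gamma): the number of future steps that still meaningfully influence each decision.

```python
def effective_horizon(gamma):
    """Approximate number of steps a reward keeps noticeable weight: 1 / (1 - gamma)."""
    return round(1.0 / (1.0 - gamma))

print(effective_horizon(0.9))   # 10 steps
print(effective_horizon(0.99))  # 100 steps -> a much longer future window
```

So lowering gamma from 0.99 to 0.9 shrinks the planning window by roughly an order of magnitude, which can make long-range goal-reaching harder to learn.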

If you have a working robot model, it should not be too difficult to change the robot. For depth images, though, be wary of the field of view, as a single camera tends not to pick up obstacles in the robot's periphery.

Darkhan92 commented 1 year ago

Hi Reinis,

How are you?

So, finally, the mobile robot now works pretty well.

As you remember, I told you that I want to change the robot model itself, but I could not find a distinct place where the robot model's dynamics and kinematics are defined. I understand that it is within the Gazebo environment, but I still have not found the explicit location. I am new to ROS and Gazebo.

So where is it located, and where can I change it? Thank you beforehand!