TempleRAIL / drl_vo_nav

[T-RO 2023] DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles
https://doi.org/10.1109/TRO.2023.3257549
GNU General Public License v3.0

About the Lidar Data preprocessing #27

Closed lehoangan2906 closed 4 months ago

lehoangan2906 commented 5 months ago

Hi, I want to thank you and your team for your hard work on this amazing project. I'm currently reviewing your paper and code, and I don't quite understand how the 1x80x80 lidar map is obtained from the 20x720 min & avg pooled data described in the paper. When I read the code, I struggled to find these preprocessing functions. Can you explain this further, and if convenient, can you walk me through your code structure (how things are organized)? Again, I really appreciate the time and effort you have put into this project!

zzuxzt commented 5 months ago

Thank you for your interest in our work. Sorry, I have been busy with some personal issues these days.

The data preprocessing code is in cnn_data_pub.py, the network structure in custom_cnn_full.py, the training code in drl_vo_train.py, and the navigation code in drl_vo_inference.py.

Hopefully these simple descriptions can give you better guidance. I will restructure the code later when I am less busy.
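For readers puzzled by the 1x80x80 shape, here is one hedged sketch of the min & avg pooling step. The shapes (20 scans of 720 beams, a pooling kernel of 9, and a final tiling from 40x80 to 80x80) are assumptions chosen so the dimensions work out; the actual cnn_data_pub.py may differ in details:

```python
import numpy as np

def lidar_to_map(scans: np.ndarray) -> np.ndarray:
    """Sketch: `scans` is (20, 720). Each scan is reduced to an
    80-value min row and an 80-value avg row (kernel size 9),
    giving a 40x80 array; tiling it twice yields a 1x80x80 map."""
    assert scans.shape == (20, 720)
    pooled = np.zeros((40, 80))
    for n, scan in enumerate(scans):
        blocks = scan.reshape(80, 9)             # 80 blocks of 9 beams
        pooled[2 * n] = blocks.min(axis=1)       # min pooling row
        pooled[2 * n + 1] = blocks.mean(axis=1)  # avg pooling row
    return np.tile(pooled, (2, 1))[np.newaxis]   # shape (1, 80, 80)
```

The key point is that 20 scans x 2 pooled rows x 80 bins gives 3200 values, which tiled twice fills the 6400-cell 80x80 input the CNN expects.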

lehoangan2906 commented 5 months ago

Hi zzuxzt, thank you for your prompt reply. Do you mind if I ask you another question?

When I set up the turtlebot2 following your README.md, I encountered the following issue:

The following packages have unmet dependencies:
 ros-noetic-joystick-drivers : Depends: ros-noetic-ps3joy but it is not installable
                               Depends: ros-noetic-wiimote but it is not installable
E: Unable to correct problems, you have held broken packages.

I have been searching for a solution for several days but still don't know how to resolve it. Do you have any advice?

I really appreciate your guidance on the code structure and will analyze it further soon. I wish you well with your work and look forward to your reply 😁

zzuxzt commented 5 months ago

You can try removing the ros-noetic-joystick-drivers package, since DRL-VO does not use it. I guess the reason is that the ps3joy and wiimote packages do not support Noetic, as mentioned here.
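If you still want joystick support, one hedged workaround (assuming a basic joystick node is all you need) is to skip the metapackage and install only the part that still builds on Noetic:

```shell
# Remove the metapackage that pulls in the broken ps3joy/wiimote deps,
# then install only the core joystick driver, which still builds on Noetic.
sudo apt-get remove ros-noetic-joystick-drivers
sudo apt-get install -y ros-noetic-joy
```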

lehoangan2906 commented 5 months ago

I really appreciate your help. May I contact you again if I run into more problems?

zzuxzt commented 5 months ago

Feel free to post your questions. I will answer your questions when I am free.

lehoangan2906 commented 4 months ago

Hi @zzuxzt, I'm currently reading the Simulation Configuration and Hardware Configuration sections of your paper. May I ask why you didn't both train the DRL network and run the simulations on the DGX-1 server, instead of splitting them into a server-specific task and a desktop-specific task? And if there is some way to train the model without needing to run RViz or Gazebo, could you please guide me?

Another point I noticed is that you used different Ubuntu and ROS versions between the training/simulation phase and the real-world hardware testing phase. Is that optional, or is there a specific reason for doing so? I assume there would be conflicts between the versions.

Again, thank you for your time and kindness. I hope to hear from you soon! [two screenshots of the paper's configuration sections attached]

lehoangan2906 commented 4 months ago

Besides, I think you should update the turtlebot2 installation script, as some of its packages are no longer supported: https://raw.githubusercontent.com/zzuxzt/turtlebot2_noetic_packages/master/turtlebot2_noetic_install.sh 😁

zzuxzt commented 4 months ago

  1. The direct reason is that the DGX-1 server does not support a GUI, which makes it difficult to run simulations. More importantly, if your algorithm only works well on the same machine it was trained on, then it has limited generalization capability and is not a good algorithm. The machine should not be a factor influencing the algorithm.

  2. You can easily find many other Python-based simulators that do not require Gazebo or RViz. For example, the "Related Works" section (the second part) of my paper discusses many interesting works that do not use Gazebo.

  3. The difference in versions is due to the fact that the robot computer, a TX2, only supports Ubuntu 18.04. This is one of the tricky parts of hardware experiments: you need to deal with compatibility issues between different versions.
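The idea in point 2, training without Gazebo or RViz, can be sketched with a minimal headless environment. Everything below (the class name, the reward shape, the goal threshold) is illustrative and not taken from the DRL-VO code:

```python
import numpy as np

class PointRobotEnv:
    """Headless 2D point-robot environment, a toy stand-in for a
    Gazebo world: the agent moves toward a goal; no GUI is needed."""
    def __init__(self, goal=(5.0, 5.0)):
        self.goal = np.array(goal)
        self.pos = np.zeros(2)

    def reset(self):
        self.pos = np.zeros(2)
        return self.pos.copy()

    def step(self, action):
        self.pos += np.clip(action, -1.0, 1.0)   # bounded velocity command
        dist = np.linalg.norm(self.goal - self.pos)
        reward = -dist                           # dense negative-distance reward
        done = dist < 0.5
        return self.pos.copy(), reward, done

# A random-policy rollout, entirely headless:
env = PointRobotEnv()
obs = env.reset()
for _ in range(20):
    obs, reward, done = env.step(np.random.uniform(-1.0, 1.0, size=2))
    if done:
        break
```

Any DRL library that accepts a gym-style step/reset interface can then train against such an environment on a GUI-less server.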

zzuxzt commented 4 months ago

Thanks for your reminder. I will find time to update it since it was created a few years ago and many packages have changed.

lehoangan2906 commented 4 months ago

I appreciate your help!