AliiRezaei / turtlebot3_rl

just testing reinforcement learning on the turtlebot3!

how are you setting a goal point? #1

Open abdul-mannan-khan opened 1 month ago

abdul-mannan-khan commented 1 month ago

Hello @AliiRezaei,

Nice work. Thank you so much for sharing it. I am really interested, particularly because of your work in C++. I am just wondering what would happen if we switched from discrete actions to continuous actions by changing the algorithm from Q-learning to SAC. Any thoughts please? Thank you so much for your response.

AliiRezaei commented 2 weeks ago

Hello Abdul Mannan Khan, Thank you for your kind words.

Switching from Q-learning (with discrete actions) to Soft Actor-Critic (SAC) for continuous actions would allow smoother and more precise control, especially in complex environments. SAC is well suited to continuous control tasks and could improve the robot's performance. However, implementing SAC in C++ would be more complex, since it requires integrating neural networks for the policy and value functions.
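To make the difference concrete, here is a minimal C++ sketch (not taken from this repository) contrasting the two action-selection mechanics: a tabular Q-learning agent picks the discrete action with the highest Q-value, while a SAC-style agent samples a continuous command from a squashed Gaussian policy. The names and numbers (`kNumActions`, the fixed `mean`/`stddev`, the angular-velocity limit) are illustrative assumptions only; in a real SAC agent the policy parameters come from a neural network conditioned on the state.

```cpp
// Illustrative sketch only: discrete (Q-learning) vs. continuous (SAC-style) action selection.
#include <algorithm>
#include <array>
#include <cmath>
#include <iostream>
#include <random>

int main() {
    std::mt19937 rng{42};

    // --- Discrete Q-learning: greedy action = argmax over the Q-values of one state ---
    constexpr int kNumActions = 5;  // e.g. hard-left, left, straight, right, hard-right (assumed)
    std::array<double, kNumActions> qValues{0.10, 0.40, 0.90, 0.20, 0.05};
    int bestAction = static_cast<int>(std::distance(
        qValues.begin(), std::max_element(qValues.begin(), qValues.end())));
    std::cout << "Q-learning picks discrete action index " << bestAction << '\n';

    // --- SAC-style continuous action: sample from a tanh-squashed Gaussian policy ---
    double mean = 0.3, stddev = 0.2;          // hypothetical policy-network outputs
    std::normal_distribution<double> gauss(mean, stddev);
    double rawSample  = gauss(rng);
    double maxAngular = 1.5;                  // rad/s, assumed actuator limit
    double angularVel = maxAngular * std::tanh(rawSample);  // squash into [-max, max]
    std::cout << "SAC samples continuous angular velocity " << angularVel << " rad/s\n";
    return 0;
}
```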

Also, regarding your question about goal point selection: the chosen goal must exist within the state space. To select an arbitrary goal point, you can build a grid over the x-y plane with an appropriate resolution and snap the goal to the nearest grid cell.
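As a rough sketch of that idea (again, not code from this repository), the snippet below snaps an arbitrary continuous (x, y) goal onto a discrete grid so it corresponds to a state index; the world bounds and `resolution` are assumed values you would replace with whatever your environment actually uses.

```cpp
// Illustrative sketch: map a continuous (x, y) goal to the nearest discrete grid state.
#include <cmath>
#include <iostream>

struct GridSpec {
    double xMin = -2.0, xMax = 2.0;   // assumed world bounds in metres
    double yMin = -2.0, yMax = 2.0;
    double resolution = 0.1;          // grid precision in metres
};

// Convert a continuous goal into the flattened index of its nearest grid cell.
int goalToStateIndex(double goalX, double goalY, const GridSpec& g) {
    int cols = static_cast<int>(std::round((g.xMax - g.xMin) / g.resolution)) + 1;
    int ix = static_cast<int>(std::round((goalX - g.xMin) / g.resolution));
    int iy = static_cast<int>(std::round((goalY - g.yMin) / g.resolution));
    return iy * cols + ix;            // flatten (ix, iy) into a single state id
}

int main() {
    GridSpec grid;
    double goalX = 1.23, goalY = -0.57;   // arbitrary desired goal
    std::cout << "Goal maps to state index "
              << goalToStateIndex(goalX, goalY, grid) << '\n';
    return 0;
}
```

With a finer `resolution` the snapped goal gets closer to the exact point you asked for, at the cost of a larger state space for the Q-table.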