Closed: nmtu4n closed this 7 months ago
Hi,
Thank you!
I'm not sure what exactly you mean by "only for distance to get to the goal" — can you explain a bit more? Do you want to reward only movements which get you closer to the goal position, or something else?
I want to reward based on yaw heading and on movements which get the robot closer to the goal position. I'll implement a reward algorithm like the one in the turtlebot3 deep Q-learning example.
Thinking about it more carefully, I can't train tabular Q-learning with lidar, distance, and yaw heading at the same time because it would create far too many states. I'd have to train with deep Q-learning to achieve both avoiding obstacles and reaching the goal. By the way, could I ask how you chose K alpha and K beta in the feedback control?
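A reward of the kind described above (progress toward the goal plus a yaw-heading term) might be sketched like this. All function and variable names and the gain values are illustrative assumptions, not taken from the project or the turtlebot3 example:

```python
import math

def goal_reward(robot_x, robot_y, robot_yaw, goal_x, goal_y,
                prev_distance, k_dist=1.0, k_head=0.5):
    """Sketch of a reward combining goal progress and heading error.

    Names and gains are illustrative; tune k_dist / k_head for your robot.
    """
    # Euclidean distance from the robot to the goal
    distance = math.hypot(goal_x - robot_x, goal_y - robot_y)

    # Positive when the last action moved the robot closer to the goal
    progress = prev_distance - distance

    # Heading error: angle between the robot's yaw and the direction
    # to the goal, wrapped to [-pi, pi]
    desired_yaw = math.atan2(goal_y - robot_y, goal_x - robot_x)
    heading_error = math.atan2(math.sin(desired_yaw - robot_yaw),
                               math.cos(desired_yaw - robot_yaw))

    # Reward progress, penalize pointing away from the goal
    reward = k_dist * progress - k_head * abs(heading_error)
    return reward, distance
```

The returned `distance` would be fed back in as `prev_distance` on the next step, so the progress term measures per-step improvement rather than absolute distance.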
Thank you for your reply. Hope you have a good day.
My approach is not deep Q-learning, it's basic Q-learning with a table. If you consider only yaw heading like that, you have to be careful: sometimes avoiding an obstacle on the way to the final position can steer you into another obstacle (a wall, for example), and then you will actually be further from the goal, not closer. K alpha and K beta are tuned manually. You can play with that script, change the values, and watch how the robot reaches the goal position from the initial one; alpha and beta are chosen so the path to the goal is smooth.
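For context, gains like K alpha and K beta usually appear in the classic polar-coordinate go-to-pose controller, where the linear velocity is proportional to the distance and the angular velocity mixes two angle errors. The sketch below assumes that standard formulation; the specific function names and gain values are illustrative, not from the repository's script:

```python
import math

def wrap(angle):
    # Wrap an angle to [-pi, pi]
    return math.atan2(math.sin(angle), math.cos(angle))

def feedback_control(robot_x, robot_y, robot_yaw,
                     goal_x, goal_y, goal_yaw,
                     k_rho=0.3, k_alpha=0.8, k_beta=-0.15):
    """One step of the classic polar-coordinate feedback controller.

    Gains are illustrative and tuned by hand; the usual stability
    conditions are k_rho > 0, k_beta < 0, k_alpha > k_rho.
    """
    dx = goal_x - robot_x
    dy = goal_y - robot_y

    # rho: distance to the goal
    # alpha: heading error toward the goal
    # beta: error between the goal heading and the approach direction
    rho = math.hypot(dx, dy)
    alpha = wrap(math.atan2(dy, dx) - robot_yaw)
    beta = wrap(goal_yaw - robot_yaw - alpha)

    v = k_rho * rho                          # linear velocity command
    omega = k_alpha * alpha + k_beta * beta  # angular velocity command
    return v, omega
```

Playing with `k_alpha` and `k_beta` in a loop like this shows the trade-off directly: larger `k_alpha` turns the robot toward the goal more aggressively, while `k_beta` shapes the final approach heading, which is why smoothness of the path is the usual tuning criterion.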
Hi, your project is amazing. I want to build on your model and then set rewards only for the distance to the goal. Do you think that would be possible?