OpenRobotLab / HIMLoco

Learning-based locomotion control from OpenRobotLab, including Hybrid Internal Model & H-Infinity Locomotion Control
https://junfeng-long.github.io/HIMLoco/

_reward_power_distribution() #5

Closed hdShang closed 2 months ago

hdShang commented 2 months ago

'LeggedRobot' object has no attribute '_reward_power_distribution'.

Junfeng-Long commented 2 months ago

Sorry, it is for a new work that computes the average power of the left and right halves of the robot over 2 seconds and uses the difference between them as a penalty. Just delete it.
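For readers who want to keep a reward like this rather than delete it, here is a minimal sketch of what such a penalty might look like. The class and attribute names are assumptions for illustration, not the authors' actual implementation: keep a sliding window of per-joint mechanical power, then penalize the difference between the averaged left-side and right-side power.

```python
import torch

class PowerDistributionPenalty:
    """Hypothetical sketch: penalize left/right power asymmetry over a window."""

    def __init__(self, num_envs, num_dof, window_len, left_ids, right_ids, device="cpu"):
        self.left_ids = left_ids        # DOF indices belonging to the left legs
        self.right_ids = right_ids      # DOF indices belonging to the right legs
        # Ring buffer of instantaneous power: (envs, window steps, DOFs)
        self.buf = torch.zeros(num_envs, window_len, num_dof, device=device)
        self.step = 0

    def update(self, torques, dof_vel):
        # Instantaneous mechanical power per joint: |tau * q_dot|
        self.buf[:, self.step % self.buf.shape[1]] = torch.abs(torques * dof_vel)
        self.step += 1

    def penalty(self):
        mean_power = self.buf.mean(dim=1)                 # average over the window
        left = mean_power[:, self.left_ids].sum(dim=1)    # total left-side power
        right = mean_power[:, self.right_ids].sum(dim=1)  # total right-side power
        return torch.abs(left - right)                    # asymmetry, scaled negatively in the reward
```

With a 2-second window at a 50 Hz control step, `window_len` would be 100; the window length and which DOFs count as "left" and "right" are robot-specific choices.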

hdShang commented 2 months ago

> Sorry, it is for a new work that computes the average power of the left and right halves of the robot over 2 seconds and uses the difference between them as a penalty. Just delete it.

Well, thank you! Looking forward to your new work!

hdShang commented 2 months ago

When I deleted the power_distribution item, I found that the reward scales of a1, go1, and aliengo were the same. So I'm wondering whether I need to modify the reward scales when using this code to train other (bigger or smaller) dogs.

Junfeng-Long commented 2 months ago

You may need to. At the very least, you should change clearance_height_target to match the leg length of your dogs; it specifies the target height of the feet with respect to the dog's body. You may also need to relax the limits rewards so the dog can start walking more easily in the early stage of training. Other reward scales may also need to be modified according to the situation. If you have questions during your training, please don't hesitate to share them with me; I will do my best to help.
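As a hedged sketch of the knobs this advice refers to, in the legged_gym-style config layout that HIMLoco follows (the field names under `scales` and the numeric values here are illustrative placeholders, not tuned numbers; check your own robot config for the exact names):

```python
class RewardsCfg:
    # Target swing-foot height relative to the base; rescale this to the
    # new robot's leg length (it is negative because the feet sit below the base).
    clearance_height_target = -0.20

    class scales:
        # Example scale names (assumptions): relaxing limit penalties early in
        # training can make it easier for a new robot to start walking.
        dof_pos_limits = -10.0
        torque_limits = -0.1
```

A common workflow is to start from the a1/go1 scales, shrink the limit penalties until the new robot reliably starts stepping, then restore them gradually as training stabilizes.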

hdShang commented 2 months ago

> You may need to. At the very least, you should change clearance_height_target to match the leg length of your dogs; it specifies the target height of the feet with respect to the dog's body. You may also need to relax the limits rewards so the dog can start walking more easily in the early stage of training. Other reward scales may also need to be modified according to the situation.

When I trained go1 and a1, I found that their maximum terrain level was about 4, and when I play the trained model, the robot can only traverse the second difficulty level of stairs. I reduced the scales of dof acc, joint power, action rate, and smoothness, but it didn't work. How should I modify the rewards so the robot can traverse more difficult terrain?

Junfeng-Long commented 2 months ago

> When I trained go1 and a1, I found that their maximum terrain level was about 4, and when I play the trained model, the robot can only traverse the second difficulty level of stairs. I reduced the scales of dof acc, joint power, action rate, and smoothness, but it didn't work. How should I modify the rewards so the robot can traverse more difficult terrain?

Please change max_curriculum to a lower value; increasing the proportion of your desired terrain may also help.
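A sketch of the two knobs suggested above, using the legged_gym-style field names HIMLoco inherits (the values are illustrative, not a recommendation; in legged_gym, `max_curriculum` caps how far the velocity-command curriculum can expand, and `terrain_proportions` weights how often each terrain type is generated):

```python
class CommandsCfg:
    curriculum = True
    max_curriculum = 1.0   # try lowering this so commanded velocities stay modest

class TerrainCfg:
    curriculum = True
    # Proportions of [smooth slope, rough slope, stairs up, stairs down, discrete];
    # shifting weight toward stairs makes the robot see them more often in training.
    terrain_proportions = [0.1, 0.1, 0.4, 0.3, 0.1]
```

Lower commanded velocities make terrain-level promotion (which depends on distance traveled relative to the command) easier to achieve, so the curriculum can advance to harder stairs.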