ChanganVR / RelationalGraphLearning

[IROS20] Relational graph learning for crowd navigation
132 stars 41 forks

Training different kinematics #5

Closed saaangooooo closed 3 years ago

saaangooooo commented 4 years ago

Hello, I tried to train with non-holonomic kinematics. I changed this line to something other than "holonomic" (the scripts only check whether the kinematics string equals "holonomic"). I then ran `python train.py --policy rgl` and got a trained model. However, when I tested the model with `python test.py --policy rgl --model_dir data/output --phase test --visualize --test_case 0`, the robot moved backwards and could not reach its goal.

What should I do? Do I need to change other parameters? Thank you.

ChanganVR commented 4 years ago

There are multiple reasons why that might not work.

  1. Have you trained the policy with the default kinematics? If that works, you could simply change the action space setting at test time and it should still work, since the value network is agnostic to the action space configuration.

  2. You might need to change other parameters if you want to make the movement rotation-constrained.

In summary, tune the parameters using a policy trained with holonomic kinematics, then train a non-holonomic model with the best set of parameters you find.
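For context, the two action spaces update the robot state differently. This is a minimal sketch of the convention used in CrowdNav-style simulators (function names here are illustrative, not the repo's actual code): a holonomic action sets the velocity directly in x and y, while a non-holonomic (unicycle) action is a (speed, rotation) pair applied along the robot's heading.

```python
import math

def step_holonomic(px, py, vx, vy, dt):
    # Holonomic: the action directly sets the velocity in x and y.
    return px + vx * dt, py + vy * dt

def step_unicycle(px, py, theta, v, r, dt):
    # Non-holonomic (unicycle): the action is (speed v, rotation r).
    # The heading is updated first, then the robot moves along it.
    theta = (theta + r) % (2 * math.pi)
    px = px + math.cos(theta) * v * dt
    py = py + math.sin(theta) * v * dt
    return px, py, theta
```

Because the unicycle can only move along its heading, a value network trained with one action space may pick actions that translate poorly to the other, which is why tuning with the holonomic policy first is suggested above.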

saaangooooo commented 4 years ago

Thank you @ChanganVR I'm going to try them.

Lyy369 commented 3 years ago

> Hello, I tried to train with non-holonomic kinematics. I changed this line to something other than "holonomic" (the scripts only check whether the kinematics string equals "holonomic"). I then ran `python train.py --policy rgl` and got a trained model. However, when I tested the model with `python test.py --policy rgl --model_dir data/output --phase test --visualize --test_case 0`, the robot moved backwards and could not reach its goal.
>
> What should I do? Do I need to change other parameters? Thank you.

I met the same problem. Have you solved the problem yet?

saaangooooo commented 3 years ago

@Lyy369 I tried some parameters but couldn't solve the problem, so I decided to train with "holonomic" kinematics and follow @ChanganVR's reply:

> you could simply change the action space setting at test time and it should work since value network is agnostic of action space configuration.

Unfortunately, that didn't work either. So I tried a velocity conversion trick like the one in NH-ORCA.

It does work, but it would be better to include the constraints during training, so I'm trying to do that.
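For anyone trying the same workaround: the idea behind an NH-ORCA-style conversion is to take the holonomic velocity the policy outputs and track it with unicycle commands. This is only a minimal sketch of that idea, not the actual NH-ORCA algorithm; the function name, the P-gain on heading error, and the limits are all made up for illustration.

```python
import math

def to_unicycle(vx, vy, theta, v_max, w_max):
    # Convert a desired holonomic velocity (vx, vy) into unicycle
    # commands (v, w): rotate toward the desired direction and scale
    # the forward speed down as the heading error grows.
    desired = math.atan2(vy, vx)
    err = (desired - theta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    w = max(-w_max, min(w_max, err))                  # simple P-control on heading
    v = min(v_max, math.hypot(vx, vy)) * max(0.0, math.cos(err))
    return v, w
```

With this, the robot stops moving forward when the desired direction is behind it and turns in place first, which avoids the backwards motion described above. The real NH-ORCA additionally accounts for tracking error inside the collision-avoidance constraints, which a post-hoc conversion like this does not.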