d1024choi / traj-pred-irl

Official implementation of "Regularizing neural networks for future trajectory prediction via IRL framework" published in IET CV
MIT License

About the component of inverse reinforcement learning #2

Closed momo1986 closed 4 years ago

momo1986 commented 4 years ago

Hello, I am very interested in your idea that inverse reinforcement learning is used for regularization.

Thanks for sharing.

Could you point out where the module for RL is?

I want to understand your idea better through the source code.

Thanks & Regards!

d1024choi commented 4 years ago


Thank you for your interest and sorry for my late response.

You can see in 'sdd_model.py' or 'crowd_model.py' that, during the encoding of the past trajectory and scene context, two reward values are calculated: one from the ground-truth past position and the other from the estimated position. These reward values are used to train both the RNN encoder (for trajectory prediction) and the reward function.
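To illustrate the idea, here is a minimal numpy sketch of that reward-based regularization. The function names (`reward`, `irl_regularizer`), the tiny tanh network, and the hinge form of the loss are all assumptions for illustration only; the actual implementation lives in 'sdd_model.py' and 'crowd_model.py'.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reward function r(s): a tiny tanh MLP over a 2-D position.
# The paper learns this jointly with the encoder; this stand-in is fixed.
W = rng.normal(size=(2, 8)) * 0.1
v = rng.normal(size=8) * 0.1

def reward(pos):
    """Scalar reward for a 2-D position (illustrative tanh MLP)."""
    return float(np.tanh(pos @ W) @ v)

def irl_regularizer(gt_pos, est_pos):
    """IRL-style regularization term (hinge form, an assumption here):
    penalize the model when the ground-truth position does not score
    at least a margin higher than the estimated position."""
    margin = 1.0
    return max(0.0, margin - (reward(gt_pos) - reward(est_pos)))

# One ground-truth past position vs. the encoder's estimate of it.
gt = np.array([1.0, 2.0])
est = np.array([1.3, 1.6])
loss = irl_regularizer(gt, est)
print(loss >= 0.0)
```

In the actual models this term is added to the prediction loss, so gradients flow into both the RNN encoder and the reward-function parameters, as described above.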