[Closed] Carlz182 closed this issue 3 years ago
I was following the example script for a custom loss function. I want to bootstrap my policy with a dataset from an algorithmic supervisor. However, the demonstrations are not perfect, so at some point I would like the influence of the imitation loss to decay to zero and let the policy loss take over. In the example, the imitation-loss coefficient is hard-coded to 10, but I would like it to change over time.
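To make the idea concrete, here is a minimal sketch (plain Python, not RLlib API; the names `imitation_weight`, `anneal_steps`, and `combined_loss` are hypothetical) of a linearly annealed imitation-loss coefficient that starts at 10 and decays to zero:

```python
# Illustrative sketch of an annealed imitation-loss coefficient.
# All names here are hypothetical, not part of RLlib's API.

def imitation_weight(step, initial=10.0, anneal_steps=100_000):
    """Linearly anneal the imitation-loss coefficient from `initial` to 0."""
    frac = min(step / anneal_steps, 1.0)
    return initial * (1.0 - frac)

def combined_loss(policy_loss, imitation_loss, step):
    # At step 0 the imitation term dominates (weight 10, as in the example);
    # after anneal_steps it vanishes and only the policy loss remains.
    return policy_loss + imitation_weight(step) * imitation_loss
```

The same schedule could of course be exponential or piecewise instead of linear.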
Unfortunately, I did not find a way to change the model's parameters from outside during training. What would be the suggested way to do something like this? I tried doing it in the train_result callback but did not succeed.
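The pattern I had in mind looks roughly like the following (a plain-Python sketch, not RLlib code; `FakePolicy`, `imitation_coeff`, and the hook signature are all hypothetical): store the coefficient in a mutable attribute on the policy and lower it from a per-iteration callback:

```python
# Hypothetical sketch of the callback pattern: keep the coefficient in a
# mutable slot on the policy object and decay it once per training iteration.

class FakePolicy:
    def __init__(self):
        self.imitation_coeff = 10.0  # initial weight, as in the example

def on_train_result(policy, result):
    # Decay the imitation coefficient by 10% each training iteration,
    # so the policy loss gradually takes over.
    policy.imitation_coeff *= 0.9

policy = FakePolicy()
for iteration in range(3):
    on_train_result(policy, {"training_iteration": iteration})
```

The open question is how to reach the live policy/model object from such a callback in RLlib.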
An alternative would be a behavioral-cloning function similar to the pre_train function in baselines, but I did not find any reference to that being implemented anywhere.
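For clarity, what I mean by pre-training is a supervised warm-start on the demonstrations before RL begins. A toy NumPy sketch (assumed setup, not an RLlib or baselines API): fit a linear policy to (state, action) pairs from the supervisor by gradient descent on the mean-squared error:

```python
import numpy as np

# Hypothetical behavioral-cloning pre-training sketch: fit policy weights
# to (state, action) demonstrations by minimizing MSE with gradient descent.

rng = np.random.default_rng(0)
states = rng.normal(size=(256, 4))        # demonstration observations
true_w = np.array([1.0, -2.0, 0.5, 0.0])  # the supervisor's (unknown) mapping
actions = states @ true_w                 # supervisor's demonstrated actions

w = np.zeros(4)                           # policy parameters to pre-train
lr = 0.1
for _ in range(200):
    pred = states @ w
    grad = states.T @ (pred - actions) / len(states)  # MSE gradient
    w -= lr * grad

# After pre-training, w should closely match the supervisor's mapping,
# giving the RL phase a sensible starting policy.
```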
I am using PPO for training.
RLlib version: 0.8.2, Python 3.7.6, Ubuntu 18.04