Closed mertalbaba closed 3 months ago
Hi! We just set up the docs explaining how to run the training and reproduce the results: https://loco-mujoco.readthedocs.io/en/latest/source/tutorials/imitation_learning.html
Let me know if you need further clarification!
Yep, I found it, thanks anyway. However, what I was asking about is the 'reproducible results' you mentioned, because no results are reported either in your paper or in the repo. Are there any results you have already obtained by training the IL methods?
Yes, all "perfect" datasets were acquired using the imitation learning algorithms with the parameterization provided in the examples. So by running the experiments in the examples, you should be able to get something as good as the "perfect" dataset.
Ok, do you have a benchmark for each imitation learning method separately? Or did you just use one to obtain the dataset?
As of now, we only provide a benchmark for GAIL and VAIL. We are working on providing results for the other methods available in the imitation learning library as well. I will keep this issue open until we provide the rest of the results.
Thanks! Where can we find the benchmark for GAIL and VAIL?
You have to run the launcher files for the imitation learning experiments explained here: https://github.com/robfiras/loco-mujoco/tree/master/examples/imitation_learning
These will create a logging directory for each environment, containing numpy files with the undiscounted return (Eval_R), the discounted return (Eval_J), and the episode length. TensorBoard files containing more information are also created if needed. Just run:
```
tensorboard --logdir /path/to/logs
```
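If you prefer to inspect the numpy logs directly rather than through TensorBoard, a minimal sketch like the following could work. Note that the exact file names (`Eval_R.npy`, `Eval_J.npy`) are assumptions based on the metric names above; check your actual logging directory for the real layout.

```python
import tempfile
from pathlib import Path

import numpy as np

def summarize_returns(log_dir: Path) -> dict:
    """Compute the mean of each evaluation metric found in log_dir.

    File names are assumed to match the metric names (e.g. Eval_R.npy);
    adjust to whatever the launcher actually writes.
    """
    summary = {}
    for metric in ("Eval_R", "Eval_J"):
        f = log_dir / f"{metric}.npy"
        if f.exists():
            summary[metric] = float(np.load(f).mean())
    return summary

# Demo with dummy data standing in for a real logging directory.
with tempfile.TemporaryDirectory() as d:
    log_dir = Path(d)
    np.save(log_dir / "Eval_R.npy", np.array([100.0, 120.0, 110.0]))
    np.save(log_dir / "Eval_J.npy", np.array([50.0, 55.0, 60.0]))
    print(summarize_returns(log_dir))  # {'Eval_R': 110.0, 'Eval_J': 55.0}
```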
Hi! How can I reach your baseline results for benchmarking?