Closed ConstantinosM closed 5 years ago
Hello! So this repository implements Hierarchical Actor-Critic (HAC), which is designed specifically for continuous state and action space environments. The other algorithm we introduced, Hierarchical Q-Learning (HierQ), is the discrete state and action space version of HAC and is the algorithm we used for the grid world tasks. I have attached the HierQ code for training agents with 1 (flat), 2, and 3 levels in a four rooms environment. To train a 3-level agent, for instance, move to the “HierQ_3_Levels” folder and enter the command “python3 initialize_HAC.py --retrain --mix”. After the agent has finished training, you can watch the trained agent by entering the command “python3 initialize_HAC.py --test --show”. At some point I will add this HierQ code to the repo.
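To illustrate the kind of discrete-space setting HierQ targets, here is a minimal, hypothetical sketch of 2-level tabular hierarchical Q-learning on a toy 1-D corridor. This is not the attached code or the paper's exact algorithm, just an assumed illustration: a high-level Q-table picks subgoal states, and a low-level Q-table learns to reach each subgoal, with both levels trained by standard Q-learning updates.

```python
import numpy as np

# Toy 2-level hierarchical Q-learning sketch (NOT the repo's HierQ code).
# A high-level table q_hi picks subgoal states; a low-level table q_lo
# learns primitive actions that reach the chosen subgoal.

rng = np.random.default_rng(0)

N = 8            # corridor states 0..7; start at 0, environment goal at N-1
H = 4            # low-level steps allowed per subgoal attempt
ACTIONS = (-1, +1)
ALPHA, GAMMA = 0.1, 0.9

# Pessimistic init so greedy selection prefers entries that were trained.
q_hi = np.full((N, N), -10.0)      # q_hi[state, subgoal]
q_lo = np.full((N, N, 2), -10.0)   # q_lo[subgoal, state, action]

def pick_subgoal(s, eps):
    """Epsilon-greedy subgoal choice; never propose the current state."""
    if rng.random() < eps:
        g = int(rng.integers(N))
        return g if g != s else (g + 1) % N
    vals = q_hi[s].copy()
    vals[s] = -np.inf
    return int(np.argmax(vals))

for ep in range(1000):
    s = 0
    eps = max(0.05, 1.0 - ep / 600)
    for _ in range(6):                         # high-level decisions per episode
        g = pick_subgoal(s, eps)
        s0 = s
        for _ in range(H):                     # low level pursues subgoal g
            if rng.random() < eps:
                a = int(rng.integers(2))
            else:
                a = int(np.argmax(q_lo[g, s]))
            s2 = min(N - 1, max(0, s + ACTIONS[a]))
            r = 0.0 if s2 == g else -1.0       # low-level reward: reach subgoal
            target = r + (0.0 if s2 == g else GAMMA * q_lo[g, s2].max())
            q_lo[g, s, a] += ALPHA * (target - q_lo[g, s, a])
            s = s2
            if s == g:
                break
        R = 0.0 if s == N - 1 else -1.0        # high-level reward: reach env goal
        target = R + (0.0 if s == N - 1 else GAMMA * q_hi[s].max())
        q_hi[s0, g] += ALPHA * (target - q_hi[s0, g])
        if s == N - 1:
            break
```

After training, a greedy hierarchical rollout (high level proposes a subgoal, low level acts greedily toward it) walks the corridor from state 0 to the goal state. All names and reward choices here are illustrative assumptions, not the repository's API.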
Thank you so much for the prompt response and for sharing the code!!!
Hi! Thank you for sharing your work! In the paper you mention that you tested HAC in a discrete gridworld environment. I assume that you do not need MuJoCo for this. Is the code you are sharing here compatible with discrete gridworlds as well?