davidhershey / feudal_networks

An implementation of FeUdal Networks for Hierarchical Reinforcement Learning, as published at https://arxiv.org/abs/1703.01161
MIT License

Transition Policy Gradients #10

Open · gyh75520 opened this issue 5 years ago

gyh75520 commented 5 years ago

From the paper: "... is in fact the proper form for the transition policy gradient arrived at in eqn. 10."

The code implements the Manager loss as `manager_loss = -tf.reduce_sum((self.r - cutoff_vf_manager) * dcos)`. Why not implement eqn. 10 instead?
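For context, the quoted line reads like the eqn. 7 update written as a loss: the Manager's advantage times the cosine similarity between the achieved latent-state change and the goal. A minimal sketch of that reading (assuming TensorFlow; the variable names here are illustrative, not the repo's, and `cutoff_vf_manager` presumably holds a value estimate excluded from the gradient):

```python
import tensorflow as tf

def manager_loss(returns, manager_value, state_diff, goals):
    """Eqn. 7 as a loss: -sum_t A_t^M * d_cos(s_{t+c} - s_t, g_t).

    returns       -- discounted returns R_t, shape [batch]
    manager_value -- Manager value estimates V_t^M, shape [batch]
    state_diff    -- latent-state changes s_{t+c} - s_t, shape [batch, d]
    goals         -- Manager goal vectors g_t, shape [batch, d]
    """
    # Cosine similarity d_cos between the achieved direction and the goal.
    dcos = tf.reduce_sum(
        tf.nn.l2_normalize(state_diff, axis=1) *
        tf.nn.l2_normalize(goals, axis=1),
        axis=1)
    # Advantage A_t^M = R_t - V_t^M; stop_gradient mirrors the "cutoff"
    # value, so the critic is trained only by its own value loss.
    advantage = returns - tf.stop_gradient(manager_value)
    # Minimizing the negative sum performs gradient ascent on eqn. 7.
    return -tf.reduce_sum(advantage * dcos)
```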

biggzlar commented 5 years ago

Because the simpler, heuristic form in eqn. 7 is in fact the proper form of the more complex and (probably) less robust eqn. 10: the paper argues that if the transition distribution is modeled as von Mises-Fisher around the goal direction, the gradient in eqn. 10 reduces to the update in eqn. 7. Eqn. 10 is the gradient of a policy over states, whereas eqn. 7 works with directions in the latent state space.
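For reference, here are the two updates side by side, as I read them from the paper (with $A^M_t = R_t - V^M_t(x_t;\theta)$ the Manager's advantage and $p(s_{t+c} \mid s_t, \theta)$ the induced distribution over end states):

```latex
% Eqn. 7: heuristic Manager update over state-space directions --
% advantage-weighted cosine similarity between the achieved latent
% state change and the goal vector.
\nabla g_t = A^M_t \, \nabla_\theta \, d_{\cos}\!\left(s_{t+c} - s_t,\; g_t(\theta)\right)

% Eqn. 10: transition policy gradient over states -- a standard
% policy gradient on the distribution of states reached after c steps.
\nabla_\theta \pi^{\mathrm{TP}}_t
  = \mathbb{E}\left[\left(R_t - V(s_t)\right) \nabla_\theta \log p(s_{t+c} \mid s_t, \theta)\right]
```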

Here's an intuition: we tell the agent to find a real-world address. Eqn. 10 suggests intermediate addresses to help the agent reach the final one, and the agent is rewarded only each time it actually arrives at one of the suggested addresses. Eqn. 7 instead suggests directions toward intermediate addresses, and the agent is rewarded as soon as it moves in that direction. So if the agent acts well, it gets rewarded all the time, instead of sparsely.