pfnet / pfrl

PFRL: a PyTorch-based deep reinforcement learning library
MIT License

Implement optimize_hooks in agents #50

Closed keisukefukuda closed 4 years ago

keisukefukuda commented 4 years ago

In some cases, it would be beneficial to have hooks in the trainer loop. PFRL already provides step_hooks in train_agent and global_step_hooks in train_agent_async, but neither runs in learners: the latter, global_step_hooks, is invoked in actor processes, and there is no hook mechanism for learners at all.

This PR adds optimize_hooks to the agents under the pfrl.agents package. Each agent accepts optimize_hooks, a list of callable objects, as a constructor parameter. In the learner loop, the hooks are invoked just after the optimizer.step() method is called.
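The mechanism can be sketched as follows. This is a dependency-free illustration of "hooks fire just after optimizer.step()", not the actual PFRL implementation: the hook call signature, the class names, and the stub optimizer are all illustrative assumptions.

```python
class CountingOptimizeHook:
    """Example hook: counts how many optimizer steps have run.

    The no-argument call signature is an assumption for this sketch;
    the real PFRL hooks may receive the agent or other context.
    """
    def __init__(self):
        self.calls = 0

    def __call__(self):
        self.calls += 1


class StubOptimizer:
    """Stand-in for a torch.optim.Optimizer, to keep the sketch self-contained."""
    def __init__(self):
        self.steps = 0

    def step(self):
        self.steps += 1


def learner_update(optimizer, optimize_hooks):
    # ... compute loss and backpropagate here ...
    optimizer.step()
    # As the PR describes, each hook is invoked just after optimizer.step().
    for hook in optimize_hooks:
        hook()


hook = CountingOptimizeHook()
opt = StubOptimizer()
for _ in range(3):
    learner_update(opt, [hook])
print(hook.calls)  # prints 3
```

A hook like this could, for example, schedule learning-rate decay or log learner-side statistics that actor-side global_step_hooks cannot observe.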