Closed: odats closed this issue 5 years ago.
Saving the neural network is not included in this example, but it is a fairly straightforward thing to do. To get started, you can check the sample from the book here:
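As a starting point, here is a minimal sketch of saving and restoring the weights with plain PyTorch; the toy network, the pong_dqn.pt filename, and the variable names are assumptions for illustration, not taken from the example:

```python
import torch
import torch.nn as nn

# Stand-in model; in the real example this would be the DQN network
# built for Pong, not this toy MLP.
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

MODEL_FILE = "pong_dqn.pt"  # hypothetical filename

# Save only the weights (state_dict), the usual PyTorch convention.
torch.save(net.state_dict(), MODEL_FILE)

# Restore: rebuild the same architecture, then load the saved weights.
restored = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
restored.load_state_dict(torch.load(MODEL_FILE, map_location="cpu"))
restored.eval()  # switch to inference mode before playing episodes
```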
Thank you for the support.
I want to train a DQN agent and run the Pong env to see how the agent plays.
After 05_new_wrappers.py completes successfully, I can only find the TensorBoard event log files: events.out.tfevents.1556915999.ip-172-31-42-166
Where can I find the dqn_speedup model, and how do I restore it?
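For reference, once the weights have been restored (see the snippet above), a greedy play loop to watch the agent could look roughly like the sketch below; `env`, `net`, and the pre-2021 gym API are assumptions here, not code from the repository:

```python
import numpy as np
import torch


def play_episode(env, net, device="cpu"):
    """Play one greedy episode and return the total reward.

    Assumes `env` is the wrapped Pong environment and `net` is the restored
    DQN (both from the chapter's helper code); uses the pre-2021 gym API.
    """
    state = env.reset()
    total_reward = 0.0
    while True:
        # env.render()  # uncomment to watch the game window
        state_v = torch.as_tensor(state, dtype=torch.float32).unsqueeze(0).to(device)
        with torch.no_grad():
            q_vals = net(state_v).cpu().numpy()[0]
        action = int(np.argmax(q_vals))  # greedy action: highest Q-value
        state, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            return total_reward
```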