Open eightreal opened 8 months ago
:octocat: cibot: Thank you for posting issue #2528. The person in charge will reply soon.
Another question: when I call the run interface and save the model, is the current training state (such as gradient information) also saved? Is it possible to continue training after the model is loaded later?
Hello! Thank you for your question and concern. Here are my answers: First, you can save the model after training, but saving gradient information is not supported. Second, yes, it is possible to continue training after the model is loaded.
You can checkpoint and continue the training process, but that is not based on saved gradients. You can do epoch-based checkpointing (which is what most of nntrainer's mobile applications do), but I'm not sure about finer-grained checkpointing.
OK, thanks for your reply. Another question: is there any method for model copying and Polyak updates?
For model copy, if there is no copy constructor for the model class and the default behavior does not do what you want, you may try `original.save()` and `cloned.load()`.
For the Polyak update, it appears that the DQN application (or a simple "reinforcement learning" app) has its own "custom" op. But I'm not too sure about this. I guess @jijoongmoon may answer this when he returns from his trip.
Hello, I checked the reinforcement learning app. You update the network by saving and loading a file, not by a Polyak update. Could you help check it? And if there is an implementation of the Polyak update, could you point out its path and code line?
Hello, dear contributors. I notice that the DQN application doesn't use the API .h file, and only a loss function is defined there. So if I want to develop a DQN method, I would like to ask you to confirm the following.
Or perhaps you have better advice.