Previously, if you did a RunOnce (even on random data) before a RunAndBackward, it was no longer a firstRun and you could send batches however you wished. That way you could avoid training some extra dnn. Now you cannot.
All dnns must have their paramBlobs initialized before solver->Train() can run for all of them (at minimum, RunOnce must be completed for each dnn for this to happen).
solver->Train() must run for all dnns because all dnns must have the same paramBlobs in each epoch.