Open yahuvi opened 7 years ago
Theano is very slow at compiling the computational graph for this model because the architecture is non-trivial. You can set the Theano flag optimizer=fast_compile to run it. The run time itself is relatively fast because both the model and the dataset are small.
THEANO_FLAGS=optimizer=fast_compile,device=gpu,floatX=float32 python nndial.py -config config/tracker.cfg -mode train
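If you run the command often, the same flags can be made persistent in a `~/.theanorc` file instead of prefixing every invocation with `THEANO_FLAGS` (a sketch of the standard Theano config-file form; the `device` and `floatX` values here just mirror the command above and should be adjusted to your setup):

```ini
[global]
optimizer = fast_compile
device = gpu
floatX = float32
```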
also:
including informable tracker loss ...
including informable tracker loss ...
including informable tracker loss ...
including requestable tracker loss ...
including requestable tracker loss ...
including requestable tracker loss ...
including requestable tracker loss ...
including requestable tracker loss ...
including requestable tracker loss ...
including OfferChange tracker loss ...
gradient w.r.t inftrk
gradient w.r.t reqtrk
I use CentOS 7.5 with a K40 GPU.
start work :
number of parameters : 1103292
number of training parameters : 1096842
start network training ...
Finishing 25 dialog in epoch 1
thanks to shawnwun
Found the example_run, sorry!
I ran into the same problem.
The program starts training after supplying THEANO_FLAGS="optimizer=fast_compile".
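Theano reads `THEANO_FLAGS` only once, when the `theano` module is first imported, so the variable has to be in the environment before the import happens. A minimal sketch of setting it from inside Python instead of on the command line (the actual `import theano` is left commented out, since it assumes Theano is installed):

```python
import os

# THEANO_FLAGS must be set before theano is imported, because Theano
# parses the variable at import time and ignores later changes to it.
os.environ["THEANO_FLAGS"] = "optimizer=fast_compile,floatX=float32"

# import theano  # uncomment when Theano is installed; it will pick up the flags
print(os.environ["THEANO_FLAGS"])
```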
I use the default config and run the tracker training on macOS: python nndial.py -config config/tracker.cfg -mode train
logs below:
Issue: the program blocks here and the log is no longer printed. Apple Activity Monitor shows CPU at 98% and memory at 15.56 GB.
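The hang with high CPU is most likely Theano's default graph optimizer spending a very long time compiling the non-trivial graph, as described at the top of the thread. A hedged workaround for a CPU-only macOS setup (the `device=cpu` and `floatX` values are assumptions, not from the original report):

```shell
# Use Theano's cheaper graph optimizer: compilation finishes much sooner,
# at the cost of slower run-time code.
export THEANO_FLAGS="optimizer=fast_compile,device=cpu,floatX=float32"
echo "$THEANO_FLAGS"

# then run the training as before:
# python nndial.py -config config/tracker.cfg -mode train
```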