truongnmt closed this issue 6 years ago
You might add a line
.call(lambda _, v: print(v[-1]), v=V('loss_history'))
before run(batch_size=BATCH_SIZE, ...). It will print the loss function value at each iteration.
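For readers following along, here is a minimal sketch of where that line sits in a batchflow/CardIO-style training pipeline. This is not the tutorial's exact code: `BATCH_SIZE`, the `dataset` object and the import path for `V` are assumptions, and the tutorial's preprocessing and training actions are omitted.

```python
# Minimal sketch only: BATCH_SIZE is a placeholder, `dataset` comes from the tutorial,
# and the import path varies between versions (e.g. `from cardio.dataset import Pipeline, V`).
from batchflow import Pipeline, V

BATCH_SIZE = 256   # placeholder value

train_template = (
    Pipeline()
    .init_variable('loss_history', init_on_each_run=list)      # one loss value appended per iteration
    # ... the tutorial's preprocessing, init_model and train_model actions go here ...
    .call(lambda _, v: print(v[-1] if v else 'no loss yet'),    # print the most recent loss value
          v=V('loss_history'))
)

train_pipeline = train_template << dataset                      # `dataset` is defined in the tutorial
train_pipeline.run(batch_size=BATCH_SIZE, n_epochs=1000)        # 1000 epochs, as in this issue
```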
Thanks a lot, it worked!!! And btw, it has just finished 1000 epochs 🔥 🔥 🔥
@truongnmt How long did it take you?
@emadahmed97 It took me about 8 or 9 hours, dude.
I'm following this tutorial for detecting atrial fibrillation, but running the
Training pipeline
takes a very long time. I'm using a Tesla K80 and left it running all night, more than 7 hours, and it's still going. In this block it runs for 1000 epochs:
Do you think something is not right here? Or does the framework have a way to indicate that it's running, for example by printing the number of the current epoch? And by the way, this is what I see in the terminal when I run it, FYI:
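On the progress question above, the same `.call` action suggested earlier in the thread can also print a running counter, so you can at least see that the pipeline is advancing. This is only a sketch under the same assumptions (`train_pipeline` and `BATCH_SIZE` are placeholders for the tutorial's objects), and it counts iterations (batches), not epochs:

```python
# Sketch only: reuse the .call action to print a running iteration counter.
# `train_pipeline` and BATCH_SIZE are placeholders for the tutorial's objects.
import itertools

step = itertools.count(1)

(train_pipeline
    .call(lambda _: print('iteration', next(step)))   # the batch is passed first and ignored here
    .run(batch_size=BATCH_SIZE, n_epochs=1000))
```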