Closed rohanchn closed 1 year ago
I had to rewrite the progress bar code completely because the Lightning internals it hooked into changed considerably, and it was fairly deep in there. I'll see if I can find an easy way to keep each epoch's line from being overwritten.
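In the meantime, a minimal workaround sketch, assuming the custom bar can be swapped for Lightning's stock `RichProgressBar` (whose `leave=True` flag keeps each epoch's finished bar on screen instead of overwriting it):

```python
# Sketch only: assumes training goes through a pytorch_lightning.Trainer
# and that the custom progress bar callback can be replaced.
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import RichProgressBar

trainer = Trainer(
    # leave=True persists one finished bar per epoch instead of
    # redrawing over the previous epoch's line.
    callbacks=[RichProgressBar(leave=True)],
)
```

This doesn't restore the exact old layout, but it preserves the per-epoch history that makes comparing epochs at a glance possible.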
It's particularly useful for quickly comparing models in an ongoing training session, but not absolutely critical. I'm also not sure whether everyone finds it useful, so you may want to let it be if it's too much work.
Just want to thank you for your amazing work!
Has the training log output that looked like the snippet below been discontinued in 4.3.9?

```
stage 6/∞ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23621/23621 0:00:00 0:13:10 val_accuracy: 0.93735 early_stopping: 0/10 0.93735
stage 7/∞ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23621/23621 0:00:00 0:13:11 val_accuracy: 0.93994 early_stopping: 0/10 0.93994
stage 8/∞ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23621/23621 0:00:00 0:13:14 val_accuracy: 0.94252 early_stopping: 0/10 0.94252
```
I'm training a recognition model, and each new epoch overwrites the output of the previous epoch. It works perfectly well in `-v` mode.
Like this:
```
Total params: 4.0 M
Total estimated model params size (MB): 16
stage 1/∞ ━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 712/25483 0:00:20 • 0:11:35 35.65it/s v_num: 0 val_accuracy: 0.74 early_stopping: 0/10 0.74048
```