jankrepl / deepdow

Portfolio optimization with deep learning.
https://deepdow.readthedocs.io
Apache License 2.0

tensorboard plots unwanted point after training ends #96

Closed turmeric-blend closed 3 years ago

turmeric-blend commented 3 years ago

Hi, I am running a setup like this:

run = Run(network,
          loss,
          dataloader_train,
          val_dataloaders={'train': dataloader_train,
                           'test': dataloader_test},
          optimizer=torch.optim.Adam(network.parameters(), amsgrad=True),
          callbacks=[EarlyStoppingCallback(dataloader_name='test', 
                                           metric_name='loss',
                                           patience=15), 
                     ModelCheckpointCallback(folder_path='saved_model/run_1', 
                                             dataloader_name='test', 
                                             metric_name='loss'), 
                     TensorBoardCallback(log_dir='runs/run_1', 
                                         log_benchmarks=True)],
          device=device)

history = run.launch(n_epoch)

and it plots the training/test loss in tensorboard as expected.

However, when training ends, it seems to generate an unwanted dummy/default folder `_2020-11-23_10_16_52/1606097895.3641145` and plots it to TensorBoard as an extra run with a single step (as if `log_dir` had not been given a path, I think), like so (in blue):

[Screenshot from 2020-11-23 10-29-54: TensorBoard showing the extra single-step run in blue alongside the expected train/test loss curves]

jankrepl commented 3 years ago

Nice find again :)

Yes, I think I remember this issue. I believe it has to do with hyperparameter logging. I will fix it ASAP :)

I think a quick fix is to comment out the following line:

https://github.com/jankrepl/deepdow/blob/cab9cac9d9212dd839951f65a9c0b49ca961eec7/deepdow/callbacks.py#L626
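For context, the extra single-step run is consistent with the standard behaviour of PyTorch's `SummaryWriter.add_hparams`, which (at least in the torch versions of that era) writes the hyperparameter metrics into a timestamp-named subdirectory of `log_dir`, so TensorBoard picks it up as a separate run. A minimal sketch, independent of deepdow and assuming only the `torch.utils.tensorboard` API:

```python
# Minimal sketch (not deepdow code): demonstrates that add_hparams
# creates an extra timestamp-named subdirectory under log_dir,
# which TensorBoard then displays as a separate one-step run.
import os
import tempfile

from torch.utils.tensorboard import SummaryWriter

log_dir = tempfile.mkdtemp()
writer = SummaryWriter(log_dir=log_dir)

# Normal scalar logging writes event files directly into log_dir.
writer.add_scalar("loss", 0.5, global_step=0)

# add_hparams spawns a nested writer in a subdirectory
# (named after the current timestamp by default).
writer.add_hparams({"lr": 1e-3}, {"hparam/loss": 0.5})
writer.close()

# The subdirectory created by add_hparams is what shows up
# as the unwanted extra run in TensorBoard.
subdirs = [d for d in os.listdir(log_dir)
           if os.path.isdir(os.path.join(log_dir, d))]
print(subdirs)
```

So commenting out the `add_hparams` call in the callback suppresses the extra run, at the cost of losing the hyperparameter view in TensorBoard.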