Lightning-AI / pytorch-lightning

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0
28.26k stars · 3.38k forks

Test results not logged to tensorboard, since 0.7.3, this worked in 0.7.1 #1447

Closed WSzP closed 4 years ago

WSzP commented 4 years ago

🐛 Bug

Test results are not logged to TensorBoard. With the exact same code, version 0.7.1 logged them flawlessly. Validation and train results are still logged correctly with the same code, so I assume the issue is specific to the test step.

To Reproduce

Run the test() step with a model that logs to TensorBoard: logger = TensorBoardLogger(LOG_DIR, name=NAME)

Code sample

def validation_step(self, val_batch, batch_idx):
    [...]
    return {'val_loss': loss}

def validation_epoch_end(self, outputs):
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    tensorboard_logs = {'val_loss': avg_loss}
    return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}  # this works!

def test_step(self, test_batch, batch_idx):
    [...]
    return {'test_loss': loss}

def test_epoch_end(self, outputs):
    avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
    tensorboard_logs = {'MSE': avg_loss}
    print(f"Test Mean Squared Error (MSE): {avg_loss}")  # this works!
    return {'avg_test_loss': avg_loss, 'log': tensorboard_logs}  # the issue might be here

Expected behavior

The expected behavior is for tensorboard_logs to carry the MSE into TensorBoard, but when I open the logs they contain only val_loss and train_loss, not MSE. The exact same code worked in 0.7.1, so I believe some change in 0.7.3 introduced this bug. The print statement outputs the correct value, so I assume the problem is in how the returned 'log': tensorboard_logs dict is handled.
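For context, the 'log'-key convention the snippets above rely on can be sketched with a toy dispatcher. This is a minimal illustration of the expected routing, not Lightning's actual code; the function name handle_epoch_end_output and the recording "logger" are hypothetical.

```python
# Toy sketch: how an epoch-end result dict's 'log' entry is expected
# to reach the logger. Names are illustrative, not Lightning internals.

def handle_epoch_end_output(result, log_metrics):
    """Extract the 'log' sub-dict and hand it to the logger callback;
    return the remaining (non-log) entries."""
    metrics = result.get('log', {})
    if metrics:
        log_metrics(metrics)
    return {k: v for k, v in result.items() if k != 'log'}

# Usage: a list stands in for the logger so we can see what got routed.
recorded = []
result = {'avg_test_loss': 0.42, 'log': {'MSE': 0.42}}
rest = handle_epoch_end_output(result, recorded.append)
# recorded == [{'MSE': 0.42}]; rest == {'avg_test_loss': 0.42}
```

Under this model, the bug would mean the test-time 'log' dict is extracted but never actually written out by the logger backend.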

Environment

github-actions[bot] commented 4 years ago

Hi! Thanks for your contribution, great first issue!

williamFalcon commented 4 years ago

ummm. i thought we fixed this in 0.7.3. can you post a colab to reproduce?

WSzP commented 4 years ago

ummm. i thought we fixed this in 0.7.3. can you post a colab to reproduce?

Thank you so much for the quick reply. https://colab.research.google.com/drive/1bexbN61LpWVZ106glFhAVF7Vz1jXQr1L Hopefully this works. (I'm using Google Colab for the first time; I'm more of a localhost-first, then deploy-to-AWS/Azure kind of guy.)

Borda commented 4 years ago

@WSzP we probably also need the dataset...

FileNotFoundError                         Traceback (most recent call last)
<ipython-input-9-0af11722af78> in <module>()
     20                      callbacks=[TestingCallbacks()]
     21                      )                
---> 22 trainer.fit(model)

3 frames
/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
    426         own_fid = False
    427     else:
--> 428         fid = open(os_fspath(file), "rb")
    429         own_fid = True
    430 

FileNotFoundError: [Errno 2] No such file or directory: '/content/uxm_train.npz'

williamFalcon commented 4 years ago

you can just use fake data generators with the right dimensions. this is just about logging anyhow

WSzP commented 4 years ago

you can just use fake data generators with the right dimensions. this is just about logging anyhow

Ok, I just changed the code to generate a random sparse matrix. Thanks for the idea.
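A minimal way to do that, assuming a dense NumPy stand-in is acceptable for the repro (the shape, density, and function name below are placeholders, not the dimensions of the original uxm_train.npz):

```python
import numpy as np

def fake_sparse_matrix(rows=64, cols=32, density=0.1, seed=0):
    """Generate a random matrix with roughly `density` nonzero entries,
    as a stand-in for the real training data (placeholder dimensions)."""
    rng = np.random.default_rng(seed)
    values = rng.standard_normal((rows, cols)).astype(np.float32)
    mask = rng.random((rows, cols)) < density  # keep ~10% of entries
    return values * mask

X = fake_sparse_matrix()
# X has the requested shape; the vast majority of entries are exactly zero.
```

Since the issue is purely about logging, any data with compatible shapes should reproduce it.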

WSzP commented 4 years ago

When I run it, I see the test score on the board...

I only see train_loss and val_loss, but not the test score.
[screenshot "Clipboard01": TensorBoard scalars showing only train_loss and val_loss]

Borda commented 4 years ago

I think I see the problem: it comes with the introduction of agg_and_log_metrics for the logger... https://github.com/PyTorchLightning/pytorch-lightning/blob/3f1e4b953f84ecdac7dada0c6b57d908efc9c3d3/pytorch_lightning/trainer/logging.py#L74

In this case it is called and the metrics are saved to an accumulator until either another step arrives or the logger is finalized, which is what triggers the flush of results... https://github.com/PyTorchLightning/pytorch-lightning/blob/3f1e4b953f84ecdac7dada0c6b57d908efc9c3d3/pytorch_lightning/loggers/base.py#L232-L237

The solution is to replicate the same flush in logger.save().
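The buffering behavior described above can be sketched with a toy aggregating logger. MiniAggLogger below is illustrative only, not the real Lightning logger base class; it just shows why metrics logged at the final step (the test results) vanish unless save() also flushes the accumulator.

```python
class MiniAggLogger:
    """Toy logger that buffers metrics per step and only flushes the
    buffer when a newer step arrives or when save()/finalize is called,
    mimicking the accumulator behavior described above (illustrative)."""

    def __init__(self):
        self._buffer_step = None
        self._buffer = {}
        self.written = []  # what actually reached "TensorBoard"

    def agg_and_log_metrics(self, metrics, step):
        if self._buffer_step is not None and step != self._buffer_step:
            self._flush()  # a new step flushes the previous one
        self._buffer_step = step
        self._buffer.update(metrics)

    def _flush(self):
        if self._buffer:
            self.written.append((self._buffer_step, dict(self._buffer)))
            self._buffer = {}

    def save(self):
        # The fix: flush pending metrics here too, so metrics logged at
        # the very last step (e.g. test results) are not silently dropped.
        self._flush()

logger = MiniAggLogger()
logger.agg_and_log_metrics({'val_loss': 0.5}, step=0)
logger.agg_and_log_metrics({'MSE': 0.3}, step=1)  # last step: stays buffered
logger.save()  # without this flush, MSE would never be written
```

With a save() that does not flush, `written` would end at step 0 and the test MSE would be lost, which matches the reported symptom.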

Borda commented 4 years ago

@WSzP pls try this fix

! pip install https://github.com/PyTorchLightning/pytorch-lightning/archive/bugfix/flush-logger.zip -U

WSzP commented 4 years ago

It works like a charm. Thank you so much @Borda. Cheers!

Borda commented 4 years ago

Let's keep it open till the fix is merged to master...