Lightning-AI / pytorch-lightning

Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0

RichProgressBar doesn't display progress bar when using Comet logger. #11043

Closed: ashleve closed this issue 1 year ago

ashleve commented 2 years ago

🐛 Bug

RichProgressBar doesn't display a progress bar when using the Comet logger. I verified that it works correctly with the TensorBoard and wandb loggers.

To Reproduce

import comet_ml
import os

import torch
from pytorch_lightning import LightningModule, Trainer
from torch.utils.data import DataLoader, Dataset
from pytorch_lightning.loggers import CometLogger
from pytorch_lightning.callbacks import RichProgressBar

class RandomDataset(Dataset):
    def __init__(self, size: int, length: int):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def loss(self, batch, prediction):
        # An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
        return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))

    def step(self, x):
        x = self(x)
        out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
        return out

    def training_step(self, batch, batch_idx):
        output = self(batch)
        loss = self.loss(batch, output)
        return {"loss": loss}

    def training_step_end(self, training_step_outputs):
        return training_step_outputs

    def training_epoch_end(self, outputs) -> None:
        torch.stack([x["loss"] for x in outputs]).mean()

    def validation_step(self, batch, batch_idx):
        output = self(batch)
        loss = self.loss(batch, output)
        return {"x": loss}

    def validation_epoch_end(self, outputs) -> None:
        torch.stack([x["x"] for x in outputs]).mean()

    def test_step(self, batch, batch_idx):
        output = self(batch)
        loss = self.loss(batch, output)
        return {"y": loss}

    def test_epoch_end(self, outputs) -> None:
        torch.stack([x["y"] for x in outputs]).mean()

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
        lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
        return [optimizer], [lr_scheduler]

    def train_dataloader(self):
        return DataLoader(RandomDataset(32, 64))

    def val_dataloader(self):
        return DataLoader(RandomDataset(32, 64))

    def test_dataloader(self):
        return DataLoader(RandomDataset(32, 64))

    def predict_dataloader(self):
        return DataLoader(RandomDataset(32, 64))

model = BoringModel()

# Requires a valid Comet API key in the COMET_API_TOKEN environment variable.
logger = CometLogger(api_key=os.environ.get("COMET_API_TOKEN"))

trainer = Trainer(logger=logger, max_epochs=100, callbacks=[RichProgressBar()])
# Same setup without RichProgressBar, for comparison:
# trainer = Trainer(logger=logger, max_epochs=100)

trainer.fit(model=model)

Environment
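
The Environment section was left empty; a minimal sketch like the one below (using importlib.metadata, available in Python 3.8+) can be used to capture the versions of the packages involved in the repro:

import platform
from importlib.metadata import version

# Print the Python version and the installed versions of the packages
# used in the reproduction script above.
print("Python:", platform.python_version())
for pkg in ("torch", "pytorch-lightning", "comet-ml", "rich"):
    print(f"{pkg}: {version(pkg)}")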

cc @kaushikb11 @rohitgr7 @SeanNaren

stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!

JackLin-Authme commented 2 years ago

Is there any update on this bug?

ItamarKanter commented 2 years ago

Any update on this issue? I'm experiencing the same problem when using the Comet logger with RichProgressBar.

awaelchli commented 1 year ago

@ItamarKanter @JackLin-Authme I just tried this and the rich progress bar displays fine for me. It's possible that I'm using a newer version of either rich or comet in which the problem has since been fixed. Do you still have a record of which version(s) you were using?

I'm closing the issue for now, but if you run into anything else related to this we can continue the investigation.

Pedrexus commented 1 year ago

I still experience the issue. To add more information: the progress bar DOES show, but only after it has completed. Moreover, any rich.print calls show no color, and neither does the progress bar itself. The only solution I found is to stop using the Comet logger.

Package versions: pytorch-lightning 1.9.0, comet-ml 3.32.0, rich 13.3.1
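
For anyone else hitting this: the symptoms (the bar only rendering after it finishes, and no color from rich.print) look like rich deciding that stdout is not a terminal once comet_ml wraps the output streams. A possible workaround, untested here and assuming a pytorch-lightning version whose RichProgressBar accepts a console_kwargs argument (forwarded to rich.console.Console), is to force terminal rendering:

import os
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import RichProgressBar
from pytorch_lightning.loggers import CometLogger

# Possible workaround (assumption, not verified against this setup): force rich
# to render as if attached to a real TTY, even if comet_ml has wrapped stdout.
logger = CometLogger(api_key=os.environ.get("COMET_API_TOKEN"))
progress_bar = RichProgressBar(console_kwargs={"force_terminal": True})
trainer = Trainer(logger=logger, max_epochs=100, callbacks=[progress_bar])

If console_kwargs is not available in your version, the only workaround reported in this thread is to switch away from the Comet logger.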