gau-nernst opened this issue 2 years ago
We tried to provide image logging and similar features in a consistent way in the past, but it never worked out well. See https://github.com/Lightning-AI/lightning/issues/11837 and https://github.com/Lightning-AI/lightning/issues/12183. The issue is that the APIs for logging images are so vastly different across loggers that the user needs to change their code anyway (specifically, the input format of `log_image`).
If a unified image logging API is not possible or does not work well, and the user is expected to write custom code for each logger type, then having `.log_image()` for only the Wandb logger (I notice that `.log_image()` for Neptune was removed) is unfair to other loggers and provides an inconsistent user experience.
Currently, `.log_image()` for the Wandb logger is a wrapper that creates `wandb.Image` objects and logs the images. I think it would be quite straightforward to implement similar functionality for the Tensorboard logger.
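For context, the existing Wandb usage looks roughly like this (a sketch; the project name is made up, and as far as I can tell the signature is `log_image(key, images, step=None, **kwargs)`):

```python
import torch
from pytorch_lightning.loggers import WandbLogger

logger = WandbLogger(project="demo")  # hypothetical project name

# Internally, WandbLogger iterates the list, wraps each entry in a
# wandb.Image, and logs them under the given key.
images = [torch.rand(3, 64, 64), torch.rand(3, 64, 64)]
logger.log_image(key="samples", images=images, step=0)
```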
Regarding the input format, Wandb's `.log_image()` expects a list of images, as `torch.Tensor`, `np.ndarray`, or `PIL.Image`. PyTorch's Tensorboard `SummaryWriter` accepts `torch.Tensor` and `np.ndarray`, so the supported image formats are quite similar (I suppose the `PIL.Image` -> `np.ndarray` conversion is pretty trivial).
If we keep the `List[Image]` signature, we can iterate over the list and call `.add_image()` from PyTorch's Tensorboard (which is identical to what the Wandb logger does: iterate the list and convert each entry to a `wandb.Image`). We could perhaps also support `PIL.Image` by internally converting it to `np.ndarray`.
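A minimal sketch of what that could look like, assuming a subclass of `TensorBoardLogger` (the class name, the per-image `key_i` tag scheme, and the PIL handling are my assumptions, not existing Lightning API):

```python
from typing import Any, List, Optional

import numpy as np
import torch
from pytorch_lightning.loggers import TensorBoardLogger


class TensorBoardLoggerWithImages(TensorBoardLogger):
    """Hypothetical subclass adding a Wandb-style log_image()."""

    def log_image(self, key: str, images: List[Any], step: Optional[int] = None) -> None:
        for i, img in enumerate(images):
            dataformats = "CHW"  # SummaryWriter's default layout for a single image
            if not isinstance(img, (torch.Tensor, np.ndarray)):
                # Assume a PIL.Image: conversion to np.ndarray is trivial,
                # but the result is channels-last (HWC).
                img = np.array(img)
                dataformats = "HWC"
            # One tag per image, mirroring WandbLogger's iteration over the list.
            self.experiment.add_image(f"{key}_{i}", img, global_step=step, dataformats=dataformats)
```

The per-image tags are just one possible convention; TensorBoard has no native notion of a list of images under a single tag.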
There is also `.add_images()` from PyTorch's Tensorboard, which takes a batched tensor and logs a grid of images. This is a bit different from logging each image individually.
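For completeness, the batched variant is existing PyTorch API (only the log directory here is made up):

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/demo")  # hypothetical log dir

# add_images expects one batched NCHW tensor and renders a single image grid.
batch = torch.rand(16, 3, 64, 64)
writer.add_images("samples_grid", batch, global_step=0)
writer.close()
```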
Last few points (reiterating some from above):

1. This proposal asks for `.log_image()` support for the Tensorboard logger only, not for all loggers. I do understand that adding the functionality for all loggers is a much bigger task.
2. The "users need custom code anyway" argument also applies to the existing `.log_image()` for the Wandb logger: by the same argument, users could write custom code to log images to Wandb without `.log_image()` support from PyTorch Lightning.
3. Regarding a unified `.log_image()` API, I don't think it is a big issue (unless I am missing important details). The function signature is similar across loggers (key, images), with perhaps optional kwargs passed on to the respective backends internally. For image formats, Lightning can either (1) support only a fixed set of formats, or (2) handle image format conversion internally, or something in between (trust the user to pass the correct image format for their logger). However, the loggers that support logging images already accept quite a lot of formats. A sketch of such a shared signature follows below.
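To make the last point concrete, here is a sketch of what a shared signature could look like as an abstract interface (the class name is hypothetical; this is not proposing actual Lightning internals):

```python
from abc import ABC, abstractmethod
from typing import Any, List, Optional


class ImageLoggingInterface(ABC):
    """Hypothetical shared interface; each concrete logger supplies the body."""

    @abstractmethod
    def log_image(self, key: str, images: List[Any], step: Optional[int] = None, **kwargs: Any) -> None:
        """Log a list of images under `key`.

        `images` entries may be torch.Tensor, np.ndarray, or PIL.Image;
        extra kwargs are forwarded to the backend. Whether conversion
        happens here or in user code is options (1)/(2) above.
        """
```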
🚀 Feature
Currently, the Wandb and Neptune loggers have a `.log_image()` method. It would be great to have the same method for Tensorboard as well, so that we don't need to modify the code when changing loggers.

Motivation
For quick experimentation, I usually start with the Tensorboard logger. Once the code is more stable and I want to do a long training run, I switch to Wandb so that I can monitor training progress remotely. Having the same method simplifies the user code.
Pitch
Currently, the Tensorboard logger can log images by directly calling the `.experiment.add_image()` method. The new `.log_image()` method would simply call this method, as sketched below.
Alternatives

To handle the different ways of logging images depending on the logger, user code needs to either (1) detect which logger is being used and call the corresponding method to log images (sketched below), or (2) override the Tensorboard logger with the proposed change. Neither option is ideal.
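For illustration, alternative (1) is roughly the following workaround in user code today (the helper name is made up):

```python
from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger


def log_images_any(logger, key, images, step=None):
    """Alternative (1): user code branches on the concrete logger type."""
    if isinstance(logger, WandbLogger):
        logger.log_image(key=key, images=images, step=step)
    elif isinstance(logger, TensorBoardLogger):
        for i, img in enumerate(images):
            logger.experiment.add_image(f"{key}_{i}", img, global_step=step)
    else:
        raise TypeError(f"Image logging not implemented for {type(logger).__name__}")
```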
Additional context
cc @borda @awaelchli @Blaizzy