robmarkcole opened this issue 4 months ago
Another option would be to remove any figure logging from TorchGeo entirely. If users want it, they can subclass the trainer and add it. This feature has historically been a nightmare to maintain/test and has been riddled with bugs. The proposal here is to make it even more complicated.
More positively, can you start a discussion with the Lightning folks to see if there is an easier way to do this automatically? Or a way that can unify the interface somehow? This just seems like too much work to support out of the box.
Re: Lightning, I see a lot of work was done in https://github.com/Lightning-AI/pytorch-lightning/pull/6227, but ultimately they decided it was too complex to maintain and closed the PR.
Perhaps, as you say, logging figures should be left to users who subclass the trainers. I think basic logging of dataset samples is a necessity, however.
I've actually been moving away from adding the plotting directly inside the LightningModule and using a custom Callback instead. There are methods like `on_validation_batch_end` which have access to the current `batch`, module, `batch_idx`, etc., and can essentially replicate what the current figure logging does.
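As a rough sketch (the callback name and the datamodule's `val_dataset` attribute are my own assumptions, not existing TorchGeo API):

```python
import matplotlib.pyplot as plt
import torch
from lightning.pytorch import Callback


class FigureLoggingCallback(Callback):
    """Sketch: log a figure for the first validation batch of each epoch.

    Assumes a TensorBoard logger and a datamodule whose dataset exposes a
    ``plot(sample)`` method, as TorchGeo datasets do. The ``val_dataset``
    attribute is an assumption about the datamodule.
    """

    def on_validation_batch_end(
        self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0
    ):
        if batch_idx != 0:
            return
        # Take the first sample of the batch; skip non-tensor entries (e.g. CRS)
        sample = {k: v[0].cpu() for k, v in batch.items() if isinstance(v, torch.Tensor)}
        fig = trainer.datamodule.val_dataset.plot(sample)
        trainer.logger.experiment.add_figure(
            "val/sample", fig, global_step=trainer.global_step
        )
        plt.close(fig)
```

The callback then just gets passed to the trainer, e.g. `Trainer(callbacks=[FigureLoggingCallback()])`, without touching the task itself.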
I agree that using callbacks would be the most intuitive approach. Do we think it's best to build them into TorchGeo, or just provide a 'best practice' guide?
Probably better to cover this in a tutorial, or just reference the Lightning docs, so we don't have to maintain them.
I've been reading about Raster Vision; perhaps we can use its plotting functionality with Lightning callbacks. A simple tutorial could suffice: https://docs.rastervision.io/en/stable/usage/tutorials/visualize_data_samples.html
Is there something wrong with TorchGeo's plotting functionality (i.e., `dataset.plot(sample)`)? It's certainly not perfect, but I don't know if Raster Vision's is any better.
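For reference, typical usage looks something like this (EuroSAT is just an arbitrary example; any TorchGeo dataset with a `plot` method works the same way):

```python
import matplotlib.pyplot as plt
from torchgeo.datasets import EuroSAT

ds = EuroSAT(root="data", download=True)
sample = ds[0]         # dict with "image" and "label" tensors
fig = ds.plot(sample)  # returns a matplotlib Figure
plt.show()
```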
Nothing wrong; I thought at some point there was a comment or issue about externalising the plotting.
I'm for this (even getting rid of the datamodule plots) on the condition that we write a tutorial that shows how to subclass and do this with TensorBoard. TorchGeo doesn't have to (and shouldn't try to) do everything by itself; the more we can offload to the user with tutorials, the better! This is definitely a good example, as I've had to override all of `val_step` specifically to implement MLflow logging for Azure ML in my own projects.
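Roughly what that override looks like, as a sketch only (`MLflowSegmentationTask` and the datamodule's `val_dataset` attribute are assumptions about my setup, not TorchGeo API):

```python
import matplotlib.pyplot as plt
import torch
from torchgeo.trainers import SemanticSegmentationTask


class MLflowSegmentationTask(SemanticSegmentationTask):
    """Sketch: route validation figures to MLflow instead of TensorBoard.

    Assumes the Trainer was built with Lightning's MLFlowLogger, whose
    ``experiment`` attribute is an ``MlflowClient``.
    """

    def validation_step(self, batch, batch_idx, dataloader_idx=0):
        # Parent computes loss/metrics (and attempts its own figure logging)
        super().validation_step(batch, batch_idx)
        if batch_idx == 0:
            sample = {
                k: v[0].cpu() for k, v in batch.items() if isinstance(v, torch.Tensor)
            }
            fig = self.trainer.datamodule.val_dataset.plot(sample)
            self.logger.experiment.log_figure(
                self.logger.run_id, fig, f"val_sample_epoch{self.current_epoch}.png"
            )
            plt.close(fig)
```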
I'm all for tutorials. Good to hear you are also an MLflow user, as that is also in my stack and I've made a bunch of customisations to support it; it would be good to share best practice.
Summary
Currently, logging of figures (e.g. in the Classification and Segmentation trainers) assumes the TensorBoard logger, which has the method `add_figure`. However, other loggers have similar methods with different names and are currently not supported; e.g. MLflow uses `log_figure`. This FR is to support these loggers and document which are supported.
Rationale
Many orgs use a logging service such as MLflow or wandb, and it would be nice for these to just work.
Implementation
We would probably want to abstract this for use by all trainers, but adding this method is one solution.
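A rough sketch of what such an abstraction could look like (the helper name and the `hasattr`-based dispatch are illustrative only, not a settled design):

```python
from matplotlib.figure import Figure


def log_figure_to_logger(logger, fig: Figure, tag: str, step: int) -> None:
    """Hypothetical helper: dispatch a matplotlib figure to whichever
    figure-logging method the attached logger's experiment object exposes."""
    experiment = logger.experiment
    if hasattr(experiment, "add_figure"):    # TensorBoard SummaryWriter
        experiment.add_figure(tag, fig, global_step=step)
    elif hasattr(experiment, "log_figure"):  # MLflow MlflowClient
        experiment.log_figure(logger.run_id, fig, f"{tag}_{step}.png")
    elif hasattr(experiment, "log"):         # wandb Run
        experiment.log({tag: fig}, step=step)
```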
Alternatives
None
Additional information
No response