Wandb seems set up nicely for logging media, but it doesn't have a straightforward way to log plain text. You could do something janky with tables, but it looks like we'd have to upload a whole new table every time we want to log.
It may be a good idea to create a standard Python logger and pass it into the Callbacks object. Then we can also log things like accuracy and print a message when we reach different stages of training.
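A minimal sketch of that idea: build a standard-library logger once and hand it to the callbacks object so any hook can emit text. The `MetricsCallbacks` class and its method names here are hypothetical placeholders for whatever Callbacks class we end up writing, not Lightning API.

```python
import logging

def make_logger(name: str = "train") -> logging.Logger:
    """Create (or fetch) a plain Python logger that prints to stderr."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    if not logger.handlers:  # avoid stacking duplicate handlers on re-runs
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s")
        )
        logger.addHandler(handler)
    return logger

class MetricsCallbacks:
    """Hypothetical callbacks object that wraps a plain Python logger."""

    def __init__(self, logger: logging.Logger):
        self.logger = logger

    def on_validation_end(self, accuracy: float) -> None:
        # Free-form text logging that wandb makes awkward is trivial here.
        self.logger.info("validation accuracy: %.4f", accuracy)

cb = MetricsCallbacks(make_logger())
cb.on_validation_end(0.923)
```

Because it's just `logging`, we could later route the same messages to a file handler without touching the callbacks.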
For getting the current epoch and so on, Lightning has a few handy things: the batch_end callbacks all take in the pl_module, which exposes pl_module.current_epoch (a property, not a method). It still may be beneficial to keep a running count like self.train_epoch_idx as fields of the Callbacks class we create, so we can interact with that state in our callbacks. This makes our Callbacks class bigger and a bit grosser, but it should still be better than keeping track of everything in the train/val loops like in vanilla PyTorch.
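A sketch of what that stateful callback could look like. To stay self-contained it doesn't import Lightning; in real use the class would subclass `pytorch_lightning.Callback`, and the hook signatures below are assumed to match recent Lightning versions (older releases pass extra arguments like `dataloader_idx`).

```python
class EpochTrackingCallback:
    """Sketch of a Lightning-style callback that keeps its own running
    counters as fields instead of re-deriving everything from pl_module.
    In real use this would subclass pytorch_lightning.Callback."""

    def __init__(self):
        self.train_epoch_idx = 0  # our own running epoch counter
        self.train_batch_idx = 0  # last batch index seen this epoch

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        self.train_batch_idx = batch_idx
        # When needed, pl_module.current_epoch (a property) is also
        # available here as a cross-check against our own counter.

    def on_train_epoch_end(self, trainer, pl_module):
        self.train_epoch_idx += 1
        self.train_batch_idx = 0
```

The trade-off is exactly the one noted above: the class carries more state, but the train/val loops stay clean.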