Closed: ant0nsc closed this 3 years ago
@mmuckley, agree that it adds complexity to the code. On the other hand, as it stands now, I can't re-use `MriModule` at all, because it effectively hardcodes a single logger that must be TensorBoard (or any single logger that happens to have an `add_image` method). I'd like to log to both AzureML and TensorBoard, so at present my code will fail if I just take `MriModule` as is.
The best solution would be to tackle this at the PyTorch Lightning level, by adding `log_image` methods that simply do nothing if the logger does not support images. However, that would clearly be a much longer effort.
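To make the idea concrete, here is a minimal sketch of what such a framework-level escape hatch could look like. The helper name and the duck-typed check are illustrative only; this is not an actual PyTorch Lightning API:

```python
def log_image_if_supported(logger, tag, image):
    """Hypothetical helper: forward an image to a logger, silently
    doing nothing when the logger has no image support."""
    # Lightning loggers typically expose their backend via ``experiment``;
    # fall back to the logger object itself for anything else.
    experiment = getattr(logger, "experiment", logger)
    add_image = getattr(experiment, "add_image", None)
    if add_image is not None:
        # matches TensorBoard's SummaryWriter.add_image(tag, img_tensor)
        add_image(tag, image)
```

With a helper like this, calling code would not need to know which loggers are attached: loggers without `add_image` are skipped without raising.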
Do you have any suggestions for how we could relax the "single Tensorboard logger" constraint?
@ant0nsc, I think you mentioned the best solution in your comment. You could split the image logging off from `validation_step_end` into a separate function. Future users could then subclass `MriModule` and override that function if they use a new logger. That should facilitate re-use of `MriModule` without requiring people to dive deep into `fastmri`.
What do you think of updating this PR to do that? You could keep the list-of-loggers functionality. To be specific, I'm thinking of replacing these lines with a separate class method like:

```python
def log_images(self, image_list):
    # write images to loggers
```

where `image_list` would be a list of `(key, target, output, error)` tuples. Future users would subclass `MriModule` and override the `log_images` function.
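As an illustration of the proposed hook, here is one way the default implementation and a user override might fit together. The class structure, constructor, and `image_sink` backend below are hypothetical stand-ins, not the actual fastMRI or Lightning APIs:

```python
class MriModule:
    """Simplified stand-in for the real Lightning module."""

    def __init__(self, loggers):
        self.loggers = loggers  # list of attached logger objects

    def log_images(self, image_list):
        # Default hook: write each (key, target, output, error) tuple
        # to every logger whose backend exposes ``add_image``.
        for logger in self.loggers:
            experiment = getattr(logger, "experiment", logger)
            add_image = getattr(experiment, "add_image", None)
            if add_image is None:
                continue  # this logger cannot handle images; skip it
            for key, target, output, error in image_list:
                add_image(f"{key}/target", target)
                add_image(f"{key}/output", output)
                add_image(f"{key}/error", error)


class CustomLoggerModule(MriModule):
    """Example user subclass: also send outputs to a backend (e.g. an
    AzureML-style logger) that has no ``add_image`` method."""

    def __init__(self, loggers, image_sink):
        super().__init__(loggers)
        self.image_sink = image_sink  # hypothetical custom backend

    def log_images(self, image_list):
        super().log_images(image_list)  # keep the TensorBoard behaviour
        for key, target, output, error in image_list:
            self.image_sink.append((key, output))
```

The point of the design is that `validation_step_end` only ever calls `self.log_images(...)`, so supporting a new logger is a matter of overriding one method rather than editing the validation loop.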
As an aside, we now use `black` for formatting, as stated in our CONTRIBUTING.md. You can see the exact version of `black` tested by CircleCI in dev-requirements.txt.
Thanks @mmuckley, very good suggestion! Updated the PR accordingly.
Closes #133
The present code that logs images during training assumes that there is a single Lightning logger in place, and that it is a `TensorboardLogger`. This PR changes that to allow for multiple loggers, and logs images only to those backed by a TensorBoard `SummaryWriter`.