Closed JLrumberger closed 1 month ago
Sounds good to me. It's nice that it doesn't require any knowledge of what the batch is at inference time, just that the whole batch is processed. This is pretty easy, since it's how people will give us the data anyway.
What is the best way to test if this makes a difference? Just run the full dataset, then look at the per-marker, per-dataset F1 scores and check if there's a difference?
I think the F1 scores and also the qualitative appearance of predictions that contained inconsistencies will be enough to determine if it made a difference.
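A minimal sketch of such a per-marker, per-dataset F1 comparison. Everything here is hypothetical illustration: the flat record format with "dataset", "marker", "y_true" and "y_pred" keys is assumed, not the project's real prediction export.

```python
from collections import defaultdict

def f1_per_group(records):
    """Compute an F1 score per (dataset, marker) group.

    `records` is a hypothetical flat list of dicts with keys
    "dataset", "marker", "y_true" and "y_pred" (binary labels).
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for r in records:
        key = (r["dataset"], r["marker"])
        if r["y_pred"] == 1 and r["y_true"] == 1:
            counts[key]["tp"] += 1
        elif r["y_pred"] == 1 and r["y_true"] == 0:
            counts[key]["fp"] += 1
        elif r["y_pred"] == 0 and r["y_true"] == 1:
            counts[key]["fn"] += 1
    scores = {}
    for key, c in counts.items():
        denom = 2 * c["tp"] + c["fp"] + c["fn"]
        scores[key] = 2 * c["tp"] / denom if denom else 0.0
    return scores
```

Running this once on predictions from the current model and once after the BEN change, then diffing the two score tables, would make any per-marker or per-dataset shift explicit.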
Instructions
Implement the batch-effects normalization (BEN) training and inference scheme from Lin & Lu (2022). This involves three steps:
Relevant background
We sometimes observe inconsistencies in the predictions, especially for data that the model was not trained on (below is an example from Inna and Hadeesha's DCIS dataset). This might be due to the domain gap or to normalization issues, so I'd like to try batch-effect correction.
Design overview
1. Add a function make_pure_batches to the ModelBuilder class that makes sure that batches only contain samples from one FOV of one dataset and one marker.
2. Subclass tf.keras.layers.BatchNormalization to make the call method always behave as in train mode.
3. Add a function that copies all variables from an instance of tf.keras.layers.BatchNormalization, to make it really easy to replace the default BN layers with this new one.

Timeline
Give a rough estimate for how long you think the project will take. In general, it's better to be too conservative rather than too optimistic.
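The three steps above could be sketched roughly as follows. This is a framework-free illustration, not the actual implementation: make_pure_batches here operates on a hypothetical flat list of sample dicts rather than the real ModelBuilder's dataset objects, and TrainModeBatchNorm only mimics the math of a BN layer that always uses batch statistics. The real version would subclass tf.keras.layers.BatchNormalization and force training=True in call, and the copy helper would copy the Keras layer's weights instead.

```python
import numpy as np
from collections import defaultdict

def make_pure_batches(samples, batch_size):
    """Group samples so each batch contains only one (dataset, fov, marker).

    `samples` is a hypothetical flat list of dicts carrying
    "dataset", "fov" and "marker" keys.
    """
    groups = defaultdict(list)
    for s in samples:
        groups[(s["dataset"], s["fov"], s["marker"])].append(s)
    batches = []
    for group in groups.values():
        for i in range(0, len(group), batch_size):
            batches.append(group[i:i + batch_size])
    return batches

class TrainModeBatchNorm:
    """Numpy stand-in for a BN layer that always behaves as in train mode."""

    def __init__(self, num_features, eps=1e-3):
        self.gamma = np.ones(num_features)   # learned scale
        self.beta = np.zeros(num_features)   # learned offset
        self.eps = eps

    def __call__(self, x):
        # Always normalize with the current batch's statistics, even at
        # inference time -- the key behavioral change BEN relies on.
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        return self.gamma * (x - mean) / np.sqrt(var + self.eps) + self.beta

def copy_bn_variables(src, dst):
    """Copy learned scale/offset from a default BN layer into the new one."""
    dst.gamma = src.gamma.copy()
    dst.beta = src.beta.copy()
    return dst
```

Because every batch is pure, the batch statistics seen by TrainModeBatchNorm come from a single FOV/dataset/marker, which is what lets the normalization absorb batch effects.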
Estimated date when a fully implemented version will be ready for review:
Estimated date when the finalized project will be merged in: