Closed DaIGaN2019 closed 3 months ago
Hi, thanks for your interest and questions!
Looking at HistoGene’s code, I found that it feeds all the spots of one image into the model as a single batch for training, while BLEEP samples a subset of the entire training set (one batch_size at a time) as a batch. So I would like to ask: in your comparative experiments, did you move HistoGene to BLEEP’s settings, or keep HistoGene’s original experimental setup?
Our datasets have many more spots per slice (~2,000) than those used for HistoGene (9,612 total spots across 32 slices). To keep the evaluation fair and computationally manageable, we sampled ~128 neighboring spots per batch when training HistoGene.
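For reference, one way such neighborhood sampling could look. This is an illustrative sketch, not code from the BLEEP repository: `sample_neighboring_spots` is a hypothetical helper, and `coords` is assumed to hold per-spot slide coordinates.

```python
import numpy as np

def sample_neighboring_spots(coords, batch_size=128, rng=None):
    """Sample a spatially contiguous mini-batch of spots.

    coords: (n_spots, 2) array of spot positions on the slide.
    Returns indices of the `batch_size` spots nearest to a randomly
    chosen anchor spot (the anchor itself is included, at distance 0).
    """
    rng = rng or np.random.default_rng()
    anchor = rng.integers(len(coords))
    # Euclidean distance from the anchor to every spot on the slide.
    dists = np.linalg.norm(coords - coords[anchor], axis=1)
    # The batch_size closest spots form one training batch.
    return np.argsort(dists)[:batch_size]

# Toy slide with ~2000 spots, matching the scale mentioned above.
coords = np.random.default_rng(0).uniform(size=(2000, 2))
batch = sample_neighboring_spots(coords, batch_size=128,
                                 rng=np.random.default_rng(1))
print(len(batch))  # 128
```

Sampling spatially adjacent spots (rather than uniformly at random) keeps each HistoGene batch resembling the contiguous whole-slide input the model was designed for.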
HistoGene does not appear to hold out a validation set for selecting which model checkpoint to save, whereas BLEEP holds out 20% of the training set as a test set for model selection. Did you unify these settings in the comparative experiments?
We followed the same training setup as in HistoGene’s tutorial notebook (https://github.com/maxpmx/HisToGene/blob/main/tutorial.ipynb) when computing the HistoGene baseline. There are indeed some differences: for example, HistoGene was trained to its maximum epoch limit and the final weights were used for comparison, as suggested by its authors, whereas BLEEP’s checkpoint is selected by best validation loss instead.
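A minimal, framework-agnostic sketch of the two selection rules described above. The helper name, the toy loss curve, and the string placeholders for weight snapshots are all hypothetical, used only to contrast final-epoch selection (HistoGene-style) with best-validation-loss selection (BLEEP-style):

```python
def select_checkpoint(val_losses, states, select_best=True):
    """Given per-epoch validation losses and saved weight snapshots,
    return the checkpoint used for evaluation.

    select_best=False mimics HistoGene's setup (final-epoch weights);
    select_best=True mimics BLEEP's (lowest validation loss).
    """
    if not select_best:
        return states[-1]  # last epoch, regardless of validation loss
    best = min(range(len(val_losses)), key=val_losses.__getitem__)
    return states[best]    # epoch with the lowest validation loss

losses = [0.9, 0.5, 0.6, 0.7]                    # toy validation curve
states = ["epoch0", "epoch1", "epoch2", "epoch3"]  # stand-ins for weights
print(select_checkpoint(losses, states, select_best=True))   # epoch1
print(select_checkpoint(losses, states, select_best=False))  # epoch3
```

With a validation curve that overfits after epoch 1, the two rules pick different checkpoints, which is exactly the discrepancy between the two training setups noted here.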
Hello author, thank you for sharing your code. I am very interested in your work and have reproduced it on my own dataset! In your paper, BLEEP is compared with ST-Net and HistoGene and achieves excellent results. However, some experimental details are not explained in the paper, which confuses me, as I would also like to run a comparison.