Closed caprioGirl closed 3 years ago
You can check the PanNuke paper here, along with the associated link to download the data; it also contains HoVer-Net results: https://arxiv.org/pdf/2003.10778.pdf. The dataset has already been organised into 3 splits. You can combine two of them for training and use the remaining one for validation. However, you may need to convert the patches into the format this repo expects for training; we don't provide that script at the moment.
Refer to #123 for more information on the data format you will need to convert the PanNuke data to.
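As a starting point, a conversion along these lines may work. This is a hedged sketch, not the authors' script: it assumes PanNuke's masks are `(N, H, W, 6)` arrays with channels 0-4 holding per-class instance IDs and channel 5 holding background, and that the repo's loader wants each patch as a single array of RGB + instance map + type map (check #123 for the exact layout before relying on it).

```python
import numpy as np

def pannuke_to_hovernet_patches(images, masks):
    """Convert PanNuke arrays into RGB + instance-map + type-map patches.

    images: (N, H, W, 3) uint8 RGB patches.
    masks:  (N, H, W, 6) per-class instance maps; the last channel
            is background in PanNuke (an assumption to verify).
    Returns a list of (H, W, 5) arrays. The channel order
    (RGB, instance, type) is an assumption -- confirm against #123.
    """
    patches = []
    for img, mask in zip(images, masks):
        inst_map = np.zeros(mask.shape[:2], dtype=np.int32)
        type_map = np.zeros(mask.shape[:2], dtype=np.int32)
        inst_id = 0
        # channels 0..4 are nucleus classes; channel 5 (background) is skipped
        for ch in range(mask.shape[-1] - 1):
            for obj_id in np.unique(mask[..., ch]):
                if obj_id == 0:
                    continue  # 0 means "no nucleus" in this channel
                inst_id += 1  # relabel instances 1..K across all classes
                sel = mask[..., ch] == obj_id
                inst_map[sel] = inst_id
                type_map[sel] = ch + 1  # type 0 is reserved for background
        patches.append(np.dstack([img.astype(np.int32), inst_map, type_map]))
    return patches
```

Each output patch can then be saved with `np.save` into the directory structure the dataloader reads.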
Also, if you want to replicate how we trained HoVer-Net on PanNuke, make sure you change the model mode to fast and modify the input and output patch shapes in config.py accordingly. In addition, because the network input is the same size as the patch in this scenario, you may want to use reflective padding in iaa.Affine here.
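The changes above might look roughly like the fragment below. This is a sketch, not the repo's verbatim config: the variable names and the 164x164 fast-mode output shape are assumptions to check against your copy of config.py, and the augmentation parameters are placeholders.

```python
import imgaug.augmenters as iaa

# config.py adjustments for PanNuke (names and values are assumptions):
model_mode = "fast"      # 'fast' mode instead of 'original'
act_shape = [256, 256]   # input patch shape: PanNuke patches are 256x256
out_shape = [164, 164]   # fast-mode output shape (centre crop of the input)

# Since the input spans the whole patch, reflect at the borders during
# affine augmentation instead of filling with a constant value:
affine = iaa.Affine(
    rotate=(-179, 179),
    translate_percent={"x": (-0.01, 0.01), "y": (-0.01, 0.01)},
    shear=(-5, 5),
    mode="reflect",      # reflective padding in iaa.Affine
)
```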
Closing this issue now - please reopen if there are further questions.
I can't seem to reproduce the results on any dataset, and I can't find what's wrong. Take the Kumar dataset, for instance: I am getting the following results on it, whereas your paper states the following results:
I am using the exact same code; maybe the validation set is the problem? Or do I need to fine-tune the hyperparameters myself to get results as close as possible to yours?
By the way, another separate question I wanted to ask: the updated opt file currently has batch size 16 for the first 50 decoder-only epochs, whereas 8 was mentioned in the paper. Was a batch size of 8 used for all the datasets, or were there any exceptions?
Hi there! First of all, you guys have done some amazing work; thank you for such a great network 😊! Now, moving on to the issue: I wanted to know how I should train the network on the PanNuke dataset, and what your resulting PQ on PanNuke was. Can you please guide me a bit? While reading some of the previous issues, I saw someone mention that there is no need to run extract_patches.py on PanNuke, as the patches are already extracted, so we just need to make sure the dataset patches are in the same format as required by the network. How did you train the network on the PanNuke dataset, and how did you split the data? It would be a great help if you could enlighten me a bit. Thank you!