Hello @zaid478, Sorry for the delay!
You may find the training code here: https://github.com/paaatcha/my-thesis/blob/master/benchmarks/isic/isic.py But it depends on the pipeline in this repository: https://github.com/paaatcha/raug
In addition, the notebooks for Gram-OOD compute the predictions for the 8 classes in this function:
# assumes: import torch; import torch.nn.functional as F; and a global feat_maps list filled by forward hooks
def get_feat_maps(model, batch_img):
    batch_img = batch_img.cuda()
    with torch.no_grad():
        preds = model(batch_img)
        preds = F.softmax(preds, dim=1)
    maps = feat_maps.copy()      # copy the feature maps collected by the hooks during the forward pass
    feat_maps.clear()            # reset the global list for the next batch
    return preds, maps
using this line: preds = model(batch_img)
This function is called inside the _get_sim_per_labels(data_loader, power, use_preds=True)
function. So, if you only want to perform inference, you can just use it.
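For example, if all you need is the classification output, an inference-only sketch would look something like the following (assuming the checkpoint is already loaded into model and batch_img is a batch of preprocessed images; the names here are just for illustration):

import torch
import torch.nn.functional as F

model.eval()
batch_img = batch_img.cuda()                      # same device handling as get_feat_maps
with torch.no_grad():
    preds = F.softmax(model(batch_img), dim=1)    # probabilities over the 8 ISIC'19 classes
pred_classes = preds.argmax(dim=1)                # predicted class index for each image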
I hope it can help you. Let me know if you need anything else. See you!
Hello, thank you for your work! Is there any preprocessing to be applied to the images before prediction? It is always outputting 5. In addition, which label corresponds to which class? Thank you in advance, Lucía
Hello @luantunez,
As we describe in the paper, we preprocess the images using Shades of Gray as a color constancy algorithm. If you download the data from the link we made available, the images have already been preprocessed. If you want to preprocess a new batch of images, I think this Kaggle kernel (that I wrote a few months ago) might be helpful: https://www.kaggle.com/apacheco/shades-of-gray-color-constancy. It's also implemented here: https://github.com/paaatcha/raug/blob/193a7eda79b6ff416a2bdf8c56fd1c4ae6fa5e9f/raug/utils/common.py#L192.
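For reference, here is a minimal sketch of the Shades of Gray step (Minkowski norm with power 6, roughly what the Kaggle kernel and the raug implementation do; treat it as an illustration rather than the exact code):

import numpy as np

def shades_of_gray(img, power=6):
    # img: RGB image as a (H, W, 3) uint8 array
    img = img.astype('float32')
    # estimate the illuminant per channel with a Minkowski p-norm
    illum = np.power(np.mean(np.power(img, power), axis=(0, 1)), 1.0 / power)
    illum = illum / np.sqrt(np.sum(illum ** 2))    # normalize the illuminant to unit length
    img = img / (illum * np.sqrt(3.0))             # correct each channel
    return np.clip(img, 0, 255).astype('uint8')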
Regarding the labels, they are in alphabetical order ('AK', 'BCC', 'BKL', 'DF', 'MEL', 'NV', 'SCC', 'VASC'). So, AK is 0, BCC is 1, and so on.
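In other words, the index-to-label mapping is simply (illustrative snippet):

LABELS = ('AK', 'BCC', 'BKL', 'DF', 'MEL', 'NV', 'SCC', 'VASC')
idx_to_label = dict(enumerate(LABELS))                         # e.g., 0 -> 'AK', 5 -> 'NV'
label_to_idx = {name: idx for idx, name in enumerate(LABELS)}  # e.g., 'MEL' -> 4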
Cheers
Thank you for your quick response! Sorry, I am still not getting accurate predictions with external images. Do you think the model would predict on any preprocessed image, or do you have specifications? In addition, the result I get depends on which model checkpoint I use. Which model is more accurate? Do you combine them? Thank you, Lucia
So, I don't know exactly what you're doing. Are you using the method to perform classification or to identify OOD samples? The models should provide proper predictions for dermoscopy lesions, even though it's well known that they might struggle to generalize to a non-ISIC dataset. The performance of each model is also disclosed in the paper. I used them in the ISIC 2019 challenge, in which I got 3rd place. I describe how I trained them here: https://arxiv.org/pdf/1909.04525.pdf
Are the code, checkpoints, and preprocessing steps you provide for the Gram-OOD or the Gram-OOD* approach? What exactly is the difference?
The checkpoints are just for the CNN models. Gram-OOD and Gram-OOD* use these models to detect OOD. So, they're just skin lesion classifiers trained on ISIC'19.
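In case it helps, loading one of the checkpoints follows the usual PyTorch pattern. The sketch below is generic: the architecture constructor, file name, and checkpoint keys are placeholders, so please check the notebooks for the exact loading code:

import torch
import torchvision.models as models

model = models.mobilenet_v2(num_classes=8)          # placeholder: build the same architecture used in the notebook
ckpt = torch.load("checkpoint.pth", map_location="cpu")
state_dict = ckpt.get("model_state_dict", ckpt)     # some checkpoints wrap the weights in a dict
model.load_state_dict(state_dict)
model = model.cuda().eval()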
And are you using Gram-OOD or Gram-OOD* in this repo? I would like to access Gram-OOD since it reports the best results, please.
Both of them are implemented. The standard is Gram-OOD*; however, if you want to perform Gram-OOD, you just need to hook all layers, remove the normalization step, and use powers from 1 to 10 (see the sketch at the end of this comment). Using the MobileNet as an example (https://github.com/paaatcha/gram-ood/blob/master/sk_mobilenet.ipynb):
To hook all layers, uncomment this line:
# if isinstance(layer, models.mobilenet.ConvBNReLU) or isinstance(layer, nn.Conv2d):
To remove the normalization step, comment this line:
gram = norm_min_max(gram)
To use all powers:
power = (1,10)
(this is not a classification method, make sure you're aware of it!)
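Putting those three changes together, a rough sketch of the per-layer Gram feature computation (illustrative names like gram_power and hook, not the exact notebook code; it assumes non-negative activations such as the post-ReLU maps from the hooked layers):

import torch

feat_maps = []                                  # filled by the forward hooks registered on the hooked layers

def hook(module, inp, out):
    feat_maps.append(out.detach())

def gram_power(fmap, p):
    # p-th order Gram matrix per image: G_p = (F^p (F^p)^T)^(1/p)
    f = fmap.flatten(2) ** p                    # flatten the spatial dimensions: (B, C, H*W)
    return torch.bmm(f, f.transpose(1, 2)) ** (1.0 / p)

# Gram-OOD: use powers 1..10 and skip the min-max normalization of the Gram matrices
powers = range(1, 11)
grams = [gram_power(fm, p) for fm in feat_maps for p in powers]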
Hi,
Thank you for your work and for sharing the trained model on the ISIC dataset. I was wondering if you could share the training code or tell me where the predictions for the 8 classes are computed in the repository.
Thank you again for your work,