YaoyaoZhu19 / BRSDA


How is the feature extractor defined? #1

Open Hogandr opened 2 weeks ago

Hogandr commented 2 weeks ago

Hey @YaoyaoZhu19 , I find your paper very interesting! However, I have a question about how you built the feature extractor and what exactly its input is supposed to be. Specifically, what is the input for your data augmentation, particularly in the case of a 3D image? Is it the entire image? Or is it a vector of transfer-learned latent variables?

I would greatly appreciate your feedback, as I am currently working on a project involving tumor classification based on PET scans.

Regards, Hagen

YaoyaoZhu19 commented 1 week ago

Hey Hagen,

Thank you for your interest in BSDA.

Regarding your question about the input and feature extractor for BSDA:
BSDA receives deep features from the network. Taking ResNet as an example, these are the features just before the network's classifier: a one-dimensional vector per sample (even for 3D networks, since the feature map has already been pooled and flattened).
From another perspective, the input to BSDA comprises latent variables.
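To make the "pooled and flattened" part concrete, here is a minimal numpy sketch (names and shapes are illustrative, not taken from the BSDA code): a 3D backbone's last block outputs a 5-D feature map, and global average pooling over the spatial axes yields the 1-D per-sample vector that a feature-space augmentation like BSDA would receive.

```python
import numpy as np

# Sketch of what "pooled and flattened" means for a 3D network.
# A 3D backbone's last conv block outputs (batch, channels, D, H, W);
# global average pooling over the spatial axes collapses this to a
# (batch, channels) matrix, i.e. one 1-D feature vector per sample.
def global_avg_pool_flatten(feature_map):
    # feature_map: (batch, channels, D, H, W)
    return feature_map.mean(axis=(2, 3, 4))  # -> (batch, channels)

fmap = np.random.rand(4, 512, 2, 2, 2)   # e.g. a ResNet-style 3D feature map
feats = global_avg_pool_flatten(fmap)
print(feats.shape)  # (4, 512)
```

The feature-space augmentation then operates on `feats`, not on the raw 3D volume.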
The code has been uploaded to https://github.com/YaoyaoZhu19/BSDA.

Best regards,
Yaoyao

Hogandr commented 1 week ago

Hey Yaoyao,

Thanks for the feedback! It helps me a lot. :)

Alternatively, do you have any experience with other methods that might be easier to implement and that work well with really small datasets? For example, the cutout method seems to perform best on the Breast dataset, which has only 780 samples.

In my case, I have 150 PET images, and the classification is binary between two glioma types. So, my number of images is significantly lower.

Best regards, Hagen

YaoyaoZhu19 commented 1 week ago

@Hogandr Oh, I see what you need; for a dataset that small, I think these might help: https://docs.monai.io/en/stable/transforms.html#regularization
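For reference, mixup is one of the regularization transforms covered on that MONAI page; the idea, sketched below in plain numpy (this is an illustrative implementation, not MONAI's API), is to train on convex combinations of image/label pairs, which is cheap to implement and tends to help when samples are scarce.

```python
import numpy as np

# Mixup sketch: blend two training samples and their (one-hot) labels
# with a Beta-distributed coefficient. Function name and signature are
# illustrative only.
def mixup(x1, y1, x2, y2, alpha=0.4, rng=np.random.default_rng(0)):
    lam = rng.beta(alpha, alpha)      # mixing coefficient in (0, 1)
    x = lam * x1 + (1 - lam) * x2     # blended image (works for 3D volumes too)
    y = lam * y1 + (1 - lam) * y2     # blended soft label
    return x, y

img_a, img_b = np.zeros((16, 16, 16)), np.ones((16, 16, 16))
lab_a, lab_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y = mixup(img_a, lab_a, img_b, lab_b)
print(x.shape, float(y.sum()))  # (16, 16, 16) 1.0
```

In practice you would apply this per batch inside the training loop, alongside (not instead of) the geometric augmentations you already use.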