facebookresearch / barlowtwins

PyTorch implementation of Barlow Twins.
MIT License

Does this technique work on medical images? #12

Closed. ajoseph12 closed this issue 3 years ago.

ajoseph12 commented 3 years ago

Just wanted to know if anyone was able to test this technique on medical images and arrive at promising results.

Mushtaqml commented 3 years ago

I am trying it with medical images but am still figuring out the correct/best dimensions for the projection network. The loss does not converge if I use the default design and hyperparameters.

Did you try it with medical images?
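In case it helps others tuning this, here is a minimal sketch of building the projection head from a dims string (the repo configures it with a `--projector` argument such as `8192-8192-8192`); the smaller widths below are only an illustration of something one might try on a small medical dataset, not a recommended setting:

```python
import torch.nn as nn

# Build a Barlow Twins-style projector (Linear + BN + ReLU blocks, final Linear
# without BN/ReLU) from a dims string. Widths here are illustrative only;
# embedding_dim=2048 matches a ResNet-50 backbone.
def make_projector(embedding_dim=2048, projector="2048-2048-2048"):
    sizes = [embedding_dim] + [int(s) for s in projector.split("-")]
    layers = []
    for i in range(len(sizes) - 2):
        layers += [
            nn.Linear(sizes[i], sizes[i + 1], bias=False),
            nn.BatchNorm1d(sizes[i + 1]),
            nn.ReLU(inplace=True),
        ]
    layers.append(nn.Linear(sizes[-2], sizes[-1], bias=False))
    return nn.Sequential(*layers)
```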

ajoseph12 commented 3 years ago

Yeah, I have the same issue here: no loss convergence. I'm also going to try playing around with the transformations; it feels like the default ones are too strong to permit any learning. Would you like to take this conversation off issues to discuss in a little more detail? (email: allwyn_don32@ayahoo.com)
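For reference, a rough sketch of what a softer two-view transform for single-channel medical images could look like; the crop scale, rotation range, blur strength, and normalization stats here are illustrative guesses, not validated settings:

```python
import torchvision.transforms as T

# Milder two-view augmentation for grayscale medical images (illustrative values only;
# the right strengths depend on the modality).
class SoftTwoViewTransform:
    def __init__(self, size=224):
        self.t = T.Compose([
            T.RandomResizedCrop(size, scale=(0.6, 1.0),  # keep most of the anatomy
                                interpolation=T.InterpolationMode.BICUBIC),
            T.RandomHorizontalFlip(p=0.5),
            T.RandomApply([T.RandomRotation(degrees=10)], p=0.5),
            T.RandomApply([T.GaussianBlur(kernel_size=23, sigma=(0.1, 1.0))], p=0.3),
            T.ToTensor(),
            T.Normalize(mean=[0.5], std=[0.25]),         # single-channel stats
        ])

    def __call__(self, img):
        # Barlow Twins expects two independently augmented views of the same image.
        return self.t(img), self.t(img)
```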

EBGU commented 3 years ago

Dear Joseph, I tried to send you an email, but your address didn't work. My address: jyj20@mails.tsinghua.edu.cn

Currently, I am trying to use a ResNet-50 equipped with e2cnn (https://github.com/QUVA-Lab/e2cnn) in the Barlow Twins model, with an additional CLD loss (https://github.com/frank-xwang/CLD-UnsupervisedLearning). According to the papers, e2cnn should help with medical images that have rotational symmetry (e.g. histological images), and CLD can accelerate convergence. I would like to use this model to cluster some grayscale images. My images are not medical images, but they are also grayscale, rotationally symmetric, and monotonous, so I think my data may share some similarities with yours. I started training my network today; so far my loss is dropping, and I will let you know if this design works out.
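For what it's worth, here is a rough sketch of how a swappable encoder (e.g. a rotation-equivariant one) and an extra loss term could be combined with the Barlow Twins objective; `encoder`, `projector`, and `aux_loss_fn` are placeholders for the idea described above, not the actual e2cnn backbone or CLD implementation:

```python
import torch.nn as nn

# Rough sketch only: a Barlow-Twins-style wrapper with a pluggable encoder and an
# optional auxiliary loss term added to the main objective.
class TwinsWithAuxLoss(nn.Module):
    def __init__(self, encoder, projector, main_loss_fn, aux_loss_fn=None, aux_weight=0.1):
        super().__init__()
        self.encoder = encoder            # e.g. a rotation-equivariant CNN
        self.projector = projector
        self.main_loss_fn = main_loss_fn  # the Barlow Twins redundancy-reduction loss
        self.aux_loss_fn = aux_loss_fn    # e.g. a CLD-style clustering loss (placeholder)
        self.aux_weight = aux_weight

    def forward(self, view1, view2):
        f1, f2 = self.encoder(view1), self.encoder(view2)
        z1, z2 = self.projector(f1), self.projector(f2)
        loss = self.main_loss_fn(z1, z2)
        if self.aux_loss_fn is not None:
            loss = loss + self.aux_weight * self.aux_loss_fn(f1, f2)
        return loss
```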

Have fun with your experiment!

StphTphsn commented 3 years ago

Promising application! Please keep us updated with your progress on this thread so everyone can benefit, and don't hesitate to ask if you have any questions for us. Thanks!

ajoseph12 commented 3 years ago

Hey Harold, Sounds interesting; can't wait to hear back from you concerning your experiments. For the moment I have tried soft augmentations, using outputs from different convolution layers, playing around with the learning rate, and using pyramidal avg-pooling instead of global average pooling at the final layer. Most of these experiments haven't led to significant loss convergence, except maybe pyramidal avg-pooling, where the loss was erratic.
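In case anyone wants to reproduce the pyramidal pooling experiment, here is a rough sketch of what I mean; the pyramid levels are illustrative, not a tested configuration, and with a ResNet-50 feature map of 2048 channels and levels (1, 2, 4) the output is 2048 * 21 = 43008-dimensional, so the projector input size has to be changed accordingly:

```python
import torch
import torch.nn as nn

# Sketch of a pyramidal average-pooling head, a possible replacement for the final
# global average pool before the projector.
class PyramidAvgPool(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveAvgPool2d(l) for l in levels)

    def forward(self, x):                       # x: (B, C, H, W) feature map
        feats = [p(x).flatten(start_dim=1) for p in self.pools]
        return torch.cat(feats, dim=1)          # (B, C * sum(l*l for l in levels))
```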

Hope this helps :)

EBGU commented 3 years ago

I have some bad news. After some experiments, I found that the loss drops significantly for a few thousand steps, but then grows again. I am still playing around with the learning rate, but it can be tricky. I wonder if you have tried SimSiam?
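If it is useful, here is a minimal sketch of a linear-warmup + cosine learning-rate schedule, one common way to tame a loss that first drops and then climbs back up; the base LR, warmup length, and final LR below are illustrative, not the repo defaults:

```python
import math

# Linear warmup followed by cosine decay. Values are illustrative only.
def lr_at_step(step, total_steps, base_lr=0.2, warmup_steps=1000, final_lr=0.002):
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return final_lr + 0.5 * (base_lr - final_lr) * (1 + math.cos(math.pi * progress))

# usage inside the training loop:
# for group in optimizer.param_groups:
#     group["lr"] = lr_at_step(step, total_steps)
```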

aydindemircioglu commented 2 years ago

Is there anyone with experience on this topic? Currently the loss does not go down properly; it seems stuck at around 4000. I tried changing the learning rate, but it doesn't seem to help in any way.
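One thing to keep in mind is that the absolute loss value scales with the projector width: with an 8192-dimensional projector, the on-diagonal term alone can be in the thousands early in training, so a "stuck" total is easier to interpret by logging the two terms separately. Here is a minimal sketch of the Barlow Twins loss split into its on-diagonal and off-diagonal parts (with λ ≈ 0.005 as in the paper):

```python
import torch

# Split the Barlow Twins loss into its two terms for monitoring.
def barlow_loss_terms(z1, z2, lambd=0.005, eps=1e-5):
    n, d = z1.shape
    # standardize each embedding dimension across the batch
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + eps)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + eps)
    c = (z1.T @ z2) / n                                          # d x d cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy term
    return on_diag, lambd * off_diag                             # total loss = sum of both
```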

chokevin8 commented 1 year ago

Anyone have success?