thatvinhton / G-U-Net


Where is the code for the data augmentation mentioned in your paper? #1

Open chestnut1996 opened 4 years ago

chestnut1996 commented 4 years ago

Hi! This is good work, but I can't find the code for generating the synthetic images. Could you tell me where it is? If it isn't on GitHub, could you send it to my email, 201821562002@smail.xtu.edu.cn? Thanks.

thatvinhton commented 4 years ago

Hi! The code for generating these images is quite messy, and we are currently occupied with other work, so we don't plan to refactor and release it soon. But I can give you some hints.

As mentioned in the paper, we were inspired by recent work on video object segmentation. The authors released their code for generating synthetic images in that domain, and we partially reused it. Here is the link to the project: LucidDataDreaming.

For the inpainting step, we used this project to reconstruct the background before applying transformations to the nuclei.
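To make the overall idea concrete, here is a rough sketch (not our actual pipeline; function names, transform choices, and parameters are illustrative) of compositing randomly transformed nuclei onto an already-inpainted background:

```python
# Rough sketch of the augmentation idea above, NOT the authors' pipeline.
# `inpainted_bg` is assumed to come from the inpainting model discussed below.
import cv2
import numpy as np

def synthesize(image, nucleus_mask, inpainted_bg, max_shift=20, max_angle=30):
    """image, inpainted_bg: HxWx3 uint8; nucleus_mask: HxW {0,1} uint8."""
    h, w = nucleus_mask.shape

    # Random affine transform (rotation + translation) applied to the nuclei.
    angle = np.random.uniform(-max_angle, max_angle)
    tx, ty = np.random.randint(-max_shift, max_shift + 1, size=2)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    M[:, 2] += (tx, ty)

    warped_img = cv2.warpAffine(image, M, (w, h))
    warped_mask = cv2.warpAffine(nucleus_mask, M, (w, h),
                                 flags=cv2.INTER_NEAREST)

    # Composite: transformed nuclei pasted over the inpainted background.
    mask3 = np.repeat(warped_mask[:, :, None], 3, axis=2).astype(bool)
    synthetic = np.where(mask3, warped_img, inpainted_bg)
    return synthetic, warped_mask
```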

More details can be found in our paper.

Hope this helps your work!

chestnut1996 commented 4 years ago

Hi, I'm sorry to bother you again.

I used the inpainting method you mentioned, but the result is bad.

I've trained the model a few times without a pretrained model, and I think it has converged.

The loss curve is: loss

Here are my test images: 1 2 3 4

It looks as if only the inner details of the nuclei are being removed, not the whole nuclei.

Could you give me some advice? Or share the model weights?

thatvinhton commented 4 years ago

Hi!

We have a link to our weights in the README (the Google Drive link); you can download them from there.

Just a few thoughts based on our experiments with this data augmentation approach, which you may find relevant to your project:

Hope this helps your project!

chestnut1996 commented 4 years ago

Maybe I'm not making myself clear.

The image I provided is a background image generated by the inpainting method, which is quite different from the schematic diagram provided in your paper.

Following your paper, the image I used as input is: TCGA-18-5592-01Z-00-DX1

The inpainted background generated by the inpainting model I trained is: 4

Compared to Figure 4(a) in your paper, the nuclei are not completely removed from the background.

Is that Ok?

thatvinhton commented 4 years ago

Hi!

From my point of view, it is not OK if the nuclei can't be completely removed from the images, especially for the inpainted backgrounds, because we have no labels for those inpainted ones.

Is the inpainting model the same one we used in our method? If so, you could try applying a morphological dilation to the labels before creating the input for the inpainting model (the image with white regions in your comment above). As I remember, this inpainting model exploits local patterns, so it can reconstruct the nuclei from leftover nucleus fragments around the white regions when the labels are slightly off.
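A minimal sketch of that dilation step, assuming a binary nucleus mask and OpenCV (the kernel size and the white fill value are illustrative assumptions, not settings from the paper):

```python
# Sketch only: dilate the nucleus labels, then white out the dilated regions
# to build the inpainting input, so leftover nucleus borders are also covered.
import cv2
import numpy as np

def make_inpainting_input(image, nucleus_mask, dilate_px=5):
    """image: HxWx3 uint8; nucleus_mask: HxW {0,1} uint8."""
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (2 * dilate_px + 1, 2 * dilate_px + 1))
    dilated = cv2.dilate(nucleus_mask, kernel)

    masked = image.copy()
    masked[dilated > 0] = 255  # white out nuclei plus a safety margin
    return masked, dilated
```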

Apart from the guess above, I can't see why it reconstructs the nuclei so vividly; I can't distinguish them from the originals at a glance. You could also try other inpainting methods to see whether they produce different results.
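As a quick sanity check with a different method (just an illustration, not the approach from the paper), OpenCV's classical inpainting can be run on the same masked input:

```python
# Classical (non-learned) inpainting as a baseline comparison.
import cv2

def classical_inpaint(image, fill_mask, radius=5):
    """image: HxWx3 uint8; fill_mask: HxW uint8, non-zero = region to fill."""
    return cv2.inpaint(image, fill_mask, radius, cv2.INPAINT_TELEA)
```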

chestnut1996 commented 4 years ago

Yes, I used the inpainting model that you mentioned.

The morphological dilation helps, but it cannot completely erase the nuclei in my experiments.

Do you have any weights for the inpainting model? Could you share them with me?

Also, I'll try other inpainting methods.

Thanks.

thatvinhton commented 4 years ago

Unfortunately, I can't access my previous workspace anymore because I have left the lab. You could look through the logs of the inpainting repo to see whether other versions are available there. As I remember, I used the version provided in their repo.

chestnut1996 commented 4 years ago

Thanks for your advice. I'll try.