chestnut1996 opened this issue 4 years ago
Hi! This is nice work, but I can't find the code to generate the synthetic images. Could you tell me where it is? If it is not on GitHub, could you send it to my email, 201821562002@smail.xtu.edu.cn? Thanks.
Hi! The code to generate these images is quite messy, and we are currently occupied with other work, so we have no plans to refactor and publish it soon. But I can give you some hints.
As mentioned in the paper, we were inspired by recent work on video object segmentation. They have released the code they use to generate synthetic images in another domain, and we partially reused it. Here is the link to the project: LucidDataDreaming.
Regarding the inpainting method, we use this project to reconstruct the background before applying transformations to the nuclei.
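A minimal sketch of that masking step, which produces the input the inpainting model fills in. The file names, the 255-valued mask convention, and the use of OpenCV are illustrative assumptions, not necessarily the exact pipeline:

```python
import cv2

# Hypothetical inputs: an image patch and a binary nucleus mask
# (255 = nucleus, 0 = background); file names are placeholders.
image = cv2.imread("patch.png")
mask = cv2.imread("nuclei_mask.png", cv2.IMREAD_GRAYSCALE)

# Whiten the nucleus regions so the inpainting model only sees the
# surrounding background context when filling the holes.
inpaint_input = image.copy()
inpaint_input[mask > 0] = 255

cv2.imwrite("inpaint_input.png", inpaint_input)
```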
More details can be found in our paper.
Hope this helps your work!
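And a rough sketch of the subsequent paste-back step, in the spirit of Lucid Data Dreaming: transform the foreground nuclei and composite them onto the inpainted background to form a new synthetic training pair. The rotation used here and all file names are illustrative assumptions; the actual transformations in the paper may differ.

```python
import cv2
import numpy as np

# Hypothetical inputs: the inpainted background plus the original
# image and its nucleus mask (255 = nucleus).
background = cv2.imread("inpainted_background.png")
image = cv2.imread("patch.png")
mask = cv2.imread("nuclei_mask.png", cv2.IMREAD_GRAYSCALE)

# Apply a random rigid transformation to the nuclei and their mask;
# here, a simple rotation about the image center.
h, w = mask.shape
angle = np.random.uniform(-30, 30)
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
warped_img = cv2.warpAffine(image, M, (w, h))
warped_mask = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)

# Paste the transformed nuclei onto the clean background to form a
# new synthetic (image, label) pair.
synthetic = background.copy()
synthetic[warped_mask > 0] = warped_img[warped_mask > 0]
cv2.imwrite("synthetic.png", synthetic)
cv2.imwrite("synthetic_label.png", warped_mask)
```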
Hi, I'm sorry to bother you again.
I used the inpainting method you mentioned, but the results are bad.
I've trained the model a few times without a pretrained model, and I think it has converged.
The losses are: [loss curves]
Here are my test images: [test images]
It looks like the model only removes the internal details of the nuclei, not the whole nuclei.
Could you give me some advice? Or share the model weights?
Hi!
We have a link to our weights in the README (the Google Drive link); you can download them from there.
Just a few thoughts based on our experiments with this data augmentation approach, which you may find relevant to your project:
Hope this helps your project!
Maybe I'm not making myself clear.
The image I provided is a background image generated by the inpainting method, which is quite different from the schematic diagram provided in your paper.
Following your paper, my background input (the image with white regions) is: [image]
The inpainted background generated by the inpainting model I trained is: [image]
Compared to Figure 4 (a) in your paper, the nuclei cannot be completely removed from the background.
Is that OK?
Hi!
In my view, it is not OK if we can't completely remove the nuclei from the images, especially for the inpainted backgrounds, because we don't have labels for those inpainted ones.
Is the inpainting model the same as the one we used in our method? If so, you could try applying a morphological dilation to the labels before creating the input for the inpainting model (the image with white regions in your comment above); a sketch of this is below. As I remember, this inpainting model exploits local patterns, so it can reconstruct a nucleus from wrong labels (the leftover parts of the nucleus) around the white regions.
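A minimal sketch of that dilation step, assuming a binary nucleus mask; the kernel size and iteration count are guesses to tune, and the file names are placeholders:

```python
import cv2

# Hypothetical inputs: an image patch and its binary nucleus mask
# (255 = nucleus, 0 = background).
image = cv2.imread("patch.png")
mask = cv2.imread("nuclei_mask.png", cv2.IMREAD_GRAYSCALE)

# Grow each labeled nucleus by a few pixels so the white holes fully
# cover the nuclei, leaving no leftover nucleus pixels at the hole
# borders for the inpainting model to copy local patterns from.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
dilated = cv2.dilate(mask, kernel, iterations=2)

# Whiten the dilated regions to build the inpainting input.
inpaint_input = image.copy()
inpaint_input[dilated > 0] = 255
cv2.imwrite("inpaint_input_dilated.png", inpaint_input)
```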
Apart from the guess above, I can't see why it would reconstruct the nuclei so vividly; I can't distinguish the reconstructed ones at a glance. You could also try other inpainting methods to see whether they produce different results.
Yes, I used the inpainting model that you mentioned.
The morphological dilation helps, but it cannot completely erase the nuclei in my experiments.
Do you have any weights for the inpainting model? Could you share them with me?
Also, I'll try other inpainting methods.
Thanks.
Unfortunately, I can't access my previous workspace anymore because I have left the lab. You could look through the inpainting repo's logs to see whether other versions are available there. As I remember, I simply used the version provided in their repo.
Thanks for your advice. I'll try.