Sad to see that. Maybe you can try changing the config from:
"arch": { "type": "PANModel", "model_name": "TripleBranchWithSpecificConv_HLNoP", "loss_name": "many_loss_l1", "args": { "backbone": "resnet18", "fpem_repeat": 2, "pretrained": true, "segmentation_head": "FPEM_FFM", "is_dct": false, "is_light": true }
to
"arch": { "type": "PANModel", "model_name": "TripleBranchWithSpecificConv", "loss_name": "many_loss_l1", "args": { "backbone": "resnet18", "fpem_repeat": 2, "pretrained": true, "segmentation_head": "FPEM_FFM", "is_dct": false, "is_light": true }
and see how it goes. I think the "_HLNoP" suffix in model_name means "High&Low Level Encoder and No Performer", which is why the output looks awful.
If you can share some example training triplets from your dataset, we can also check whether the problem lies in the organisation of your dataset.
And the ideal results should be like:
For a moiré classification method, you can check Section V.D (Moiré Image Identification) of "Doing More With Moiré Pattern Detection in Digital Photos", IEEE Transactions on Image Processing, vol. 32, 2023, for more detail.
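As a rough idea of how the predicted moiré layer could drive such a classification, here is a hedged sketch. The thresholds and the rule itself are assumptions for illustration, not the criterion from the paper:

```python
import numpy as np

def looks_like_moire(pred_layer, active_thresh=30, ratio_thresh=0.05):
    """Hypothetical rule of thumb: treat an image as a moire photo if enough pixels
    of the predicted moire layer are clearly activated. The actual identification
    criterion is described in Sec. V.D of the TIP 2023 paper and may differ."""
    pred = np.asarray(pred_layer, dtype=np.float32)
    active_ratio = float((pred > active_thresh).mean())
    return active_ratio > ratio_thresh
```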
Hope that can be helpful!
Hello @Siztas, thank you for your valuable help!
I've tried other settings, but I don't think I've tried the one you mentioned (TripleBranchWithSpecificConv).
I will undertake new training to improve the results obtained.
😁
To generate the dataset, I first used the moirelayer.py tool. The images below correspond to the first and second rows, respectively: layers_ori and layers_ori_pattern. I did a brief review of the images to remove those with little or no visible moire pattern.
Then, I added a directory of natural images (images without moire effect or with natural moire effect).
All images have been resized to 320x320.
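For reference, the resizing step looked roughly like this (a minimal sketch using OpenCV; the directory paths are placeholders):

```python
import os
import cv2

src_dir = "/path/to/original/images"   # placeholder paths
dst_dir = "/path/to/resized/images"
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    img = cv2.imread(os.path.join(src_dir, name))
    if img is None:  # skip files OpenCV cannot read
        continue
    resized = cv2.resize(img, (320, 320))  # resize to 320x320 as described above
    cv2.imwrite(os.path.join(dst_dir, name), resized)
```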
So, I used the moire_data_generation-3.py tool to generate the train and test directories containing img, moire, and combine images. Then, I trained MoireDet.
Did I get any steps wrong? Please feel free to ask questions.
Well, it seems like these are photos shot by yourself? MoireScape contains training triplets (image with moire, moire layer, image without moire), and to generate a dataset for MoireDet training using your own photos, you need to obtain moire edge layers (by running moirelayer.py on pure moire images, which were obtained by shooting a white screen). Then go to moire_data_generation-3.py and change the code in:
# STEP - 1: Define the original dataset
# Moire imgs means the original moire camera images
moire_imgs_dir = '/your/own/screen-shoot/moire/images'
# Moire pattern means the extracted moire's edge By Cong Yang
# Note: the files generated here have exactly the same names as the original moire images
# (edge layers obtained by running moirelayer.py on pure screen-shot moire images)
moire_patterns_dir = '/your/moire/edge/layers/images'
# Nature images means pure photos without any moire.
natures_imgs_dirs = [
    '/coco',
    '/imagenetsmall',
    '/retail',
    '/voc',
    '/also/can/be/your/own/images'  # here you don't need screen-shot photos, just use original images
]
# Destination directory for the generated dataset
dst_dir = "/where/you/want/to/restore/your/dataset"
and try to run it. If nothing goes wrong, you should see output like We will generate ( ) images, and if you go to the dst_dir you set you will find a generated folder; the dataset will be lying there.
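If it helps, here is a quick sanity check after generation. The exact folder nesting is an assumption based on the train/test and img/moire/combine folders mentioned in this thread; adjust the paths to whatever the script actually writes:

```python
import os

dst_dir = "/where/you/want/to/restore/your/dataset"  # same value as in the script above

# Assumed layout: <dst_dir>/generated/<train|test>/<img|moire|combine>
for split in ("train", "test"):
    for sub in ("img", "moire", "combine"):
        folder = os.path.join(dst_dir, "generated", split, sub)
        count = len(os.listdir(folder)) if os.path.isdir(folder) else 0
        print(f"{split}/{sub}: {count} files")
```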
BTW, the images you showed suggest that you want to use some real screen-shot photos and screen-shot moire for training. That's cool, but obtaining the moire layer can be hard. My personal advice is to shoot the screen at several specific angles, and for each angle also shoot a white-screen photo. You can then try to use moirelayer.py to get the moire edge layer for all the photos shot at that angle.
But this can be hard work.
If anything goes wrong, please let us know. Hope your training is successful this time!
Thank you for your quick response to help me! After reading your report, I identified some mistakes I made.
I was unable to access MoireScape because the link leads to BaiduPan and I was unable to register an account to download the images.
Furthermore, I had understood that the moire edge images should be made from the background images and that the white-screen images were just a complement to the dataset (like an augmentation). This certainly affected the quality of my dataset.
So, I believe I now understand correctly! Thank you very much, I will try again! 🚀
Maybe I was misleading here? Actually I mean that all the moire in your dataset should be obtained from the white-screen photos. Using moirelayer.py on background images (e.g. images from COCO) will generate images containing both the moire layer and the edges of the objects in the photos, but using it on a white-screen photo will not, because there are no edges to detect on a white screen. moire_data_generation-3.py then pastes these moire edge layers, after rotation and some other transforms, onto your background images (e.g. images from COCO). This is what we do to generate MoireScape. For your own dataset, following a general pipeline like this can be helpful. Try to do it! 🚀🚀
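To illustrate the point, here is a toy edge-detection sketch (this is not the actual extraction done by moirelayer.py, and the file names are placeholders; it only shows why a white-screen shot yields a clean moire edge layer while a content photo does not):

```python
import cv2

def toy_edge_layer(path):
    """Detect edges in a grayscale image with Canny. On a white-screen moire shot the
    only edges are the moire fringes; on a content photo (e.g. COCO) scene edges mix in."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Canny(gray, 50, 150)

moire_only = toy_edge_layer("white_screen_shot.jpg")  # clean moire edge layer
mixed = toy_edge_layer("coco_photo.jpg")              # moire edges + object edges, unusable as ground truth
```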
Ok I got it! Thanks again! 🚀🚀
Hello, @Siztas
I have retrained the model following your instructions. However, during prediction, the resulting images contain both moiré information and the image itself, rather than just the moiré layer. I am still investigating the cause of this issue.
Additionally, I noticed you have released a new repository. Should I use the same configuration you proposed here in the forum to conduct training in your new MoireDetPlus repository?
Thank you for your valuable assistance!
Hmmmm, in this case maybe I can upload the pretrained model weights of MoireDet to Google Drive. I'll send you the link when I've finished the upload. And I can try to upload the MoireScape dataset to Google Drive as well. Actually, sometimes MoireDet will give predictions containing the edges of the image, but in my experience that doesn't occur often. You can wait for the pretrained weights and use them for prediction.
Ok, thanks, I'll wait.
Hi @nataliameira, I finally found the weights and uploaded them. They are at https://drive.google.com/file/d/1YiNz7k9y6oD3hNgI4jXEJRrW6wbRLihK/view?usp=sharing Hope it can be helpful!
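For anyone who grabs the file, a hedged sketch of loading it (the checkpoint layout is an assumption: many PyTorch training templates wrap the weights in a 'state_dict' key, but check what torch.load actually returns for this file):

```python
import torch

ckpt = torch.load("MoireDet_pretrained.pth", map_location="cpu")  # file name is a placeholder
# Unwrap a 'state_dict' key if present, otherwise assume the file is a bare state dict.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
# model = ...  # build the network from the repo's config (e.g. TripleBranchWithSpecificConv) first
# model.load_state_dict(state_dict)
```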
Hello @Siztas,
I greatly appreciate your valuable assistance!
The results from my customized dataset contained additional image information. The weights you provided significantly improve the prediction. This is wonderful!
Thank you once again! 👋
Could you share the MoireScape dataset?
Hello community! :wave:
I couldn't get the dataset through Baidu Pan, I couldn't get the pre-trained model, and I also couldn't get any answers here.
So, I trained the MoireDet model using my own dataset with moire images and natural images.
Additionally, I used the following configuration in the training config.json file:
I trained for 145 epochs (fewer epochs provided inferior results).
In the end, I didn't get any significant results. Just "smudges" in the moire prediction. For example:
What did I do wrong? If you have managed to reproduce this repository, please give me tips:
Your help will be appreciated! I hope the community continues to develop and contribute to this project!
Ok, now I'm going to continue my fake faces project hehehe Until later!! :grin: