EnVision-Research / Defect_Spectrum

Defect Spectrum: A Granular Look of Large-Scale Defect Datasets with Rich Semantics [ECCV2024]
https://envision-research.github.io/Defect_Spectrum/
Apache License 2.0

I got the program running, but the results are not so good; maybe I need to try training the model with different params. Thanks #4

Open chenchaohui opened 1 month ago

AndysonYs commented 1 month ago

Hi. Would you like to show us some details about your project? Are you using defect-gen on your dataset?

chenchaohui commented 1 month ago

When I use defect-gen on my own dataset, the results are abnormal, but when I use the Hugging Face data, the results are normal.

chenchaohui commented 1 month ago

> Hi. Would you like to show us some details about your project? Are you using defect-gen on your dataset?

Hello, AndysonYs. When I use my own dataset, the results are abnormal; the generated data is very different from the input data.

chenchaohui commented 1 month ago

> Hi. Would you like to show us some details about your project? Are you using defect-gen on your dataset?

I use labelme to generate the mask data and then transform it into the converted ground-truth mask, but the training result is very strange.

chenchaohui commented 1 month ago

> Hi. Would you like to show us some details about your project? Are you using defect-gen on your dataset?

I need your help. How can I communicate with you soon? Could you give me your WeChat or QQ number? Thank you very much.

chenchaohui commented 1 month ago

> Hi. Would you like to show us some details about your project? Are you using defect-gen on your dataset?

Maybe there are some mistakes in how I made my own training masks, so I may need your help.

chenchaohui commented 1 month ago

> Hi. Would you like to show us some details about your project? Are you using defect-gen on your dataset?

I rented a cloud server with 4 RTX 3090s. However, the training speed with one RTX 3090 is faster than with four RTX 3090s?

AndysonYs commented 1 month ago

> Hello, AndysonYs. When I use my own dataset, the results are abnormal; the generated data is very different from the input data.

Are you using the two-stage defect-gen (a combination of large- and small-receptive-field models) on your data? First, try using the large-receptive-field model only and validate its performance. If the two-stage defect-gen fails but the large model works well, then you should adjust the hyper-param that sets the switch point between the two models and rely more on the large-receptive-field model.

AndysonYs commented 1 month ago

To adapt defect-gen to your own dataset, you may need to change some architecture hyper-params. For example, if your data has a higher resolution, you need to add more down-sampling layers in the diffusion UNet.
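For a concrete sense of what "more down-sampling layers" means: in guided-diffusion-style UNets (a convention many diffusion codebases follow; the exact config names in this repo may differ), each extra entry in `channel_mult` adds one 2x down-sampling stage, so higher-resolution inputs get deeper networks while the bottleneck stays around 8x8. A minimal sketch under that assumption:

```python
# Sketch of how UNet depth typically scales with image resolution in
# guided-diffusion-style configs (names here follow that convention and
# may differ from this repo's actual config keys).

def channel_mult_for(image_size):
    """Per-level channel multipliers; each extra level is one more
    2x down-sampling stage."""
    if image_size == 64:
        return (1, 2, 3, 4)             # 3 down-samplings: 64 -> 8
    elif image_size == 128:
        return (1, 1, 2, 3, 4)          # 4 down-samplings: 128 -> 8
    elif image_size == 256:
        return (1, 1, 2, 2, 4, 4)       # 5 down-samplings: 256 -> 8
    elif image_size == 512:
        return (0.5, 1, 1, 2, 2, 4, 4)  # 6 down-samplings: 512 -> 8
    else:
        raise ValueError(f"unsupported image size: {image_size}")

def bottleneck_size(image_size):
    """Bottleneck spatial size = image_size / 2**(levels - 1)."""
    return image_size // 2 ** (len(channel_mult_for(image_size)) - 1)
```

The point is that a 256x256 crop wants more down-sampling levels than a 64x64 one, so the deepest feature map keeps roughly the same spatial extent.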

AndysonYs commented 1 month ago

> Maybe there are some mistakes in how I made my own training masks, so I may need your help.

Could you tell us some features of your dataset, such as the amount of data, the resolution, and the number of defects? You can also post some examples here if available.

AndysonYs commented 1 month ago

> I rented a cloud server with 4 RTX 3090s. However, the training speed with one RTX 3090 is faster than with four RTX 3090s?

It seems weird to me. Did you just change `nproc_per_node`? If you changed it from 1 to 4 without modifying any other hyper-params, then you are training for 4 times as much.
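To spell out the arithmetic behind that remark: with `torch.distributed.launch`, each process runs the full training loop on its own GPU, so the effective batch size (and total samples seen per run) scales with `nproc_per_node` unless you also rescale the iteration count or the per-GPU batch. A small sketch (the variable names are illustrative):

```python
# Why 4 GPUs can look "slower": each of N processes does the same number
# of iterations, so total work is N times a single-GPU run unless you
# rescale. batch_size here is the per-GPU batch, as in most DDP scripts.

def effective_batch(per_gpu_batch, nproc_per_node):
    return per_gpu_batch * nproc_per_node

def samples_seen(per_gpu_batch, nproc_per_node, iterations):
    return effective_batch(per_gpu_batch, nproc_per_node) * iterations

# Same iteration count on 1 vs 4 GPUs trains on 4x the samples:
one_gpu  = samples_seen(4, 1, 100_000)   # 400,000 samples
four_gpu = samples_seen(4, 4, 100_000)   # 1,600,000 samples

# To match the 1-GPU run, cut the iteration count (or per-GPU batch) by 4x:
matched = samples_seen(4, 4, 25_000)     # 400,000 samples
```

So the 4-GPU run is not slower per sample; it simply does four times as much work per wall-clock run at the same iteration count.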

chenchaohui commented 1 month ago

> I rented a cloud server with 4 RTX 3090s. However, the training speed with one RTX 3090 is faster than with four RTX 3090s?

> It seems weird to me. Did you just change `nproc_per_node`? If you changed it from 1 to 4 without modifying any other hyper-params, then you are training for 4 times as much.

```
CUDA_VISIBLE_DEVICES="0,1,2,3" \
python -m torch.distributed.launch \
    --nproc_per_node=4 \
```

These are the params I use. Are there any other params I should change? Can you give me the other param names?

chenchaohui commented 1 month ago

> Could you tell us some features of your dataset, such as the amount of data, the resolution...

The features of my dataset are as follows (example defect images: fracture_crop_07_24bit_rgb, scatter_13_24bit). Resolution: the original resolution is 3840×5120; I just crop the defect part to 256×256. The results I generated:

[generated sample images: samples_image_000000_0_0 through samples_image_000000_3_0]

chenchaohui commented 1 month ago

> Are you using the two-stage defect-gen (a combination of large- and small-receptive-field models) on your data? First, try using the large-receptive-field model only and validate its performance. If the two-stage defect-gen fails but the large model works well, then you should adjust the hyper-param that sets the switch point between the two models and rely more on the large-receptive-field model.

Yes, I use the two-stage defect-gen for my data. Does the hyper-param for the switch point of the two models mean the param `--step_inference 400`? This one?

zhifeichen097 commented 1 month ago

> Yes, I use the two-stage defect-gen for my data. Does the hyper-param for the switch point of the two models mean the param `--step_inference 400`? This one?

From the results you provided, it seems the smaller model has too much involvement, which may disrupt the overall geometry of the image. I think you should start with the large-receptive-field model only (exclude the small model: you can do that by commenting out the small model and setting `step_inference` to 0, meaning you are only using the large model for inference). After verifying the image quality, you can start tuning the switching step by adjusting the same parameter. The switching parameter may work differently on your dataset than on ours.
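The switching schedule being discussed can be sketched roughly like this (function and variable names are illustrative, not the repo's actual API): the large-receptive-field model handles the early denoising steps that fix global geometry, the small model takes over for the final `step_inference` steps to refine local detail, and `step_inference = 0` disables the small model entirely.

```python
# Hypothetical sketch of the two-stage sampling schedule described above.
# Names are illustrative; they are not the repo's API.

def two_stage_sample(x_T, large_model, small_model, denoise_step,
                     total_steps=1000, step_inference=400):
    """Run reverse diffusion, switching models at `step_inference`.

    The large-receptive-field model denoises from t = total_steps - 1
    down to t = step_inference (global geometry); the small model
    handles the remaining steps t < step_inference (local detail).
    step_inference = 0 means the large model runs the whole trajectory.
    """
    x = x_T
    for t in reversed(range(total_steps)):
        model = small_model if t < step_inference else large_model
        x = denoise_step(model, x, t)
    return x
```

With `step_inference=0` this reduces to large-model-only inference, which is the debugging configuration suggested above; raising it hands more of the trajectory to the small model.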

chenchaohui commented 1 month ago

> From the results you provided, it seems the smaller model has too much involvement, which may disrupt the overall geometry of the image. I think you should start with the large-receptive-field model only (exclude the small model: comment it out and set `step_inference` to 0). After verifying the image quality, you can start tuning the switching step by adjusting the same parameter.

When I use just the large-receptive-field model for inference, the result is also not so good, just like the images I supplied above. Why?

chenchaohui commented 1 month ago

> When I use just the large-receptive-field model for inference, the result is also not so good, just like the images I supplied above.

I think my data type is just like your screw-thread data. Your paper shows that the defect-gen results are amazing, but my experiments show a bad result. What should I do now to achieve results like yours? Dear author, I need your help; please help me. Thank you very much.