Open chenchaohui opened 1 month ago
When I use defect-gen on my own dataset, the results are abnormal, but when I use the Hugging Face data, the results are normal.
Hi. Would you like to show us some details about your project? Are you using defect-gen on your dataset?
Hello AndysonYs, when I use my own dataset the result is abnormal; the generated data is very different from the input data.
I use labelme to generate the mask data and then convert it to the ground-truth mask format, but the training result is very strange.
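For reference, here is a rough sketch of what the labelme-to-mask conversion step can look like. This is an assumption about the pipeline, not the repo's actual converter: labelme stores polygon vertices in a JSON file under `shapes`, and each polygon is rasterized into a binary (0/255) ground-truth mask. The helper names (`polygon_to_mask`, `labelme_to_mask`) are hypothetical.

```python
import json

def polygon_to_mask(points, width, height):
    """Rasterize one polygon (list of [x, y] vertices) into a binary mask
    using the even-odd rule. Returns rows of 0/255 values."""
    mask = [[0] * width for _ in range(height)]
    n = len(points)
    for y in range(height):
        for x in range(width):
            cx, cy = x + 0.5, y + 0.5  # test the pixel center
            inside = False
            for i in range(n):
                x1, y1 = points[i]
                x2, y2 = points[(i + 1) % n]
                if (y1 > cy) != (y2 > cy):
                    # x-coordinate where this edge crosses the scanline
                    xc = x1 + (cy - y1) * (x2 - x1) / (y2 - y1)
                    if cx < xc:
                        inside = not inside
            if inside:
                mask[y][x] = 255
    return mask

def labelme_to_mask(json_path, width, height):
    """Merge every polygon shape in a labelme annotation into one mask."""
    with open(json_path) as f:
        ann = json.load(f)
    mask = [[0] * width for _ in range(height)]
    for shape in ann.get("shapes", []):
        if shape.get("shape_type", "polygon") != "polygon":
            continue
        poly = polygon_to_mask(shape["points"], width, height)
        for y in range(height):
            for x in range(width):
                if poly[y][x]:
                    mask[y][x] = 255
    return mask
```

A common failure mode here is a mask that is not strictly binary (e.g. anti-aliased edges or label colors instead of 0/255), which is worth checking if training looks strange.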
I need your help. How can I communicate with you soon? Could you give me your WeChat or QQ number? Thank you very much.
Maybe there are some mistakes in how I make my own training masks, so I may need your help.
I rented a cloud server with four RTX 3090s. However, training with one RTX 3090 is faster than with four RTX 3090s?
Are you using the two-stage defect-gen (the combination of the large and small receptive-field models) on your data? First, try the large-receptive-field model only and validate its performance. If the two-stage defect-gen fails but the large model alone works well, then you should adjust the hyper-parameter controlling the switch point between the two models and rely more on the large-receptive-field model.
To adapt to your own dataset, you may also need to change some architecture hyper-parameters. For example, if your data has a higher resolution, you need to add more down-sampling layers in the diffusion UNet.
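As a rough illustration of "more down-sampling layers for higher resolution": diffusion UNets typically keep the bottleneck feature map at a fixed small size, so the number of down-sampling stages grows with the input resolution. The helper below is a hypothetical sketch (the bottleneck size of 8 and the multiplier pattern are assumptions, not the repo's actual defaults, which should be checked against its config):

```python
# Hypothetical helper: pick the number of UNet down-sampling stages so the
# bottleneck feature map stays at a fixed small size (here 8x8), and return
# one channel multiplier per resolution level. The multiplier pattern mirrors
# the common diffusion-UNet convention of widening as resolution shrinks.
def unet_channel_mult(image_size, bottleneck=8):
    n_down = 0
    size = image_size
    while size > bottleneck:
        size //= 2
        n_down += 1
    base = [1, 1, 2, 2, 4, 4, 8, 8]
    # one multiplier per level, input level included
    return tuple(base[: n_down + 1])
```

Under these assumptions, moving from 256x256 inputs to 512x512 inputs adds one down-sampling stage (and one more channel multiplier).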
Could you tell me some features of your dataset, e.g. the amount of data, the resolution, and the number of defect types? You can also post some examples here if available.
That seems weird to me. Did you just change nproc_per_node? If you change it from 1 to 4 without modifying any other hyper-parameters, you are simply training 4 times as long.
These are the params I use:

CUDA_VISIBLE_DEVICES="0,1,2,3" \
python -m torch.distributed.launch \
--nproc_per_node=4 \

Are there any other params I should change? Can you give me the other param names?
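To make the scaling point above concrete: with torch.distributed, each of the 4 processes runs the full training loop, so either the per-GPU batch size or the step budget has to be rescaled to keep the total work constant. The helper below is an illustrative sketch (not a flag from the repo); it keeps the global batch size and the total number of samples seen the same as the single-GPU schedule:

```python
# Hypothetical helper: rescale batch size and step count when moving a
# single-GPU training schedule to `world_size` GPUs under DDP.
def rescale_for_ddp(batch_size, total_steps, world_size):
    """Keep the global batch (per_gpu_batch * world_size) and the number of
    samples seen (global batch * steps) equal to the 1-GPU schedule."""
    per_gpu_batch = max(1, batch_size // world_size)
    target_samples = batch_size * total_steps  # samples under the 1-GPU plan
    steps = target_samples // (per_gpu_batch * world_size)
    return per_gpu_batch, steps
```

With the batch split evenly across GPUs, the step count stays the same but each step processes 4 shards in parallel, which is where the wall-clock speedup comes from; keeping the original per-GPU batch and step count instead just multiplies the training length by the GPU count.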
The features of my dataset are as follows: the original resolution is 3840x5120, and I just crop the defect part to 256x256. The results I generated:
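For what it's worth, the crop-around-the-defect step described above can be sketched like this (a hypothetical helper, not the repo's code): pick a 256x256 window centered on the defect's bounding box and clamp it so it never leaves the 3840x5120 image.

```python
# Hypothetical crop helper: given a defect bounding box (x0, y0, x1, y1) in
# the full image, return a crop-size window centered on the defect, clamped
# to the image bounds so the crop is always fully inside the image.
def defect_crop_box(bbox, img_w, img_h, crop=256):
    x0, y0, x1, y1 = bbox
    cx = (x0 + x1) // 2
    cy = (y0 + y1) // 2
    left = min(max(cx - crop // 2, 0), img_w - crop)
    top = min(max(cy - crop // 2, 0), img_h - crop)
    return left, top, left + crop, top + crop
```

One thing worth checking with this kind of cropping is whether the defect always lands near the crop center; if it does, the generative model may learn that positional bias.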
Yes, I use the two-stage defect-gen for my data. Does the hyper-parameter for the switch point of the two models mean the param --step_inference 400, this one?
From the results you provided, it seems the smaller model has too much involvement, which may disrupt the overall geometry of the image. I think you should start with the large-receptive-field model only (exclude the small model; you can do that by commenting out the small model and setting step_inference to 0, meaning you use only the large model for inference). After verifying the image quality, you can start tuning the switching step by adjusting the same parameter. The switching parameter may work differently on your dataset than on ours.
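The switching logic being discussed can be sketched as a loop like the one below. This is a minimal illustration under my assumptions about the sampler (the function and argument names are not the repo's actual API): the large-receptive-field model denoises from the last step down to the switch step, then the small model takes over, and a switch step of 0 disables the small model entirely.

```python
# Minimal sketch of two-stage diffusion sampling: the large model handles
# timesteps t >= switch_step (global structure), the small model handles the
# remaining low-noise steps (local texture). switch_step=0 means large-only.
def two_stage_sample(x, large_model, small_model, total_steps, switch_step):
    calls = {"large": 0, "small": 0}
    for t in reversed(range(total_steps)):
        if t >= switch_step:
            x = large_model(x, t)
            calls["large"] += 1
        else:
            x = small_model(x, t)
            calls["small"] += 1
    return x, calls
```

Under this reading, --step_inference 400 with 1000 total steps would give the small model the final 400 denoising steps, so lowering it shifts work toward the large model.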
When I use just the large-receptive-field model for inference, the result is also not good, just like the image I supplied above. Why?
I think my data is just like your screw-thread data. Your paper shows amazing defect-gen results, but my experiment gives a bad result. What should I do now to achieve results like yours? Dear author, I need your help. Please help me, thank you very much.