Closed caihaocong closed 6 months ago
Hello, you can modify "img_size" in args1.json.
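For reference, a minimal sketch of what the relevant fragment of `args1.json` might look like; only the `"img_size"` key is confirmed in this thread, the surrounding structure is an assumption:

```json
{
  "img_size": [256, 256]
}
```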
Thank you for your reply.
This is the snippet I am looking at (comment translated from Chinese):

```python
if channel_mults == "":
    if img_size == 512:
        channel_mults = (0.5, 1, 1, 2, 2, 4, 4)
    elif img_size == 256:  # default parameters
        channel_mults = (1, 1, 2, 2, 4, 4)
    elif img_size == 128:
        channel_mults = (1, 1, 2, 3, 4)
    elif img_size == 64:
        channel_mults = (1, 2, 3, 4)
    elif img_size == 32:
        channel_mults = (1, 2, 3, 4)
    else:
        raise ValueError(f"unsupported image size: {img_size}")
```
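One hedged way to extend this branch: 1060 is not a power of two (it only halves cleanly twice, 1060 → 530 → 265), so a U-Net with several downsampling stages cannot use it directly; 1024 is the natural choice. A sketch of the extended lookup, where the function name and the multipliers chosen for 1024 are my assumptions (mirroring how the 512 case adds one extra stage below 256), not values from the repo:

```python
def get_channel_mults(img_size):
    """Return per-stage channel multipliers for a given square image size.

    The 1024 entry is an assumed extension: it adds one more downsampling
    stage on top of the repo's 512 configuration. All other entries match
    the original snippet.
    """
    if img_size == 1024:
        return (0.25, 0.5, 1, 1, 2, 2, 4, 4)  # assumption, not from the repo
    if img_size == 512:
        return (0.5, 1, 1, 2, 2, 4, 4)
    if img_size == 256:  # default parameters
        return (1, 1, 2, 2, 4, 4)
    if img_size == 128:
        return (1, 1, 2, 3, 4)
    if img_size in (32, 64):
        return (1, 2, 3, 4)
    raise ValueError(f"unsupported image size: {img_size}")
```

Note that sizes like 1060 would still hit the `ValueError`, which is arguably the right behavior for a model whose spatial resolution must halve at every stage.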
Hello, is 512 the largest supported size? How can I change it to "img_size": [1060,1060]?
Sorry, I haven't tried such a large image size. You might try resizing the images down, since larger images bring much greater training overhead.
Thank you very much. But I need to detect particularly tiny defects. I will try another solution.
Okay, here is a suggestion you might consider: patchify the image, for example, divide a 1024×1024 image into four 512×512 images, and then perform training and inference on the patches.
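The patchify idea above can be sketched with plain NumPy; the function names here are illustrative, not part of the repo:

```python
import numpy as np

def patchify(img, patch):
    """Split an H x W x C image into non-overlapping patch x patch tiles.

    Assumes H and W are both divisible by `patch` (e.g. 1024 and 512).
    Returns an array of shape (num_tiles, patch, patch, C).
    """
    h, w, c = img.shape
    return (img.reshape(h // patch, patch, w // patch, patch, c)
               .swapaxes(1, 2)
               .reshape(-1, patch, patch, c))

def unpatchify(tiles, h, w):
    """Reassemble tiles produced by patchify back into an H x W x C image."""
    _, patch, _, c = tiles.shape
    return (tiles.reshape(h // patch, w // patch, patch, patch, c)
                 .swapaxes(1, 2)
                 .reshape(h, w, c))

# Example: a 1024x1024 RGB image becomes four 512x512 patches.
img = np.arange(1024 * 1024 * 3, dtype=np.float32).reshape(1024, 1024, 3)
tiles = patchify(img, 512)      # shape (4, 512, 512, 3)
restored = unpatchify(tiles, 1024, 1024)
```

One caveat of this approach for defect detection: a defect lying on a patch boundary is split across tiles, so overlapping patches with stitching at inference time may work better in practice.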
Ok, thanks a lot.
Hello, Zhanghui, thank you for your open-source code.
I have a question about the "img_size" parameter. How can I change "img_size": [256,256] to "img_size": [1060,1060] or "img_size": [1024,1024]? Do I need to modify the original model? Can you give me some suggestions?