Open SaiJeevanPuchakayala opened 4 months ago
Hi @SaiJeevanPuchakayala 👋🏻 Looks like ultralytics is resizing images before inference. This is probably because you passed imgsz=2048 as an argument during training. Try updating result = model(image, device='cuda')[0] to result = model(image, device='cuda', imgsz=640)[0] and rerun the script.
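For reference, here is a rough sketch of the resize ultralytics applies before inference: the longer side is scaled down to imgsz, aspect ratio is kept, and the result is padded to a multiple of the model stride. This is simplified letterbox arithmetic for illustration, not the library's actual code.

```python
# Simplified letterbox arithmetic (illustration only, not ultralytics' code):
# scale the longer side to imgsz, keep aspect ratio, pad to a stride multiple.

def letterbox_shape(h, w, imgsz=640, stride=32):
    scale = imgsz / max(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    # pad each side up to the next multiple of the stride
    pad_h = (stride - new_h % stride) % stride
    pad_w = (stride - new_w % stride) % stride
    return new_h + pad_h, new_w + pad_w

letterbox_shape(2048, 2048, imgsz=640)  # → (640, 640)
```

So with imgsz=640, a 2048x2048 input reaches the network at 640x640 regardless of its original resolution.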
Thanks for the fix @SkalskiP.
The line of code below is working for me.
result = model(image, device='cuda', imgsz=512)[0]
But while stitching the 512x512 slices back into the 2048x2048 image, a few detections and annotations near the slice boundaries are missing or rendered improperly, as shown in the image below.
@SaiJeevanPuchakayala, what overlap_ratio_wh did you use? (0.2, 0.2)? It looks like there is no overlap at all.
@SkalskiP I actually used an overlap_ratio_wh of (0.2, 0.2) for both training and inference, and I also experimented with increasing and decreasing it along with the overlap filters (NMS, NMM, None). Still, visible seam lines remain between some detections and their corresponding segments at inference, as shown in the image below.
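For anyone following along, the overlap can be sanity-checked with some simple tile arithmetic. This is not supervision's actual implementation, just an illustration of how overlap_ratio_wh translates into shared pixels between neighbouring slices:

```python
# Rough sketch of how a slicer lays out tiles for a given overlap ratio
# (illustration only, not supervision's exact implementation).

def slice_offsets(image_size, slice_size, overlap_ratio):
    step = int(slice_size * (1 - overlap_ratio))  # stride between slice origins
    offsets = list(range(0, image_size - slice_size + 1, step))
    # make sure the last slice reaches the image border
    if offsets[-1] + slice_size < image_size:
        offsets.append(image_size - slice_size)
    return offsets

xs = slice_offsets(2048, 512, 0.2)
# step = 409 px, so consecutive 512-px slices share ~103 px of context
```

With a (0.2, 0.2) ratio every pair of neighbouring slices does share pixels, so objects cut by a slice border should appear whole in at least one slice; if seams still show up, the problem is more likely in how the per-slice detections are merged.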
@SkalskiP I'm still facing the same issue. Is there any resolution for this that you can suggest?
@SaiJeevanPuchakayala, have you tried playing with iou_threshold as well?
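For context, iou_threshold governs how aggressively duplicate boxes from overlapping slices are merged. A minimal non-maximum-suppression sketch (a simplified stand-in, not supervision's implementation):

```python
import numpy as np

# Minimal NMS sketch showing what iou_threshold controls when detections
# from overlapping slices are merged (illustration only).

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, iou_threshold=0.5):
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    for i in order:
        # keep a box only if it does not overlap a kept box too much
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

# two near-duplicate boxes from adjacent slices collapse into one
boxes = [(0, 0, 100, 100), (5, 0, 105, 100), (300, 300, 400, 400)]
scores = [0.9, 0.8, 0.7]
keep = nms(boxes, scores, iou_threshold=0.5)  # the 5-px-shifted duplicate is dropped
```

A lower iou_threshold merges boxes more aggressively across slice seams; too high a value can leave duplicate, slightly offset detections along the boundaries.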
Yes @SkalskiP, I've tried that as well, but no luck.
@SaiJeevanPuchakayala, can you provide us with your model, image, and code?
Hi @SkalskiP,
Thank you for your continuous support. I have uploaded the model, image, and code to a Google Drive folder for your reference. You can access it here: https://drive.google.com/drive/folders/1nn08DGO7-I1rRX-5Czm_tFn7hWV5J9IN?usp=sharing
Here are some additional details about the model:
Please have a look and let me know if you need any additional information.
@SaiJeevanPuchakayala, realistically, I won't be able to take a deeper look into it. Sorry, I have too much work to do with the upcoming YT video. This would need to wait for @LinasKo to get back next week. Maybe @onuralpszr or @hardikdava have some time to take a look into it?
@SkalskiP sure, I'm going to take a look. Let me set it up in Colab and start playing with it.
@onuralpszr, you are the GOAT! 🐐
Likewise! :) Let me do some testing and get back to the comments with my findings.
I'll update the name of the issue to better reflect what's going on.
Hi @onuralpszr 👋, have you had time to do some testing?
Yes, I'm looking into it. Let me test a bit more and I'll come back to you; I had some busy tasks to handle as well, sorry for the slight delay.
Hi @SaiJeevanPuchakayala and @onuralpszr, have you managed to track down the problem?
Hey @SkalskiP, not yet, still facing the same issue with grains not being stitched back.
I was stuck for a bit but made progress afterwards. I've had a busy work week; let me tackle it today and come back with some results to discuss. Sorry for the delay.
@onuralpszr Let me know if you need an extra pair of hands on this one.
No worries, @onuralpszr ;) I'm just curious why it does not work as expected.
@onuralpszr and @hardikdava, have you guys got time to track down the problem?
Hey @onuralpszr, can you post your findings on this issue here? I can take a look later today. Thanks.
I am experiencing an issue where the model is performing inference on the full image resolution of 2048x2048 instead of the sliced resolution of 512x512 as intended. Below are the details of the function and the problem encountered.
Output
Issue
The model is performing inference on the full image resolution (2048x2048) instead of the sliced resolution (512x512) as specified in the InferenceSlicer. This results in longer inference times and processing on larger image chunks than intended.
Steps to Reproduce
Expected Behavior
The model should perform inference on 512x512 slices of the image, as specified in the InferenceSlicer.
Actual Behavior
Inference is performed on the full 2048x2048 image resolution.
Environment
Additional Context
Any insights or suggestions to ensure the model performs inference on the specified 512x512 slices would be greatly appreciated.
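One workaround sketch, assuming the ultralytics predict call accepts imgsz: pass imgsz explicitly inside the slicer callback so each 512x512 slice is not resized back up to the 2048 training resolution. The stub model below is hypothetical and stands in for the real YOLO model only to verify what the callback forwards.

```python
import numpy as np

# Hypothetical workaround: fix the inference size inside the InferenceSlicer
# callback. In the real script, `model` would be the ultralytics YOLO model
# and the callback would wrap its result with sv.Detections.from_ultralytics.

def make_callback(model, imgsz=512):
    def callback(image_slice):
        # forward the slice with an explicit inference size so it is
        # not letterboxed up to the training resolution (2048)
        return model(image_slice, imgsz=imgsz)
    return callback

seen = []  # records (slice shape, imgsz) the "model" receives

def stub_model(img, imgsz=None):
    seen.append((img.shape[:2], imgsz))
    return []

cb = make_callback(stub_model, imgsz=512)
cb(np.zeros((512, 512, 3), dtype=np.uint8))
```

If the slices still reach the model at 2048x2048 with this in place, the resize is happening before the callback, which would point at the slicer configuration rather than the predict call.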