Summary
Previously, precision and recall were both around 50%. Shape classification accuracy dropped from 80% to 60%, but that still seems acceptable.
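For context, precision and recall here are the standard detection metrics over true/false positives and false negatives. A minimal sketch of the computation (the counts below are made up purely for illustration, not our actual evaluation numbers):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Standard detection metrics from raw counts.

    precision = TP / (TP + FP): of the boxes we predicted, how many were right.
    recall    = TP / (TP + FN): of the real objects, how many we found.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


# Hypothetical counts chosen to land at ~50%, matching the old numbers.
p, r = precision_recall(tp=50, fp=50, fn=50)
print(p, r)  # -> 0.5 0.5
```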
Honestly, I have no idea why this works. I was running inference to generate visualizations for the FRR, and our model was producing very inconsistent-looking video sequences. I remembered that when @Dat-Bois ran last year's model with the yolov8 CLI on one of our IRL videos, the results looked really good, so I tried the same thing, and somehow we get god-tier bounding box performance. The reason I didn't notice this sooner is that previously I always ran with tiling on; this time it occurred to me to measure it quantitatively with tiling off.
Just look this over and nitpick code cleanliness and the like, since I'm too tired to review it myself right now.