codeproject / CodeProject.AI-Server

CodeProject.AI Server is a self-contained service that software developers can include in, and distribute with, their applications in order to augment their apps with the power of AI.

Vary inference image size based on model filename #36

Closed relevante closed 1 year ago

relevante commented 1 year ago

Look for imgsz-{size} in model filename and set inference image size to {size}
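The proposed lookup could be sketched like this (the function name and default value are illustrative; the actual PR implementation may differ):

```python
import re

def size_from_filename(filename: str, default: int = 640) -> int:
    """Extract an inference image size from a model filename.

    A name like "myyolov5l6-imgsz-1280.pt" yields 1280; any filename
    without an imgsz-{size} marker falls back to the default.
    """
    match = re.search(r"imgsz-(\d+)", filename)
    return int(match.group(1)) if match else default
```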

relevante commented 1 year ago

I wanted a way to use custom models based on yolov5l6 trained at 1280px, and this seemed like a relatively clean way to make it work while still making it easy for users to just drop models in without having to modify other files or set environment variables, etc.

ChrisMaunder commented 1 year ago

Unfortunately this change won't have any effect downstream. The resolution parameter is a leftover from the version 4 code, which manually created the model detector; the new version, which uses the YOLO Python package, no longer requires it.

relevante commented 1 year ago

@ChrisMaunder Just to clarify, does that mean that the release version will automatically run at the same resolution as the custom model saw during training?

EDIT: just saw your response in the forum. Thanks.

relevante commented 1 year ago

@ChrisMaunder Actually, I just tested to be sure: I am on my fork, and the size parameter does seem to have an effect here.

If I use a model called myyolov5l6-imgsz-1280.pt (which sets size=1280), it takes a total of 160-180ms to process. If I duplicate the file and name it myyolov5l6-nosize.pt (which leaves size=640), it only takes about 105ms. I also get a false detection on the -nosize version that I am not getting on the -imgsz-1280 version.

Just to confirm, I checked the md5 sum of both model files and they are identical except for the filename.

If I'm reading the code correctly, it seems like on line 309 of detect_adapter.py, forward() in AutoShape gets called, which does appear to use the size param.
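The behavior described above is consistent with a keyword argument that defaults to 640 when no size is supplied. A minimal sketch (not the actual YOLOv5 AutoShape code) of that pattern:

```python
def forward(im, size=640):
    """Illustrative stand-in for an inference entry point where `size`
    controls the inference image size and defaults to 640 when the
    caller does not override it."""
    # In real inference code the image would be letterboxed/resized to
    # `size` here; this stub just reports which size would be used.
    return size

# A model whose filename carries no imgsz marker runs at the default;
# passing size=1280 explicitly changes the inference resolution.
default_size = forward("image.jpg")
custom_size = forward("image.jpg", size=1280)
```

This would explain both observations: the larger inference size costs extra time per frame, and the higher resolution can suppress false detections that appear at 640.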