z7r7y7 opened 9 months ago

Hello, thank you for providing this excellent work. However, I have encountered an issue while using the model for inference. I used fifteen images and generated a .pth file with the build_prototypes.ipynb script, from which I load the label names and test class weights. Here are the rest of my settings: `config_file="configs/open-vocabulary/lvis/vitl.yaml", rpn_config_file="configs/RPN/mask_rcnn_R_50_FPN_1x.yaml", model_path="configs/open-vocabulary/lvis/vitl_0069999.pth",`

Image size: 1280x720. Number of recognition categories: 4. Unfortunately, I have noticed that the inference time for each image exceeds 45 seconds. I would like to ask whether this speed is normal, or whether there are any measures I can take to reduce the inference time. Any guidance or suggestions would be greatly appreciated.

I also have the same problem. Has anyone found a fix? Thanks.

I have the same problem. Did you manage to find a solution? @z7r7y7 I have set my K-value to 1 and have confirmed that it is running on my GPU. Setup:

I see that there are numerous open issues inquiring about slow inference time. @mlzxy ?
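When debugging latencies like this, it can help to separate one-time setup cost (model loading, first-call kernel compilation) from steady-state per-image time. Below is a minimal stdlib-only timing sketch; `run_inference` is a hypothetical stand-in for whatever predictor call you are actually making, and the `warmup` count is an assumption about how many initial calls to discard:

```python
import time
from statistics import mean

def time_per_image(run_inference, images, warmup=1):
    """Measure steady-state per-image latency in seconds.

    run_inference: callable taking a single image (hypothetical stand-in
        for the real predictor call).
    warmup: number of initial calls whose timings are discarded, since
        first calls often include one-time setup overhead.
    """
    timings = []
    for i, img in enumerate(images):
        start = time.perf_counter()
        run_inference(img)
        elapsed = time.perf_counter() - start
        if i >= warmup:  # skip warm-up iterations
            timings.append(elapsed)
    return mean(timings) if timings else float("nan")
```

If the first image dominates (e.g. tens of seconds once, then much faster afterwards), the cost is setup rather than per-image inference. Note that for GPU models, timings are only meaningful if the call blocks until the result is actually computed.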