@curtinmjc closed this issue 1 month ago.
@curtinmjc hi there! 😊
Thanks for reaching out and providing detailed information. Let's see if we can get to the bottom of this.
Version Check: First, ensure that you're using the latest version of the Ultralytics packages; discrepancies can sometimes arise from outdated software (see the snippet after this list for one way to print your installed versions).
Parameter Consistency: Double-check that all parameters used in the Python and cURL requests match those in the HUB Preview. Even small differences can lead to varied results.
Environment Differences: The HUB Preview might be using a slightly different environment or settings. Ensure that your local setup mirrors the HUB as closely as possible.
Image Preprocessing: Verify that the image preprocessing steps are consistent across all methods. Differences in resizing, normalization, or other preprocessing steps can affect the results.
Model Version: Confirm that the same model version is being used in both the HUB Preview and your API requests.
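For example, a minimal sketch that prints the locally installed versions:

import platform
import torch
import torchvision
import ultralytics

# Print the locally installed versions for comparison with the API response.
print("python:", platform.python_version())
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("ultralytics:", ultralytics.__version__)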
If you've checked all of the above and the issue persists, it might be helpful to log the requests and responses to see if there are any subtle differences.
Feel free to update us with any findings, and we'll continue to assist you. The YOLO community and the Ultralytics team are always here to help! 🚀
For more detailed information, you can also refer to our documentation.
Best of luck!
Thank you for the quick response to my issue/question. My comments on your items are detailed below. Forgive me if my responses require your help; I am relatively new to Ultralytics and YOLO. Also, I did not mean to close this issue/question.
1. Version Check: First, ensure that you're using the latest version of the Ultralytics packages. Sometimes discrepancies can arise from outdated software.
I'm not sure what you're asking me to check. The only version information I see is what appears in the JSON response to the Postman cURL and Python requests, shown below.
"version": {
    "python": "3.11.10",
    "torch": "2.5.0+cpu",
    "torchvision": "0.20.0+cpu",
    "ultralytics": "8.3.17"
}
2. Parameter Consistency: Double-check that all parameters used in the Python and cURL requests match those in the HUB Preview. Even small differences can lead to varied results.
I was very careful to have parameter consistency. As you can see in my previous screenshots, I used the Example Request code (Python and cURL) provided in the HUB Deploy tab for the Ultralytics Shared Inference API.
3. Environment Differences: The HUB Preview might be using a slightly different environment or settings. Ensure that your local setup mirrors the HUB as closely as possible.
Again, I am not sure what you're asking me to check. I thought my settings were all correct since I used code provided in the Ultralytics HUB Deploy tab.
4. Image Preprocessing: Verify that the image preprocessing steps are consistent across all methods. Differences in resizing, normalization, or other preprocessing steps can affect the results.
As stated previously (and shown in the original screenshots), I simply used the Python and cURL code provided in the HUB Deploy tab. No resizing, normalization, or other preprocessing steps were done to the test image that was dropped in the HUB Preview tab.
5. Model Version: Confirm that the same model version is being used in both the HUB Preview and your API requests.
The model being used is the same since I used the model URL provided in the HUB Deploy tab.
If you've checked all of the above and the issue persists, it might be helpful to log the requests and responses to see if there are any subtle differences.
Please elaborate on what you want me to do. Thanks.
@curtinmjc hello! 😊
Thank you for your detailed response. Let's address each of your points to help you resolve the issue:
Version Check: The version information you provided looks good. Just ensure that your local environment matches these versions. If you haven't already, try updating to the latest versions of Python, PyTorch, and Ultralytics to rule out any version-related issues.
Parameter Consistency: Since you've used the code from the HUB Deploy tab, it seems like the parameters should be consistent. Double-check that there are no additional headers or settings in your requests that might differ from the HUB Preview.
Environment Differences: The HUB might have specific configurations that are not immediately visible. While you've used the provided code, ensure that any environment variables or system settings are aligned with the HUB's setup.
Image Preprocessing: If you're using the provided code without modifications, preprocessing should be consistent. However, if there's any custom code or additional libraries involved, ensure they don't alter the image before inference.
Model Version: Since you're using the model URL from the HUB, this should be consistent. Just make sure there are no cached models or configurations that might differ.
To further investigate, you can log the requests and responses by printing them out in your Python script or using Postman's console. Look for any discrepancies in the request payload or response data.
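For example, a minimal logging sketch in Python, using the same request shape as the Deploy tab's Example Request (the API key, model URL, and image path below are placeholders):

import json
import requests

# Placeholders: substitute your own API key, model URL, and image path.
url = "https://predict.ultralytics.com"
headers = {"x-api-key": "[api-key]"}
data = {"model": "https://hub.ultralytics.com/models/[model-id]", "imgsz": 640, "conf": 0.25, "iou": 0.45}

with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"file": f})

# Log exactly what was sent and what came back.
print("Request data:", data)
print("Status code:", response.status_code)
print("Response body:", json.dumps(response.json(), indent=2))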
If the issue persists, consider testing with a different image or model to see if the problem is specific to certain inputs.
Feel free to reach out with any more questions. The YOLO community and Ultralytics team are here to support you! 🚀
My answers to your latest response.
1. Version Check: The version information you provided looks good. Just ensure that your local environment matches these versions. If you haven't already, try updating to the latest versions of Python, PyTorch, and Ultralytics to rule out any version-related issues.
My Mac has the following versions:
2. Parameter Consistency: Since you've used the code from the HUB Deploy tab, it seems like the parameters should be consistent. Double-check that there are no additional headers or settings in your requests that might differ from the HUB Preview.
There are no additional headers or settings in my requests.
3. Environment Differences: The HUB might have specific configurations that are not immediately visible. While you've used the provided code, ensure that any environment variables or system settings are aligned with the HUB's setup.
What specific configurations are you referring to that are not immediately visible?
4. Image Preprocessing: If you're using the provided code without modifications, preprocessing should be consistent. However, if there's any custom code or additional libraries involved, ensure they don't alter the image before inference.
There is no custom code or additional libraries involved.
5. Model Version: Since you're using the model URL from the HUB, this should be consistent. Just make sure there are no cached models or configurations that might differ.
What kind of model caching are you referring to? I am not doing anything on my Mac other than POSTing a URL with the parameters specified by the Ultralytics model's Deploy tab.
To further investigate, you can log the requests and responses by printing them out in your Python script or using Postman's console. Look for any discrepancies in the request payload or response data.
Please elaborate... I am not sure what you are asking me to do. What are you concerned about that may be happening?
If the issue persists, consider testing with a different image or model to see if the problem is specific to certain inputs.
I have already tested with different models. My latest was with a v11x model trained on the same dataset. Of the 21 different image classes, the Preview tab got every one of them correct. When I tested the very same images using the Python inference test code, 8 of the 21 images were incorrect and different from the results in the Preview tab. FYI, the v11x model had 3 more incorrect images than the v11n model.
Per your request I will try with new and different datasets today.
Please reopen this issue/question. Thanks.
Hi @curtinmjc! 😊
Thank you for your patience and for providing detailed feedback. Let's continue to work through this together.
Version Check: It seems like your local versions are slightly different from the ones in the JSON response. I recommend updating to the latest versions of Ultralytics and PyTorch to ensure compatibility.
Environment Differences: The HUB might have specific backend configurations that aren't visible, such as server-side optimizations or different hardware. While these aren't directly accessible, ensuring your local setup is as close as possible can help.
Model Caching: Sometimes models or configurations might be cached locally. If you're using any local storage or cache mechanisms, ensure they're cleared before running new inferences.
Logging Requests: To log requests and responses, you can print the request payload and response in your Python script. In Postman, use the console to view the request details. This can help identify any subtle differences.
Testing with New Data: It's great that you're testing with different datasets. This can help isolate whether the issue is dataset-specific.
If the issue persists, please ensure it's reproducible with the latest versions. Your insights are invaluable, and the YOLO community and Ultralytics team are here to support you! 🚀
Feel free to update us with any new findings or questions. We're here to help!
In general, I find it difficult to understand why things like Ultralytics and PyTorch versions would affect the results of a cURL request to an Ultralytics model running in the HUB.
Regardless, I created a new dataset using iPhone photos of the paper images (with the YOLO11n Classify model) rather than the non-photo JPEG representations I was using before. This change eliminated the issues I was experiencing. So that is good news. I will update this question if needed in the future.
One further question, if you please: what is the effect of changing the conf parameter in the cURL command and Python code? The default value is 0.25. I thought that changing it to a higher value (e.g., 0.90) would eliminate results with a "confidence" value below 0.90 from the inference API response. But the results I get are no different than when conf=0.25.
Hi @curtinmjc! 😊
I'm glad to hear that using iPhone photos resolved the issues you were experiencing. It's great to see progress!
Regarding your question about the conf parameter: the conf parameter sets the confidence threshold for predictions. Increasing it to a higher value like 0.90 should indeed filter out predictions with confidence scores below that threshold. If you're not seeing any difference, here are a few things to check:
Parameter Application: Ensure that the conf parameter is correctly applied in your cURL and Python requests. Double-check the syntax to make sure it's being passed properly.
Response Parsing: Verify that the filtering logic is correctly implemented in your response handling. Sometimes the filtering might need to be explicitly coded in your script (see the filtering sketch after the example below).
Here's a quick example of how you might set the conf parameter in a Python request:
import requests

url = "YOUR_API_ENDPOINT"
payload = {"conf": 0.90}  # confidence threshold
# Open the file in a context manager so the handle is closed after the request
with open("your_image.jpg", "rb") as f:
    response = requests.post(url, data=payload, files={"file": f})
print(response.json())
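In the meantime, if the endpoint isn't filtering for you, the threshold could be applied client-side. Continuing from the example above, a rough sketch that assumes the response JSON holds an "images" list whose entries contain a "results" list with "confidence" fields (the exact structure may differ, so inspect your own response first):

# Hypothetical client-side filter; adjust the keys to match your actual response JSON.
result = response.json()
threshold = 0.90
for image in result.get("images", []):
    for prediction in image.get("results", []):
        if prediction.get("confidence", 0) >= threshold:
            print(prediction)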
If you continue to experience issues, feel free to update us. The YOLO community and Ultralytics team are here to help! 🚀
Thank you for your engagement, and don't hesitate to reach out with more questions.
Thanks for the response to my conf parameter question, but instead of the Python code you provided, let's use the example Python request code from the model's Deploy tab (below). I ran the Python code with "conf": 0.25 changed to "conf": 0.90, but the response was the same even when the returned confidence value was less than the revised "conf": 0.90 setting. FYI, the results also do not change when I change the conf parameter in Postman.
What would your recommended Python code be, following the example code below from the model's Deploy tab?
import json
import requests

url = "https://predict.ultralytics.com"
headers = {"x-api-key": "[api-key]"}
data = {
    "model": "https://hub.ultralytics.com/models/3lPnazcnlAlQwaBzHaB7",
    "imgsz": 640,
    "conf": 0.25,
    "iou": 0.45,
}
with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"file": f})
response.raise_for_status()
print(json.dumps(response.json(), indent=2))
Hi there! 😊
Thanks for sharing the details. Let's adjust the example code to ensure the conf parameter is applied correctly. Here's how you can modify the code from the Deploy tab:
import json
import requests

url = "https://predict.ultralytics.com"
headers = {"x-api-key": "[api-key]"}
data = {
    "model": "https://hub.ultralytics.com/models/3lPnazcnlAlQwaBzHaB7",
    "imgsz": 640,
    "conf": 0.90,  # Set confidence threshold to 0.90
    "iou": 0.45,
}
with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"file": f})
response.raise_for_status()
print(json.dumps(response.json(), indent=2))
If the results still don't reflect the conf parameter change, consider:
Server-Side Handling: Ensure the server is correctly interpreting the conf parameter. Sometimes, server-side settings might override client-side parameters.
Response Verification: Double-check the response to ensure the confidence filtering is applied as expected; the sketch below shows one quick way to test this.
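As a concrete check, you might send the same image twice with different conf values and compare how many predictions come back. A rough sketch under stated assumptions: the API key, model URL, and image path are placeholders, and the response is assumed to carry an "images" list whose entries hold a "results" list (inspect your own response to confirm):

import requests

url = "https://predict.ultralytics.com"
headers = {"x-api-key": "[api-key]"}  # placeholder

def predict(conf):
    # Same request shape as the Deploy tab example; model URL and image path are placeholders.
    data = {"model": "https://hub.ultralytics.com/models/[model-id]", "imgsz": 640, "conf": conf, "iou": 0.45}
    with open("path/to/image.jpg", "rb") as f:
        r = requests.post(url, headers=headers, data=data, files={"file": f})
    r.raise_for_status()
    return r.json()

def count_results(payload):
    # Assumed response structure: an "images" list, each entry with a "results" list.
    return sum(len(img.get("results", [])) for img in payload.get("images", []))

low, high = predict(0.25), predict(0.90)
print("Predictions at conf=0.25:", count_results(low))
print("Predictions at conf=0.90:", count_results(high))
# If the counts match even though some predictions scored below 0.90,
# the server is likely ignoring the conf parameter.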
If the issue persists, please verify with the latest versions of the packages. Feel free to reach out with any more questions. The YOLO community and Ultralytics team are here to help! 🚀
Thank you for your engagement!
As always, thanks for the quick response. My comments and questions to your response items:
Hello! 😊
Thank you for your detailed feedback and for sharing the screenshots. Let's address your points:
Python Code: If both requests return the same results despite different conf values, it might indicate that the confidence threshold isn't being applied as expected. Typically, setting conf=0.90 should filter out predictions with confidence below 0.90. If this isn't happening, it could be a server-side issue.
Server-Side Handling: Since you're using the Ultralytics HUB, server-side settings should automatically handle the conf parameter. However, if it's not working as expected, it might be a bug. I recommend verifying that the issue persists with the latest versions of the Ultralytics packages. If it does, please report it as a potential bug.
Response Verification: In the response, check if predictions with confidence below 0.90 are still present. If they are, and the conf parameter is set correctly, it suggests the filtering isn't applied. This would be valuable information for a bug report.
If you continue to experience issues, please feel free to share more details. The YOLO community and Ultralytics team are here to support you! 🚀
Thank you for your engagement and patience!
Thank you for your response and recommendation. I have opened up a new Issue (Bug): Changing conf parameter Inference API code has no effect on results #906
@curtinmjc hi there! 😊
Thank you for opening a new issue and bringing this to our attention. We appreciate your diligence in reporting this potential bug.
Please ensure that you're using the latest versions of the Ultralytics packages to verify if the issue persists. This helps us determine if it's a version-related problem.
Feel free to provide any additional details or observations in the new issue. Your input is invaluable in helping us improve the platform.
If you have any more questions or need further assistance, don't hesitate to reach out. The YOLO community and Ultralytics team are here to support you! 🚀
Thank you for your engagement and patience!
Search before asking
Question
For about 20% of my Classification classes, I get different results when I drop the test image in the HUB model Preview tab vs. what I get with both Python code and cURL (using Postman). Why does this happen with some of the classes?
Additional
In all cases the Preview tab gives the correct answer, and Python code and cURL with Postman give an incorrect answer (but they are identical to each other). My test images cover all the Classification classes, and most class results are correct and the same for Preview, Python, and cURL. The parameter settings used in Python and cURL are correct because I copied the sample code from the model's Deploy tab's Example Request code (Python & cURL) for the Ultralytics Shared Inference API.
The Ultralytics model used is YOLO11n-CLS, although I have had the issue with other models. I am using a custom dataset that I uploaded.
I searched GitHub issues and found https://github.com/ultralytics/hub/issues/683, "Different predict result from preview and my app for same image", but that item is more complicated than my issue, which uses Ultralytics-supplied code.
Included screenshots: