I have no idea why it suddenly ran successfully.
My personal guess for the reason is that running
rm -r /tmp/cache
in the terminal helped. I also found another solution, which is suitable for situations where only the inference results of the model are needed (please forgive me, I'm a beginner):
# import the inference-sdk
from inference_sdk import InferenceHTTPClient
# initialize the client
CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="your api_key"
)
# infer on a local image
result = CLIENT.infer("your image.jpg", model_id="your model_id")
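(A small follow-up note, not part of the original snippet: result is just the API's JSON response as a Python dict. The sketch below assumes a detection-style response containing a "predictions" list, which is what the hosted detect endpoint typically returns, and simply prints each predicted class with its confidence.)
# print each predicted class and its confidence
# (assumes result is a dict with a "predictions" list - a detection-style response)
for prediction in result.get("predictions", []):
    print(prediction["class"], prediction["confidence"])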
Hope this can help other beginners.
Sorry again for taking up public resources.
Hi there. That error is truly strange. To debug it, I would need to ask where you have your server running - is it a CPU machine, a GPU machine, or a Jetson?
Thank you very much for your reply!
I am running on my laptop, and the specifications of my laptop are as follows:
My Python environment has both inference and inference-gpu installed, but I don't know whether inference-gpu is working.
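(A hedged side note: one way to check, assuming inference-gpu executes models through onnxruntime-gpu, is to ask onnxruntime which execution providers it can use; if "CUDAExecutionProvider" appears in the list, GPU execution is available.)
# list the execution providers onnxruntime can use;
# "CUDAExecutionProvider" in the output means GPU execution is available
import onnxruntime
print(onnxruntime.get_available_providers())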
When the error occurred, I simply used the following Python statements in VS Code to deploy the model locally:
from inference.models.utils import get_model
model = get_model(model_id=__ModelId, api_key=__ApiKey)
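(For completeness, a minimal sketch of what would follow once get_model() returns, assuming the returned model object exposes an infer() method that accepts an image path, as the inference package's model classes generally do; "your image.jpg" is a placeholder path.)
# run inference with the locally loaded model on a local image
results = model.infer("your image.jpg")
print(results)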
If it helps, I can describe the situation at the time (because I personally guess it has something to do with the network):
When I ran the Python program, it did not respond at all. Since I observed in Task Manager that the Python process was using the network, I think it was stuck in the get_model() function.
At that time, I tried the following different ways of running the Python program to deploy the model (model_id: chinese-calligraphy-styles/1) and the model (model_id: chinese-calligraphy-recognition-sl0eb/2):
It should be mentioned that when I deployed the model (model_id: kitchenfire/1), I did not use a VPN, and every time I ran the Python program the deployment completed successfully.
After many attempts as described above, it suddenly ran successfully.
Ok, so let me add a few comments:
Regarding inference-gpu - to be honest, Windows may be problematic in some cases 🙈
From this statement: "After many attempts as described above, it suddenly ran successfully." I assume you finally got the model running, right?
Thank you so much for the additional comments!
Yes, I have now run it successfully and stably many times.
As for the reason for the exceptionally slow download speed, I personally guess it may also be related to the network environment in my area.
However, as a beginner in deep learning, there is only so much information I can provide, and I greatly appreciate your patient responses once again.
To avoid taking up your valuable time, I think maybe it's time to close this issue.
Thank you once again for all your responses!
Hi @Sui-25, following your suggestion I will close this issue, but if you do have further problems please do not hesitate to open another one. Thank you!
Question
I apologize for any confusion my explanation may cause. I’m a beginner and I need to use roboflow inference to complete my project. There’s a problem that needs to be solved.
Here's the situation: when using the get_model() function to load the model, an error occurred, as follows: