Open palebluewanders opened 6 months ago
@SkalskiP same issue:
Traceback (most recent call last):
  File "test3.py", line 9, in <module>
    result = CLIENT.prompt_cogvlm(
  File "/home/aicads/miniconda3/envs/vlm/lib/python3.8/site-packages/inference_sdk/http/client.py", line 88, in decorate
    raise HTTPCallErrorError(
inference_sdk.http.errors.HTTPCallErrorError: HTTPCallErrorError(description='500 Server Error: Internal Server Error for url: http://localhost:9001/llm/cogvlm', api_message='Internal error.', status_code=500)
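For anyone hitting the same wall: the client only ever sees the sparse `status_code` / `api_message` fields on `HTTPCallErrorError`, so the real cause of the 500 lives in the server logs. Below is a minimal sketch that wraps the call from the traceback above and turns the error into a readable hint. The `prompt_cogvlm(visual_prompt=..., text_prompt=...)` call shape and the attributes on `HTTPCallErrorError` are taken from the traceback and Roboflow's blog post; the OOM hint is an assumption, not a confirmed diagnosis.

```python
# Sketch: surface what little the client gets back from a CogVLM 500,
# instead of letting the script crash on the raw exception.

def describe_http_error(status_code: int, api_message: str) -> str:
    """Turn the sparse HTTPCallErrorError fields into a readable hint."""
    hint = f"server returned {status_code}: {api_message!r}"
    if status_code == 500:
        # Assumption: with CogVLM a 500 is often a server-side crash
        # (e.g. GPU out-of-memory); the client never sees the real cause,
        # so the inference server container logs are the place to look.
        hint += " -- check the inference server container logs"
    return hint


def query_cogvlm(image_path: str, prompt: str) -> None:
    """Hedged usage sketch; not invoked here so the module imports cleanly
    even without inference_sdk installed."""
    from inference_sdk import InferenceHTTPClient
    from inference_sdk.http.errors import HTTPCallErrorError

    client = InferenceHTTPClient(
        api_url="http://localhost:9001",
        api_key="YOUR_ROBOFLOW_API_KEY",  # placeholder, not a real key
    )
    try:
        result = client.prompt_cogvlm(
            visual_prompt=image_path,          # path to a local image
            text_prompt=prompt,
        )
        print(result)
    except HTTPCallErrorError as err:
        print(describe_http_error(err.status_code, err.api_message))
```

Calling `query_cogvlm("image.jpg", "Describe this image.")` against a healthy server should print the model response; against the failing server it prints the hint instead of the traceback shown above.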
Hi, @palebluewanders and @YoungjaeDev any more details? Did you run it locally or in the cloud?
I created and deployed the server locally and am running it there.
Mine was cloud, g5.2xlarge on AWS.
https://discuss.roboflow.com/t/sam-cogvlm-http-request-internal-error/5244
I am continuing on from there. Whenever I run script.py or follow the instructions at https://blog.roboflow.com/how-to-deploy-cogvlm/, I always get this result:
{'message': 'Internal error.'}
Using Gradio also returns an error. Unfortunately, there are no other clues.
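Since the server only returns `{'message': 'Internal error.'}`, a first sanity check is whether the server process is answering HTTP at all (versus the CogVLM route crashing). A stdlib-only reachability sketch; the `/docs` path is an assumption that the inference server exposes FastAPI's auto-generated docs page:

```python
# Sketch: check whether the inference server answers HTTP at all before
# digging into CogVLM itself. Stdlib only; the /docs path is an assumption
# about the server's route layout.
from urllib.request import urlopen
from urllib.error import URLError, HTTPError


def server_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if anything answers HTTP at base_url."""
    try:
        with urlopen(base_url + "/docs", timeout=timeout):
            return True
    except HTTPError:
        # The server answered with an error status -- it is up,
        # the route just is not what we guessed.
        return True
    except (URLError, OSError):
        # Connection refused / timed out: the server is not reachable.
        return False


print(server_reachable("http://localhost:9001"))
```

If this returns False, the container or process is down or the port mapping is wrong; if True, the failure is inside the CogVLM route and the server-side logs are the next place to look.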