Open Matagi1996 opened 12 months ago
We tried the OpenAI package, and it worked fine. Try setting `base_url` to `http://myip/v1`, or use an older OpenAI package (e.g. `pip install "openai<1"`).
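To see why the base URL matters: the OpenAI v1 client appends the endpoint path to `base_url` on every request, so `base_url` must point at the API root (`.../v1`), not at the endpoint itself. A small sketch of the resulting URLs, using `urljoin` as a stand-in for the client's internal path joining:

```python
from urllib.parse import urljoin

# The client appends "chat/completions" to base_url for each request.
endpoint = "chat/completions"

good = urljoin("http://myip/v1/", endpoint)
bad = urljoin("http://myip/chat/completions/", endpoint)

print(good)  # http://myip/v1/chat/completions -> the route the server serves
print(bad)   # http://myip/chat/completions/chat/completions -> 404 Not Found
```

This is why pointing `base_url` at the full endpoint produces the 404 below: the path gets appended twice.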
Thank you very much for the clarification. Setting the address to just `http://IP/v1` instead of appending "/chat/completions" as in the bash request did the trick.
With so many similar-sounding APIs I was also not quite sure if I was using the right one, so that is out of the way as well. Again, thank you very much.
I installed the package in a Docker container (there is no official Dockerfile for this at the moment, if I see that correctly? I used an Ubuntu 22.04 CUDA image).
I run the server and can access it from the command line with the given commands. Now I want to reach it from Python: the requests module works fine, but the OpenAI package / other LangChain wrappers give me the following error:
```python
from openai import OpenAI

client = OpenAI(api_key="dockerllmapikey", base_url="http://myIP/chat/completions")  # just "http://myIP/" gives the same error
MODEL = "openchat_3.5"
response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}],
    temperature=0,
)
```

```
NotFoundError: Error code: 404 - {'detail': 'Not Found'}
```
Just to reiterate, the server is perfectly reachable.
```python
import requests

response = requests.post(url, headers=headers, json=data)
```
```python
{'id': 'cmpl-13a69fec36204f9f954f875974a3586f', 'object': 'chat.completion', 'created': 1700015942, 'model': 'openchat_3.5', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': "....Poem, and actually quite a nice one"}, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 32, 'total_tokens': 352, 'completion_tokens': 320}}
```
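For completeness, a sketch of what the full requests call might look like, with the `url`, `headers`, and `data` variables filled in with hypothetical values (the endpoint path, API-key header format, and helper name are all assumptions, not from the thread):

```python
import requests

def ask_openchat(prompt: str) -> str:
    # Hypothetical values; adjust to your server. Note that requests posts
    # directly to the full endpoint path, unlike the OpenAI client, which
    # appends "/chat/completions" to base_url itself.
    url = "http://myip/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer dockerllmapikey",  # assumption: bearer-token auth
    }
    data = {
        "model": "openchat_3.5",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,
    }
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()
    # The generated text sits under choices[0].message.content, as in the
    # response dict above.
    return response.json()["choices"][0]["message"]["content"]
```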
If I can't use the OpenAI package, is there a LangChain wrapper that I can use, or do I need to write my own custom one, as explained here, using requests internally? https://python.langchain.com/docs/modules/model_io/llms/custom_llm