Before running `ollama serve`, set the environment variable below first, otherwise other services will not be able to reach it: `export OLLAMA_HOST="0.0.0.0:11434"`
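To confirm the server is reachable from another machine, you can hit ollama's `/api/tags` endpoint, which lists locally pulled models. A minimal sketch, with `your_ip` standing in as a placeholder for the host running ollama:

```python
import requests

# Query ollama's model-listing endpoint from another machine.
# "your_ip" is a placeholder for the host running `ollama serve`.
response = requests.get("http://your_ip:11434/api/tags", timeout=5)
response.raise_for_status()

# Each entry describes one locally pulled model, e.g. "qwen:14b".
for model in response.json().get("models", []):
    print(model["name"])
```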
Version 3.74 can support this directly; just change a few settings in `config.py` (following the one-api interface convention):

```python
API_KEY = "ollama-key"
LLM_MODEL = "one-api-qwen:14b(max_token=32768)"
API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "http://your_ip:11434/v1/chat/completions"}
AVAIL_LLM_MODELS = ["one-api-qwen:14b(max_token=32768)"]
CUSTOM_API_KEY_PATTERN = "ollama-key"
```
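This works because ollama exposes an OpenAI-compatible endpoint at `/v1/chat/completions`, which is what the redirect points at. A minimal sketch of the request gpt_academic ends up sending after the redirect (assuming `your_ip` and the placeholder key from the config above; ollama itself does not require authentication, so the key appears to exist only to satisfy the `CUSTOM_API_KEY_PATTERN` check):

```python
import requests

# The OpenAI-style payload that would normally go to
# https://api.openai.com/v1/chat/completions, redirected to ollama.
url = "http://your_ip:11434/v1/chat/completions"
headers = {"Authorization": "Bearer ollama-key"}  # placeholder key; ollama does not check it
payload = {
    "model": "qwen:14b",  # the model name as pulled in ollama
    "messages": [{"role": "user", "content": "why is the sky blue?"}],
    "stream": False,
}

response = requests.post(url, headers=headers, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```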
https://github.com/binary-husky/gpt_academic/pull/1740
The frontier (development) branch now supports ollama directly, so you can connect to OpenAI and ollama at the same time.
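Based on the `ollama-` model naming used later in this thread, a mixed setup would presumably look something like the hypothetical `config.py` sketch below (model names and token limits are illustrative, not taken from the branch):

```python
# Hypothetical config.py sketch: one OpenAI model and one ollama model
# side by side. Names follow the conventions shown in this thread.
LLM_MODEL = "gpt-3.5-turbo"
AVAIL_LLM_MODELS = [
    "gpt-3.5-turbo",                  # served by OpenAI as usual
    "ollama-llama3(max_token=4096)",  # served by the local ollama instance
]
API_URL_REDIRECT = {"http://localhost:11434/api/chat": "http://your_ip:11434/api/chat"}
```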
`ollama-key`? I don't think I've seen that anywhere in the ollama docs. Can you explain it in more detail?
llama3 configuration:

```python
LLM_MODEL = "ollama-llama3(max_token=4096)"
AVAIL_LLM_MODELS = ["one-api-claude-3-sonnet-20240229(max_token=100000)", "ollama-llama3(max_token=4096)"]  # if your model is llama2, write llama2 here; be careful not to get this wrong
API_URL_REDIRECT = {"http://localhost:11434/api/chat": "http://your_ip:11434/api/chat"}  # your address
```
Below is the reason; take a look if you're interested:
```python
import requests

url = 'http://*******:11434/api/chat'  # masked host address
data = {
    "model": "llama3",
    "messages": [
        {"role": "user", "content": "why is the sky blue?"}
    ]
}
response = requests.post(url, json=data)

# print the response content
print(response.text)
```
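Note that `/api/chat` streams by default, so `response.text` above is a sequence of newline-delimited JSON objects rather than one document. A minimal sketch of assembling the full answer from the stream (add `"stream": False` to the payload if you want a single JSON object instead):

```python
import json
import requests

url = 'http://*******:11434/api/chat'  # masked host address, as above
data = {"model": "llama3", "messages": [{"role": "user", "content": "why is the sky blue?"}]}

# stream=True lets us consume the NDJSON chunks as they arrive
with requests.post(url, json=data, stream=True) as response:
    answer = ""
    for line in response.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        # each chunk carries a fragment of the assistant message;
        # the final chunk has "done": true
        answer += chunk.get("message", {}).get("content", "")
        if chunk.get("done"):
            break

print(answer)
```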
Class: Large Language Models

Feature Request: Please add support for ollama. Its ecosystem is quite good now and it supports many models. Please consider supporting it, thanks!