Open Lookforworld opened 9 months ago
You can refer to https://github.com/binary-husky/gpt_academic/blob/master/request_llms/bridge_chatglm3.py
It is very easy to implement llama directly into GPT-Academic; some simple copy-and-paste will do. Let me know if you need any help.
https://github.com/abetlen/llama-cpp-python
This one?
Yes! Their server module provides an OpenAI-like API, and I added the following code to bridge_all.py:
```python
if "llama_cpp" in AVAIL_LLM_MODELS:  # llama_cpp
    try:
        from .bridge_llama_cpp import predict_no_ui_long_connection as llama_cpp_noui
        from .bridge_llama_cpp import predict as llama_cpp_ui
        model_info.update({
            "llama_cpp": {
                "fn_with_ui": llama_cpp_ui,
                "fn_without_ui": llama_cpp_noui,
                "endpoint": openai_endpoint,
                "max_token": 4096,
                "tokenizer": tokenizer_gpt35,
                "token_cnt": get_token_num_gpt35,
            }
        })
    except:
        print(trimmed_format_exc())
```
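For comparison, here is a minimal sketch of what `request_llms/bridge_llama_cpp.py` could look like. It assumes llama-cpp-python's server is running locally on port 8000 and exposes the OpenAI-style `/v1/chat/completions` route; the `LLAMA_CPP_ENDPOINT` constant, the `build_payload` helper, and the exact signature of `predict_no_ui_long_connection` are illustrative assumptions, not the code from the attached zip:

```python
# Hypothetical sketch of request_llms/bridge_llama_cpp.py.
# The function name matches what bridge_all.py imports; the internals
# are illustrative only.

# Assumed default address of llama-cpp-python's OpenAI-compatible server.
LLAMA_CPP_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_payload(inputs, history, sys_prompt, max_tokens=4096):
    """Convert GPT-Academic's (inputs, history, sys_prompt) into an
    OpenAI-style chat payload. GPT-Academic's history is assumed to be a
    flat list alternating [user_1, assistant_1, user_2, assistant_2, ...]."""
    messages = [{"role": "system", "content": sys_prompt}]
    for i, content in enumerate(history):
        role = "user" if i % 2 == 0 else "assistant"
        messages.append({"role": role, "content": content})
    messages.append({"role": "user", "content": inputs})
    return {"messages": messages, "max_tokens": max_tokens, "stream": False}

def predict_no_ui_long_connection(inputs, llm_kwargs, history=[],
                                  sys_prompt="", observe_window=None,
                                  console_slience=False):
    """Blocking, non-streaming call used by plugins; the parameter list
    mirrors the other bridges but should be checked against the project."""
    # Imported lazily so build_payload stays usable without the dependency.
    import requests
    payload = build_payload(inputs, history, sys_prompt)
    resp = requests.post(LLAMA_CPP_ENDPOINT, json=payload, timeout=600)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Getting the payload conversion right matters for plugins in particular, since they pass long, multi-turn histories that a simple dialogue never exercises.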
Then I created a bridge_llama_cpp.py file under the request_llms path and modified the relevant content. Simple dialogue works fine, but when I use plugins (e.g. explaining a whole Python project), all kinds of unexpected errors occur. bridge_llama_cpp.zip
Class | Type
Large Language Model
Feature Request
Personally, I feel that the server module of llama-cpp-python is very simple and easy to use, but I have been unable to add this functionality to the library. Could support for this server's API be added?
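For context, the OpenAI-compatible server mentioned above is started as described in the llama-cpp-python README (the model path below is a placeholder):

```shell
# Install llama-cpp-python with the optional server dependencies
pip install 'llama-cpp-python[server]'

# Serve a local GGUF model with an OpenAI-compatible API on port 8000
python -m llama_cpp.server --model ./models/your-model.gguf --port 8000
```

Once running, it exposes OpenAI-style routes such as /v1/chat/completions, which is what makes an openai_endpoint-style bridge plausible.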