binary-husky / gpt_academic

Provides a practical interaction interface for GPT/GLM and other large language models (LLMs), with special optimizations for paper reading, polishing, and writing. Modular design; supports custom shortcut buttons and function plugins; project analysis and self-translation for Python, C++, and other codebases; PDF/LaTeX paper translation and summarization; parallel queries to multiple LLMs; local models such as chatglm3. Integrates Tongyi Qianwen, deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, moss, and more.
https://github.com/binary-husky/gpt_academic/wiki/online
GNU General Public License v3.0

[Feature]: Access OpenAI ChatGPT through a reverse proxy #900

Closed: hongyi-zhao closed this issue 1 year ago

hongyi-zhao commented 1 year ago

Class | Type

Large Language Model

Feature Request

The following two features would be very useful:

  1. Cloudflare bypass proxy support, for example: https://github.com/acheong08/ChatGPT-Proxy-V4
  2. A reverse proxy built on top of 1., accessed via an Access token, thereby working around the OpenAI API key expiration problem.
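The idea in point 2 can be sketched as a request rewrite: the client keeps speaking the standard OpenAI API shape, and the proxy swaps the (possibly expired) API key for a ChatGPT access token and retargets the URL. The token value and upstream URL below are placeholders, not part of any real proxy's code:

```python
# Sketch of the requested flow. ACCESS_TOKEN and UPSTREAM are hypothetical
# values standing in for a real ChatGPT access token and backend endpoint.
ACCESS_TOKEN = "eyJhbGciOi..."  # placeholder access token
UPSTREAM = "https://chat.openai.com/backend-api/conversation"  # assumed backend

def rewrite_request(url: str, headers: dict) -> tuple[str, dict]:
    """Map an OpenAI-API-style request onto the access-token backend."""
    new_headers = dict(headers)
    # Replace the API key with the access token.
    new_headers["Authorization"] = f"Bearer {ACCESS_TOKEN}"
    new_url = url.replace("https://api.openai.com/v1/chat/completions", UPSTREAM)
    return new_url, new_headers

url, hdrs = rewrite_request(
    "https://api.openai.com/v1/chat/completions",
    {"Authorization": "Bearer sk-expired", "Content-Type": "application/json"},
)
print(url)   # -> https://chat.openai.com/backend-api/conversation
```

This is exactly what tools like ChatGPT-to-API do for you, so gpt_academic itself only needs a URL redirect (see below in this thread).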
hongyi-zhao commented 1 year ago

Based on the discussions in "GPT Academic Developers #chat2" (group ID 610599535), the following patch does the trick:

$ git log -1
commit 27f65c251a83c9b19ea5707938ae51683f1f2d8a (HEAD -> master, origin/master, origin/HEAD)
Author: binary-husky <96192199+binary-husky@users.noreply.github.com>
Date:   Mon Jul 31 15:57:18 2023 +0800

    Update 图片生成.py
$ git diff
diff --git a/request_llm/bridge_chatgpt.py b/request_llm/bridge_chatgpt.py
index ea48fba..96af833 100644
--- a/request_llm/bridge_chatgpt.py
+++ b/request_llm/bridge_chatgpt.py
@@ -186,15 +186,16 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                 try:
                     chunk_decoded = chunk.decode()
                     # 前者是API2D的结束条件,后者是OPENAI的结束条件
-                    if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0]["delta"]) == 0):
+                    if 'data: [DONE]' in chunk_decoded:
                         # 判定为数据流的结束,gpt_replying_buffer也写完了
                         logging.info(f'[response] {gpt_replying_buffer}')
                         break
                     # 处理数据流的主体
                     chunkjson = json.loads(chunk_decoded[6:])
                     status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}"
-                    # 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出
-                    gpt_replying_buffer = gpt_replying_buffer + json.loads(chunk_decoded[6:])['choices'][0]["delta"]["content"]
+                    delta = chunkjson['choices'][0]["delta"]
+                    if "content" in delta:
+                        gpt_replying_buffer = gpt_replying_buffer + delta["content"]
                     history[-1] = gpt_replying_buffer
                     chatbot[-1] = (history[-2], history[-1])
                     yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
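The patch above changes two things in the SSE stream handler: the end-of-stream check now relies only on the `data: [DONE]` sentinel (some proxies never send an empty delta), and delta text is appended only when the `content` key is present, so role-only deltas and the final empty delta no longer raise `KeyError`. The patched logic can be replayed as a standalone sketch:

```python
import json

def accumulate_stream(chunks):
    """Accumulate assistant text from decoded SSE chunks, mirroring the
    patched bridge_chatgpt.py logic: stop at 'data: [DONE]' and skip
    deltas that carry no 'content' key."""
    buffer = ""
    for chunk_decoded in chunks:
        if 'data: [DONE]' in chunk_decoded:
            break  # end of stream
        chunkjson = json.loads(chunk_decoded[6:])  # strip the 'data: ' prefix
        delta = chunkjson['choices'][0]['delta']
        if 'content' in delta:  # role-only / empty deltas are skipped
            buffer += delta['content']
    return buffer

chunks = [
    'data: {"choices": [{"delta": {"role": "assistant"}, "finish_reason": null}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}, "finish_reason": null}]}',
    'data: {"choices": [{"delta": {}, "finish_reason": "stop"}]}',
    'data: [DONE]',
]
print(accumulate_stream(chunks))  # -> Hello
```

Note that the pre-patch code would have crashed on the first chunk (`delta["content"]` missing) when talking to proxies that emit a role-only delta first.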

Build, configure, start, and test ChatGPT-to-API as follows:

$ git clone https://github.com/acheong08/ChatGPT-to-API.git && cd ChatGPT-to-API && go build
# Create the following configuration files and adjust their content according to your environment:
$ cat accounts.txt 
username:password
$ cat proxies.txt 
socks5://127.0.0.1:18890
$ SERVER_PORT=18080 ./freechatgpt
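With `freechatgpt` listening on port 18080, the endpoint can be smoke-tested with a plain OpenAI-style request. The sketch below only builds the request; the actual send (commented out) works only while the server is running, and the model name and Authorization value are placeholders whose handling depends on the proxy's configuration:

```python
import json
import urllib.request

# Local ChatGPT-to-API endpoint started above.
ENDPOINT = "http://127.0.0.1:18080/v1/chat/completions"

payload = {
    "model": "gpt-3.5-turbo",           # placeholder; proxy maps it to the web backend
    "messages": [{"role": "user", "content": "Say hello"}],
    "stream": False,
}
req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer any-value",  # placeholder; depends on proxy config
    },
)
# Uncomment with the server running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url, req.get_method())  # -> http://127.0.0.1:18080/v1/chat/completions POST
```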

Then, tell gpt_academic the corresponding endpoint as follows:

API_URL_REDIRECT='{"https://api.openai.com/v1/chat/completions": "http://127.0.0.1:18080/v1/chat/completions"}'
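The value of API_URL_REDIRECT is a JSON object mapping original endpoints to replacements. A minimal sketch of how such a mapping resolves (mirroring the option's intent, not gpt_academic's exact internal code): URLs with an entry are rewritten, everything else passes through unchanged.

```python
import json

# The same mapping as in the config line above.
API_URL_REDIRECT = ('{"https://api.openai.com/v1/chat/completions": '
                    '"http://127.0.0.1:18080/v1/chat/completions"}')

redirect_map = json.loads(API_URL_REDIRECT)

def resolve(url: str) -> str:
    """Return the redirected endpoint, or the original URL if unmapped."""
    return redirect_map.get(url, url)

print(resolve("https://api.openai.com/v1/chat/completions"))
# -> http://127.0.0.1:18080/v1/chat/completions
```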

See the related discussions:

https://github.com/acheong08/ChatGPT-to-API/issues/104
https://github.com/linweiyuan/go-chatgpt-api/issues/236

binary-husky commented 1 year ago

Added to the wiki as the conclusion:

https://github.com/binary-husky/gpt_academic/wiki/%E7%AC%AC%E4%B8%89%E6%96%B9API%E2%80%90KEY%E6%8E%A5%E5%85%A5%E6%8C%87%E5%8D%97

binary-husky commented 1 year ago

(screenshot attached)