zhayujie / chatgpt-on-wechat

A chatbot built on large language models, supporting WeChat Official Accounts, WeCom (enterprise WeChat) apps, Feishu, DingTalk, and more. Selectable models include GPT-3.5/GPT-4o/GPT-o1/Claude/ERNIE Bot (Wenxin Yiyan)/iFlytek Spark/Tongyi Qianwen/Gemini/GLM-4/Kimi/LinkAI. It can handle text, voice, and images, access the operating system and the internet, and supports customized enterprise AI customer service built on your own knowledge base.
https://docs.link-ai.tech/cow
MIT License
30.85k stars 8.07k forks

ollama serve + COW, deployment fails #1839

Closed taozhiyuai closed 7 months ago

taozhiyuai commented 7 months ago

Preliminary checks

⚠️ I have searched existing issues for similar problems

Operating system?

MacOS

Which Python version are you running?

other

Which chatgpt-on-wechat version are you using?

Latest Release

Which channel type are you running?

wx (personal WeChat, itchat)

Steps to reproduce 🕹

python=3.11

In config.json: `"model": "get-3.5-turbo", "open_ai_api_key": "ollama", "open_ai_api_base": "http://127.0.0.1:11434/v1",`

Starts normally.
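The "model 'get-3.5-turbo' not found" error in the log below indicates that Ollama has no model registered under that name (note the "get"/"gpt" misspelling). Ollama's OpenAI-compatible endpoint serves models by the tag they were pulled under, so a config along these lines would presumably work (the `qwen:14b` tag is an assumption, matching the QWEN 14B model mentioned later in this thread; check the exact tag with `ollama list`):

```json
{
  "model": "qwen:14b",
  "open_ai_api_key": "ollama",
  "open_ai_api_base": "http://127.0.0.1:11434/v1"
}
```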

Problem description 😯

Screenshot 2024-03-26 12 23 35

Terminal log 📒


[INFO][2024-03-26 12:19:30][config.py:256] - [INIT] load config: {'channel_type': 'wx', 'model': 'get-3.5-turbo', 'open_ai_api_key': 'ollama', 'open_ai_api_base': 'http://127.0.0.1:11434/v1', 'claude_api_key': 'YOUR API KEY', 'text_to_image': 'dall-e-2', 'voice_to_text': 'openai', 'text_to_voice': 'openai', 'proxy': '', 'hot_reload': False, 'single_chat_prefix': [''], 'single_chat_reply_prefix': '[bot] ', 'group_chat_prefix': ['@bot'], 'group_name_white_list': ['ChatGPT测试群', 'ChatGPT测试群2'], 'image_create_prefix': ['画'], 'speech_recognition': False, 'group_speech_recognition': False, 'voice_reply_voice': False, 'conversation_max_tokens': 2500, 'expires_in_seconds': 3600, 'character_desc': '你是ChatGPT, 一个由OpenAI训练的大型语言模型, 你旨在回答并解决人们的任何问题,并且可以使用多种语言与人交流。', 'temperature': 0.7, 'subscribe_msg': '感谢您的关注!\n这里是AI智能助手,可以自由对话。\n支持语音对话。\n支持图片输入。\n支持图片输出,画字开头的消息将按要求创作图片。\n支持tool、角色扮演和文字冒险等丰富的插件。\n输入{trigger_prefix}#help 查看详细指令。', 'use_linkai': False, 'linkai_api_key': '', 'linkai_app_code': ''}
[INFO][2024-03-26 12:19:30][config.py:206] - [Config] User datas file not found, ignore.
[WARNING][2024-03-26 12:19:30][audio_convert.py:9] - import pysilk failed, wechaty voice message will not be supported.
/Users/taozhiyu/miniconda3/envs/cow/lib/python3.11/site-packages/pydub/utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
  warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
[INFO][2024-03-26 12:19:31][plugin_manager.py:50] - Loading plugins config...
[INFO][2024-03-26 12:19:31][plugin_manager.py:88] - Scaning plugins ...
[INFO][2024-03-26 12:19:31][plugin_manager.py:41] - Plugin Role_v1.0 registered, path=./plugins/role
[INFO][2024-03-26 12:19:31][plugin_manager.py:41] - Plugin linkai_v0.1.0 registered, path=./plugins/linkai
[INFO][2024-03-26 12:19:31][plugin_manager.py:41] - Plugin Dungeon_v1.0 registered, path=./plugins/dungeon
[INFO][2024-03-26 12:19:31][plugin_manager.py:41] - Plugin BDunit_v0.1 registered, path=./plugins/bdunit
[INFO][2024-03-26 12:19:31][plugin_manager.py:41] - Plugin Finish_v1.0 registered, path=./plugins/finish
[INFO][2024-03-26 12:19:31][plugin_manager.py:41] - Plugin Godcmd_v1.0 registered, path=./plugins/godcmd
[INFO][2024-03-26 12:19:31][plugin_manager.py:41] - Plugin Hello_v0.1 registered, path=./plugins/hello
[INFO][2024-03-26 12:19:31][plugin_manager.py:41] - Plugin Banwords_v1.0 registered, path=./plugins/banwords
[INFO][2024-03-26 12:19:31][plugin_manager.py:41] - Plugin Keyword_v0.1 registered, path=./plugins/keyword
chatgpt-tool-hub version: 0.5.0
[INFO][2024-03-26 12:19:31][plugin_manager.py:41] - Plugin tool_v0.5 registered, path=./plugins/tool
[INFO][2024-03-26 12:19:31][plugin_manager.py:123] - Plugin GODCMD not found in pconfig, adding to pconfig...
[INFO][2024-03-26 12:19:31][plugin_manager.py:123] - Plugin KEYWORD not found in pconfig, adding to pconfig...
[INFO][2024-03-26 12:19:31][plugin_manager.py:123] - Plugin BANWORDS not found in pconfig, adding to pconfig...
[INFO][2024-03-26 12:19:31][plugin_manager.py:123] - Plugin LINKAI not found in pconfig, adding to pconfig...
[INFO][2024-03-26 12:19:31][plugin_manager.py:123] - Plugin TOOL not found in pconfig, adding to pconfig...
[INFO][2024-03-26 12:19:31][plugin_manager.py:123] - Plugin ROLE not found in pconfig, adding to pconfig...
[INFO][2024-03-26 12:19:31][plugin_manager.py:123] - Plugin DUNGEON not found in pconfig, adding to pconfig...
[INFO][2024-03-26 12:19:31][plugin_manager.py:123] - Plugin BDUNIT not found in pconfig, adding to pconfig...
[INFO][2024-03-26 12:19:31][plugin_manager.py:123] - Plugin HELLO not found in pconfig, adding to pconfig...
[INFO][2024-03-26 12:19:31][plugin_manager.py:123] - Plugin FINISH not found in pconfig, adding to pconfig...
[INFO][2024-03-26 12:19:31][godcmd.py:194] - [Godcmd] 因未设置口令,本次的临时口令为7859。
[INFO][2024-03-26 12:19:31][godcmd.py:210] - [Godcmd] inited
[INFO][2024-03-26 12:19:31][keyword.py:40] - [keyword] {}
[INFO][2024-03-26 12:19:31][keyword.py:42] - [keyword] inited.
[WARNING][2024-03-26 12:19:31][banwords.py:54] - [Banwords] init failed, ignore or see https://github.com/zhayujie/chatgpt-on-wechat/tree/master/plugins/banwords .
[WARNING][2024-03-26 12:19:31][plugin_manager.py:150] - Failed to init BANWORDS, diabled. [Errno 2] No such file or directory: '/Users/taozhiyu/Downloads/chatgpt-on-wechat/plugins/banwords/banwords.txt'
[INFO][2024-03-26 12:19:31][linkai.py:33] - [LinkAI] inited, config={'group_app_map': {'测试群名1': 'default', '测试群名2': 'Kv2fXJcH'}, 'midjourney': {'enabled': False, 'auto_translate': True, 'img_proxy': True, 'max_tasks': 3, 'max_tasks_per_user': 1, 'use_image_create_prefix': True}, 'summary': {'enabled': False, 'group_enabled': True, 'max_file_size': 5000, 'type': ['FILE', 'SHARING']}}
[INFO][2024-03-26 12:19:33][tool.py:28] - [tool] inited
[INFO][2024-03-26 12:19:33][role.py:69] - [Role] inited
[INFO][2024-03-26 12:19:33][dungeon.py:56] - [Dungeon] inited
[WARNING][2024-03-26 12:19:33][bdunit.py:42] - [BDunit] init failed, ignore 
[WARNING][2024-03-26 12:19:33][plugin_manager.py:150] - Failed to init BDUNIT, diabled. config.json not found
[INFO][2024-03-26 12:19:33][hello.py:24] - [Hello] inited
[INFO][2024-03-26 12:19:33][finish.py:23] - [Finish] inited
Ready to login.
Getting uuid of QR code.
Downloading QR code.
You can also scan QRCode in any website below:
https://api.pwmqr.com/qrcode/create/?url=https://login.weixin.qq.com/l/AbsJPmx2cA==
https://my.tv.sohu.com/user/a/wvideo/getQRCode.do?text=https://login.weixin.qq.com/l/AbsJPmx2cA==
https://api.qrserver.com/v1/create-qr-code/?size=400×400&data=https://login.weixin.qq.com/l/AbsJPmx2cA==
https://api.isoyu.com/qr/?m=1&e=L&p=20&url=https://login.weixin.qq.com/l/AbsJPmx2cA==
█▀▀▀▀▀▀▀█▀▀▀▀███▀▀█████▀▀▀▀▀▀▀█
█ █▀▀▀█ ██ ██▄▄▄▀▄▀ ▄██ █▀▀▀█ █
█ █   █ █ ▄▀▄█ █▀ ▀▀█▄█ █   █ █
█ ▀▀▀▀▀ █ █▀▄ █ █▀▄▀▄ █ ▀▀▀▀▀ █
█▀███▀█▀▀ ▀▄▀ ██▄  █▄ ▀▀▀▀▀██▀█
█ ▀▄▄▀█▀█▀▄  ▄ ▄▀ █▄█▄█▄▄▄▄▄▄ █
█ ▄█ █ ▀▀█▄▄▀▀▀▄▀▄ ▄▄ ▄█▄▄▄█ ▄█
██▄▀▀ █▀▄ ▄ ▀▄▄▄▀▄▄ █▀█▄▄   ▄ █
█▄ █ █▀▀▀ ▄█ ▀█ █  ▄█▀ █████ ▄█
█▄█▄▄█▄▀▄█  █ █▀    ██▄ ▄ ▄▀▄ █
█▀▀██ ▄▀  ▀█ ▄█ █▄  █▀▀▀▀ ▀█▄ █
█▀▀▀▀▀▀▀█  ▀▀ █▀ ▄▄ ▀ █▀█ ▀▀█ █
█ █▀▀▀█ █▀▄▀▀ ▀ ▄▄ ▄  ▀▀▀ ▀█▄▀█
█ █   █ ██▀███  ▀▄  █  ███▀▀▀ █
█ ▀▀▀▀▀ █▀ █▀▄▀█▀█ ▀█ ▀▄ █ █ ▄█
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Please press confirm on your phone.
Loading the contact, this may take a little while.
[INFO][2024-03-26 12:20:00][wechat_channel.py:131] - Wechat login success, user_id: @12f6cd5422b294ff066d1f567b42f924b948a7cdc2eb4f7cd5a0cbdf21b6cc55, nickname: 大聪明
Start auto replying.
[INFO][2024-03-26 12:20:09][bridge.py:54] - create bot chatGPT for chat
[INFO][2024-03-26 12:20:09][chat_gpt_bot.py:49] - [CHATGPT] query=hi
[WARNING][2024-03-26 12:20:09][chat_gpt_session.py:85] - num_tokens_from_messages() is not implemented for model get-3.5-turbo. Returning num tokens assuming gpt-3.5-turbo.
[ERROR][2024-03-26 12:20:09][chat_gpt_bot.py:155] - [CHATGPT] Exception: model 'get-3.5-turbo' not found, try pulling it first
Traceback (most recent call last):
  File "/Users/taozhiyu/Downloads/chatgpt-on-wechat/bot/chatgpt/chat_gpt_bot.py", line 123, in reply_text
    response = openai.ChatCompletion.create(api_key=api_key, messages=session.messages, **args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/taozhiyu/miniconda3/envs/cow/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/taozhiyu/miniconda3/envs/cow/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/Users/taozhiyu/miniconda3/envs/cow/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/taozhiyu/miniconda3/envs/cow/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "/Users/taozhiyu/miniconda3/envs/cow/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: model 'get-3.5-turbo' not found, try pulling it first
[INFO][2024-03-26 12:20:10][wechat_channel.py:217] - [WX] sendMsg=Reply(type=ERROR, content=[ERROR]
我现在有点累了,等会再来吧), receiver=@a94533c3f716429c87743d1f2e042cd8
taozhiyuai commented 7 months ago

ollama serve is running QWEN 14B

taozhiyuai commented 7 months ago

Log from ollama serve:

Last login: Tue Mar 26 12:08:06 on ttys000
taozhiyu@TAOZHIYUs-MBP ~ % ollama serve
time=2024-03-26T12:12:37.435+08:00 level=INFO source=images.go:806 msg="total blobs: 5"
time=2024-03-26T12:12:37.435+08:00 level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-26T12:12:37.436+08:00 level=INFO source=routes.go:1110 msg="Listening on 127.0.0.1:11434 (version 0.1.29)"
time=2024-03-26T12:12:37.436+08:00 level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /var/folders/rd/0_kt6h0545g1v749gjmryqqr0000gn/T/ollama197146676/runners ..."
time=2024-03-26T12:12:37.462+08:00 level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [metal]"
[GIN] 2024/03/26 - 12:20:09 | 404 | 192.917µs | 127.0.0.1 | POST "/v1/chat/completions"

kaina404 commented 7 months ago

You can modify the implementation of one of the bots. For example, if you selected the Gemini model, update the `reply` method in gemini/google_gemini_bot.py with the code below, and add `import ollama` at the top of the file.

kaina404 commented 7 months ago

        try:
            if context.type != ContextType.TEXT:
                logger.warn(f"[Ollama] Unsupported message type, type={context.type}")
                return Reply(ReplyType.TEXT, None)
            logger.info(f"[Ollama] query={query}")
            session_id = context["session_id"]
            session = self.sessions.session_query(query, session_id)
            # Call the local Ollama service directly
            response = ollama.chat(
                model='gemma:7b',
                messages=self.filter_messages(session.messages))
            reply_text = response['message']['content']
            self.sessions.session_reply(reply_text, session_id)
            logger.info(f"[Ollama] reply={reply_text}")
            return Reply(ReplyType.TEXT, reply_text)
        except Exception as e:
            logger.error("[Ollama] fetch reply error, may contain unsafe content")
            logger.error(e)
            return Reply(ReplyType.ERROR, "invoke [Ollama] api failed!")
taozhiyuai commented 7 months ago

```python
    try:
        if context.type != ContextType.TEXT:
            logger.warn(f"[Ollama] Unsupported message type, type={context.type}")
            return Reply(ReplyType.TEXT, None)
        logger.info(f"[Ollama] query={query}")
        session_id = context["session_id"]
        session = self.sessions.session_query(query, session_id)
        # Call the local Ollama service directly
        response = ollama.chat(
            model='gemma:7b',
            messages=self.filter_messages(session.messages))
        reply_text = response['message']['content']
        self.sessions.session_reply(reply_text, session_id)
        logger.info(f"[Ollama] reply={reply_text}")
        return Reply(ReplyType.TEXT, reply_text)
    except Exception as e:
        logger.error("[Ollama] fetch reply error, may contain unsafe content")
        logger.error(e)
        return Reply(ReplyType.ERROR, "invoke [Ollama] api failed!")
```

Thanks for the help. My changes are as follows:

```python
from bot.bot import Bot
import ollama
from bot.session_manager import SessionManager
from bridge.context import ContextType, Context
from bridge.reply import Reply, ReplyType
from common.log import logger
from config import conf
from bot.baidu.baidu_wenxin_session import BaiduWenxinSession


# OpenAI conversation model API (usable)
class GoogleGeminiBot(Bot):

    def __init__(self):
        super().__init__()
        self.api_key = conf().get("gemini_api_key")
        # Reuse Wenxin's token-counting scheme
        self.sessions = SessionManager(BaiduWenxinSession, model=conf().get("model") or "gpt-3.5-turbo")

    def reply(self, query, context: Context = None) -> Reply:
        try:
            if context.type != ContextType.TEXT:
                logger.warn(f"[Ollama] Unsupported message type, type={context.type}")
                return Reply(ReplyType.TEXT, None)
            logger.info(f"[Ollama] query={query}")
            session_id = context["session_id"]
            session = self.sessions.session_query(query, session_id)
            # Call the local Ollama service directly
            response = ollama.chat(
                model='gemma:7b',
                messages=self.filter_messages(session.messages))
            reply_text = response['message']['content']
            self.sessions.session_reply(reply_text, session_id)
            logger.info(f"[Ollama] reply={reply_text}")
            return Reply(ReplyType.TEXT, reply_text)
        except Exception as e:
            logger.error("[Ollama] fetch reply error, may contain unsafe content")
            logger.error(e)
            return Reply(ReplyType.ERROR, "invoke [Ollama] api failed!")

    def _convert_to_gemini_messages(self, messages: list):
        res = []
        for msg in messages:
            if msg.get("role") == "user":
                role = "user"
            elif msg.get("role") == "assistant":
                role = "model"
            else:
                continue
            res.append({
                "role": role,
                "parts": [{"text": msg.get("content")}]
            })
        return res

    @staticmethod
    def filter_messages(messages: list):
        res = []
        turn = "user"
        if not messages:
            return res
        for i in range(len(messages) - 1, -1, -1):
            message = messages[i]
            if message.get("role") != turn:
                continue
            res.insert(0, message)
            if turn == "user":
                turn = "assistant"
            elif turn == "assistant":
                turn = "user"
        return res
```
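The `filter_messages` helper walks the session history backwards, keeping only messages that strictly alternate user/assistant and end on the most recent user turn. A standalone sketch of the same logic (independent of the bot classes, with a hypothetical history) illustrates the behavior:

```python
def filter_messages(messages: list):
    """Walk the history backwards, keeping only messages that alternate
    user/assistant, starting from the most recent user turn."""
    res = []
    turn = "user"
    for message in reversed(messages or []):
        if message.get("role") != turn:
            continue
        res.insert(0, message)
        turn = "assistant" if turn == "user" else "user"
    return res

history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},       # dropped: breaks alternation
    {"role": "assistant", "content": "hello again"},
    {"role": "user", "content": "how are you?"},
]
print([m["content"] for m in filter_messages(history)])
# → ['hi', 'hello again', 'how are you?']
```

Note that when two assistant messages are adjacent, the later one wins, since the scan starts from the end of the history.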

The error messages are as follows:

Screenshot 2024-03-27 08 33 40, Screenshot 2024-03-27 08 36 15

config.json is configured as follows:

```json
"model": "gemini",
"gemini_api_key": "lm-studio",
"open_ai_api_key": "lm-studio",
"open_ai_api_base": "http://localhost:1234/v1",
```

@kaina404

taozhiyuai commented 7 months ago

Using LM Studio as the server is not that complicated: just change the api key and api base.
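A minimal sketch of that LM Studio setup, without any code changes (both values are assumptions: the key is a placeholder since LM Studio's local server does not check it, and port 1234 is LM Studio's default):

```json
"open_ai_api_key": "lm-studio",
"open_ai_api_base": "http://localhost:1234/v1"
```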

kaina404 commented 7 months ago

I've posted my local modification to support the Ollama service: https://github.com/zhayujie/chatgpt-on-wechat/issues/1845 @taozhiyuai

taozhiyuai commented 7 months ago

[WARNING][2024-03-29 07:08:37][plugin_manager.py:150] - Failed to init SUMMARY, diabled. [Summary] init failed, not supported bot type

@kaina404

This plugin does not support this bot type. Please take a look. Thanks.