yGuy / chatgpt-mattermost-bot

A very simple implementation of a service for a Mattermost bot that uses ChatGPT in the backend.

Mattermost stack with llama running on a separate local LAN server #13

Closed teknightstick closed 1 year ago

teknightstick commented 1 year ago

I have a llama server running on another local server in the house. I have my Mattermost stack running on a Synology server. I was able to get the OpenAI bot into Mattermost and it's working great. I would like to add another bot into the Mattermost stack, but one that reaches out to the other local machine running the Dalai server.

I noticed you have two branches, main and llama. I am port forwarding correctly for port 3000.

chatgpt:
  image: ghcr.io/yguy/chatgpt-mattermost-bot:latest
  container_name: chatgpt
  environment:
    MATTERMOST_URL: 'https://**'
    MATTERMOST_TOKEN: '*'
    OPENAI_API_KEY: '***'
    MATTERMOST_BOTNAME: '@chatgpt'
    DEBUG_LEVEL: 'TRACE'
    NODE_ENV: 'production'
  restart: always

llamagpt:
  image: ghcr.io/yguy/chatgpt-mattermost-bot:latest
  container_name: llamagpt
  environment:
    MATTERMOST_URL: '***'
    MATTERMOST_TOKEN: '****'
    DALAI_SERVER_URL: 'http://ipofcomputerrunningdalaiserve:3000'
    MATTERMOST_BOTNAME: '@llamagpt'
    DEBUG_LEVEL: 'TRACE'
    NODE_ENV: 'production'
  restart: always

I am not getting a response, but it looks like it's trying.

Is there anything else I need to configure? What am I doing wrong?

PS: I did ask about the diagram bot and am excited. Thank you for everything you do.
yGuy commented 1 year ago

Sorry, with the above information I cannot help you. You will need to debug this yourself, with more logging. You do have a Mattermost token defined, don't you?

it looks like it's trying

What makes you think so? Can you share the logs?

Sorry, but I don't think I will be maintaining the llama branch. I may accept PRs, though.

yGuy commented 1 year ago

You won't be able to use the ":latest" tag with the LLaMA server. Instead, check out the llama branch and build the image yourself, locally.
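A minimal sketch of those steps, assuming a standard git and Docker setup (the image tag is just an example):

git clone -b llama https://github.com/yGuy/chatgpt-mattermost-bot.git
cd chatgpt-mattermost-bot
docker build . -t chatgpt-mattermost-bot:llama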

teknightstick commented 1 year ago

Yes, I have all the correct tokens entered; I just didn't post them here. I will research how to build my own image today. I appreciate the help.

yGuy commented 1 year ago

How to build an image is actually part of the README. And you don't even need to do that: if you have Node.js installed, you can just run the process from the shell.
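A rough sketch of running it from the shell, reusing the same variables as the compose file above; the src/botservice.js entry point is an assumption here, so check the README for the exact command:

git clone -b llama https://github.com/yGuy/chatgpt-mattermost-bot.git
cd chatgpt-mattermost-bot
npm install
MATTERMOST_URL='https://your-mattermost' \
MATTERMOST_TOKEN='***' \
DALAI_SERVER_URL='http://ipofcomputerrunningdalaiserve:3000' \
MATTERMOST_BOTNAME='@llamagpt' \
node src/botservice.js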

teknightstick commented 1 year ago

docker build . -t yguy/chatgpt-mattermost-bot

What would I put at the end of this to have it pull the llama branch?
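For reference, docker build also accepts a git URL with a #branch fragment, so the llama branch can be built without a separate checkout:

docker build -t yguy/chatgpt-mattermost-bot https://github.com/yGuy/chatgpt-mattermost-bot.git#llama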

teknightstick commented 1 year ago

To preface that: I have very little understanding of Docker, but I am trying to learn where I need to. It looks like your llama branch actually installs llama in the same container. I am looking for an option to basically replace the OpenAI API with a local (LAN) llama API I have on another computer entirely, of my choosing.

yGuy commented 1 year ago

There is an option to specify the running server. You don't need to use the Dockerfile in the repo if you have your own. Sorry, this is alpha-release quality, and if this all sounds Greek to you, then you are on your own.

I am keeping this open in case someone wants to help, but it's probably not going to be me in the near future.

ateuber commented 1 year ago

Hi, I'm trying to use this bot with LocalAI. The API should be compatible with OpenAI, but the bot has no environment variable to set a different base URL.

yGuy commented 1 year ago

The OpenAI "integration" is absolutely minimal. Just patch it if you want to use something else:

https://github.com/yGuy/chatgpt-mattermost-bot/blob/06f41d0216e56301131d092cac78a6357f627182/src/openai-thread-completion.js#L12
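A hypothetical sketch of such a patch, assuming the openai v3 Node SDK; OPENAI_API_BASE is an invented variable name, not one the bot currently reads:

// Point the OpenAI client at an alternative, OpenAI-compatible server
// such as LocalAI (e.g. http://localhost:8080/v1).
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  // basePath defaults to https://api.openai.com/v1 when unset
  basePath: process.env.OPENAI_API_BASE || "https://api.openai.com/v1",
});

const openai = new OpenAIApi(configuration);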

yGuy commented 1 year ago

Since 2.0 we are using OpenAI "function" calls, which require a deeper integration and more implementation work with alternative LLM APIs. I am closing this for now.
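For context, a minimal sketch of the OpenAI function-call request shape (openai v3 Node SDK assumed; the function name is illustrative, not one the bot actually defines):

// Inside an async function:
const response = await openai.createChatCompletion({
  model: "gpt-3.5-turbo-0613",
  messages: [{ role: "user", content: "Draw a diagram of our network" }],
  // The model may reply with a function_call instead of plain text;
  // an alternative LLM API would have to emulate this shape as well.
  functions: [
    {
      name: "create_diagram", // illustrative name
      description: "Render a diagram from a textual description",
      parameters: {
        type: "object",
        properties: {
          description: { type: "string" },
        },
        required: ["description"],
      },
    },
  ],
});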