Closed: teknightstick closed this issue 1 year ago
Sorry, with the information given I cannot help you. You will need to debug this yourself, with more logging. You do have a Mattermost token defined, don't you?
It looks like it's trying.
What makes you think so? Can you share the logs?
Sorry, but I don't think I will be maintaining the llama branch. I may accept PRs, though.
You won't be able to use the `:latest` tag with the LLaMA server. Instead, check out the llama branch and build the image yourself, locally.
Yes, I have all the correct tokens entered; I just didn't post them here. I will research how to build my own image today. I appreciate the help.
How to build an image is actually part of the README. And you don't even need to do that: if you have Node.js installed, you can just run the process from the shell.
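For reference, here is a minimal sketch of what running it from the shell could look like. The repository URL is inferred from the ghcr.io image name and `npm start` as the entry point is an assumption, so check the README and package.json; the variable names are taken from the compose services quoted later in this thread:

```sh
# Repository URL inferred from the ghcr.io image name (assumption)
git clone https://github.com/yguy/chatgpt-mattermost-bot.git
cd chatgpt-mattermost-bot
npm install

# Same variables as in the docker-compose services (placeholder values)
export MATTERMOST_URL='https://your-mattermost-host'
export MATTERMOST_TOKEN='...'
export OPENAI_API_KEY='...'
export MATTERMOST_BOTNAME='@chatgpt'

npm start   # assumed entry point
```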
```sh
docker build . -t yguy/chatgpt-mattermost-bot
```
What would I put at the end of this to have it pull the llama branch?
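One plausible answer, assuming the repository lives at github.com/yguy/chatgpt-mattermost-bot: either clone the llama branch first and build from the checkout, or pass a git URL with a `#branch` suffix as the build context, which Docker supports natively:

```sh
# Option 1: clone the llama branch, then build locally
git clone --branch llama https://github.com/yguy/chatgpt-mattermost-bot.git
cd chatgpt-mattermost-bot
docker build . -t yguy/chatgpt-mattermost-bot:llama

# Option 2: build straight from the remote branch (git URL build context)
docker build -t yguy/chatgpt-mattermost-bot:llama \
  https://github.com/yguy/chatgpt-mattermost-bot.git#llama
```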
To preface this: I have very little understanding of Docker, but I am trying to learn where I need to. It looks like your llama branch actually installs llama in the same container. I am looking for an option to basically replace the OpenAI API with a local (LAN) llama API I have on another computer entirely, of my choosing.
There is an option to specify the running server. You don't need to use the Dockerfile in the repo if you have your own. Sorry, this is alpha-release quality, and if this sounds all Greek to you, then you are on your own.
I am keeping this open in case someone wants to help, but it's probably not going to be me in the near future.
Hi, I'm trying to use this bot with LocalAI. The API should be compatible with OpenAI, but the bot has no environment variable to set a different base URL.
The OpenAI "integration" is absolutely minimal. Just patch it if you want to use something else:
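For instance, if the bot uses the v3 `openai` npm client (an assumption), the patch could be as small as making the client's `basePath` configurable; `OPENAI_API_BASE` here is a made-up variable name you would introduce yourself:

```typescript
import { Configuration, OpenAIApi } from "openai";

// basePath defaults to https://api.openai.com/v1; pointing it at a
// LocalAI instance reuses the same OpenAI-compatible REST surface.
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  basePath: process.env.OPENAI_API_BASE ?? "https://api.openai.com/v1", // hypothetical env var
});

export const openai = new OpenAIApi(configuration);
```

You would then set something like `OPENAI_API_BASE=http://localai-host:8080/v1` in the environment.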
Since 2.0 we use OpenAI "function" calls, which require deeper integration and more implementation work with alternative LLM APIs. I am closing this for now.
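To illustrate why that is harder to swap out: with function calls, each request carries function definitions and the model may answer with a `function_call` instead of plain text, so a drop-in backend has to implement that contract too. A sketch with the v3 client, where `get_weather` is purely illustrative and not part of the bot:

```typescript
const completion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "What's the weather in Berlin?" }],
  functions: [
    {
      name: "get_weather", // illustrative only
      description: "Get the current weather for a city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  ],
});

// Instead of `message.content`, the reply may contain
// `message.function_call = { name, arguments }`; an alternative
// LLM API must produce this shape for the bot to work unmodified.
const message = completion.data.choices[0].message;
```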
I have a llama server running on another local server in the house. I have my Mattermost stack running on a Synology server. I was able to get the OpenAI bot into Mattermost and it's working great. I would like to add another bot to the Mattermost stack, but one that reaches out to the other local machine running the Dalai server.
I noticed you have two branches: a main branch and a llama branch. I am port forwarding correctly for port 3000.
```yaml
  chatgpt:
    image: ghcr.io/yguy/chatgpt-mattermost-bot:latest
    container_name: chatgpt
    environment:
      MATTERMOST_URL: 'https://**'
      MATTERMOST_TOKEN: '*'
      OPENAI_API_KEY: '***'
      MATTERMOST_BOTNAME: '@chatgpt'
      DEBUG_LEVEL: 'TRACE'
      NODE_ENV: 'production'
    restart: always

  llamagpt:
    image: ghcr.io/yguy/chatgpt-mattermost-bot:latest
    container_name: llamagpt
    environment:
      MATTERMOST_URL: '***'
      MATTERMOST_TOKEN: '****'
      DALAI_SERVER_URL: 'http://ipofcomputerrunningdalaiserve:3000'
      MATTERMOST_BOTNAME: '@llamagpt'
      DEBUG_LEVEL: 'TRACE'
      NODE_ENV: 'production'
    restart: always
```