-
### What is the issue?
`llama runner process has terminated: signal: segmentation fault (core dumped)`
This is the error I get every time I try to load that particular model.
All other mod…
-
I ran into a problem when trying to start AnythingLLM locally. After I execute "docker-compose up -d --build" I get this error ..
"287.5 nodejs
288.0 0 upgraded, 1 newly installed, 0 to remove a…
-
Hi Timothy, we have been testing the full “**Anything LLM Chatbot**” application, as it matches our idea perfectly, but we have found many adjustments that need to be made. Some **issues, UX changes, UI s…
-
Add support for the owner of an AnythingLLM instance to create sub-users who can have their own workspaces.
Sub-users can add and use all documents and passwords in the workspace. The root account ca…
-
Hi,
I saw that local LLMs are on the roadmap. It won't make much sense unless you also use local sentence transformers (see the MTEB benchmark), such as Instructor or e5.
I would suggest having this looked in…
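Since local embedding models are the core of the request above, here is a minimal sketch of how retrieval over locally computed embeddings works. The toy vectors below stand in for the output of a local sentence-transformer model such as e5; all document names and numbers are illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for vectors a local model (e.g. e5 or
# Instructor via sentence-transformers) would produce per document.
doc_embeddings = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api reference": [0.0, 0.2, 0.9],
}

# The query would be embedded by the same local model.
query_embedding = [0.85, 0.15, 0.05]

# Rank documents by similarity to the query, most similar first.
ranked = sorted(
    doc_embeddings.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
print(ranked[0][0])  # → refund policy
```

The same ranking step is what a vector database performs at scale; keeping the embedding model local means neither documents nor queries leave the machine.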
-
Please help!
I did an anything-llm Docker installation in multi-user mode, and I want to upgrade to the latest version. It is installed on my server, which is exposed to the internet with its own public domain…
-
I am trying out this code, and kudos for the YouTube introduction and the effortless onboarding. I have installed, configured, and uploaded one 90-page manual in around 20 minutes.
I am hooked into…
-
AnythingLLM should ship with a full developer-focused backend API so that you can programmatically run queries, chats, and other commands that you would be able to run via the frontend.
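As a sketch of what such an API could look like from a client's perspective, the helper below assembles a chat request against a hypothetical `/api/v1/workspace/<slug>/chat` endpoint. The endpoint path, header names, and payload fields are assumptions for illustration, not AnythingLLM's confirmed API surface.

```python
import json

def build_chat_request(base_url, api_key, workspace_slug, message, mode="query"):
    """Assemble the URL, headers, and JSON body for a workspace chat call.

    Every endpoint and field name here is a hypothetical placeholder for
    whatever the real developer API ends up exposing.
    """
    return {
        "url": f"{base_url}/api/v1/workspace/{workspace_slug}/chat",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"message": message, "mode": mode}),
    }

req = build_chat_request(
    "http://localhost:3001", "my-api-key", "docs", "What is in my manual?"
)
print(req["url"])  # → http://localhost:3001/api/v1/workspace/docs/chat
```

A request dict like this could then be sent with any HTTP client; the point is that everything the frontend can do would also be reachable with a bearer token and a JSON body.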
-
Hi,
I'm trying to deploy this locally using the instructions provided in the documentation. Once the setup was done, I tried to execute the commands `yarn dev:server` and `yarn dev:frontend`. Receive…
-
This worked before, but after a completely new installation the dockerized repo shows the following:
? What kind of data would you like to add to convert into long-term memory? Article or Blog Link(s)
? …