Mintplex-Labs / anything-llm

The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more.
https://anythingllm.com
MIT License

[DOCS]: AnythingLLM for desktop beta - folder for storing downloaded LLAMA GGUF LLMs from HuggingFace #712

Closed by BharatBlade 7 months ago

BharatBlade commented 7 months ago

Description

I see documentation for where to store GGUF models here: https://github.com/Mintplex-Labs/anything-llm/blob/master/server/storage/models/README.md

It says that "/server/storage/models/downloaded is the default location that your model files should be at". However, the directory structure of the anythingllm-desktop installation is, understandably, very different from the source/dev directory structure on GitHub, so I'm confused about where to put models for the "Native" LLM Preference. To be specific, I'm not talking about a dev build or a Docker build; I mean the very exciting "AnythingLLM for desktop" public beta.
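My mental model of how the dev build finds models is something like the sketch below. This is only an assumption based on the README, not AnythingLLM's actual code; the path comes from server/storage/models/README.md and the helper name is mine:

```typescript
import { readdirSync, existsSync } from "node:fs";
import path from "node:path";

// Assumption: the dev-build server simply scans this folder for *.gguf files.
// The path is taken from server/storage/models/README.md.
const MODEL_DIR = path.resolve("server", "storage", "models", "downloaded");

// Hypothetical helper: list any GGUF files sitting in the model directory.
function listDownloadedModels(): string[] {
  if (!existsSync(MODEL_DIR)) return [];
  return readdirSync(MODEL_DIR).filter((file) => file.endsWith(".gguf"));
}

// Presumably an empty result is what leaves the UI stuck on
// "-- waiting for models --".
console.log(listDownloadedModels());
```

On the desktop beta I can't find any folder that plays this role, hence the question.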

Any help at all is greatly appreciated!

I can't find a similar folder for downloaded GGUF models, so I just get the message "-- waiting for models --":

[screenshot: "-- waiting for models --" shown in the LLM Preference screen]

For reference, this is the directory structure/files I'm looking at in the anythingllm-desktop installation:

[screenshots: contents of the anythingllm-desktop installation directory]

timothycarambat commented 7 months ago

We are going to deprecate Native LLM support in the next AnythingLLM desktop update, because node-llama-cpp does not pre-build for the underlying operating system, which seems to cause problems with inference. It should have been removed from the beta, but that was overlooked.

We would really recommend using LMStudio, LocalAI, or Ollama for local LLM support instead, as their builds and interfaces are much more stable 👍
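For example, once Ollama is running locally, anything that can make an HTTP request can use it. A minimal sketch (it assumes you have already run something like `ollama pull llama2`; the model name is just an example):

```typescript
// Query a local Ollama server (default port 11434) via its /api/generate
// endpoint. stream: false returns the full completion in one JSON object.
async function ask(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama2", prompt, stream: false }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}

ask("Why is the sky blue?").then(console.log);
```

AnythingLLM's Ollama/LMStudio/LocalAI LLM Preferences talk to endpoints like this, so the models live wherever those tools store them rather than inside the AnythingLLM installation folder.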