mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude, and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers, and many more model architectures. Features: text, audio, video, and image generation, voice cloning, distributed inference
https://localai.io
MIT License

Absolute paths not being respected in model configs #2200

Open · Prowler1000 opened this issue 4 months ago

Prowler1000 commented 4 months ago

LocalAI version: v2.13.0

Environment, CPU architecture, OS, and Version: Intel Xeon E5-2643 v4, GNU/Linux (Unraid 6.12.10)

Describe the bug: Absolute paths are not being respected in model config YAML files.

To Reproduce: Create a YAML config that points to an existing model or template file on the system using an absolute path. LocalAI resolves that path relative to the models directory instead of using it as-is.

Expected behavior: An absolute path should be used verbatim, not interpreted as relative to the models directory.
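For illustration only, here is a minimal Go sketch (LocalAI is written in Go) of resolution logic that would match this expectation; `resolveModelPath` is a hypothetical helper, not LocalAI's actual implementation:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolveModelPath is a hypothetical helper: an absolute path is
// honored verbatim, while a relative path is resolved against the
// models directory.
func resolveModelPath(modelsDir, p string) string {
	if filepath.IsAbs(p) {
		return p
	}
	return filepath.Join(modelsDir, p)
}

func main() {
	// Absolute path: used as-is.
	fmt.Println(resolveModelPath("/models/configs", "/models/templates/chat"))
	// Relative path: resolved against the models directory,
	// printing /models/cache/model.cache
	fmt.Println(resolveModelPath("/models/configs", "../cache/model.cache"))
}
```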

Additional context: Prior to version 2.13.0, I kept model files, their configs, and their templates in separate directories for better organization (I had many, including various quants for different tasks). I discovered this while diagnosing a separate template issue caused by an undocumented change.

localai-bot commented 4 months ago

Hello Prowler1000, thank you for bringing this issue to our attention in the LocalAI project. I'm a bot, an experiment of @mudler, auditing the tickets opened in this GitHub project.

I understand that you are experiencing a problem where absolute paths in model configuration YAML files are not being respected, so models and templates referenced by absolute system paths are loaded from incorrect locations. This issue seems to have started in version 2.13.0.

In order to better assist you, please provide the following information:

  1. Can you provide a sample yaml config that demonstrates the issue? It would be helpful if we could reproduce the problem.
  2. Are there any environment-specific dependencies, settings, or configurations that might affect this issue or help us identify the root cause?

Once we have this information, we will be able to investigate the issue further and provide appropriate solutions or workarounds.

Thank you for your patience and your contribution to improving LocalAI.

Regards, The GitHub Bot Auditor (Experiment of @mudler)

Prowler1000 commented 4 months ago

Here's a sample config for a model I use with Home Assistant:

name: Llama3-HOAS
mmap: false
parameters:
  # absolute path to the model file (not respected in v2.13.0)
  model: /models/models/Llama3/Meta-Llama-3-8B-Instruct.Q8_0.gguf

debug: true
# relative path; resolves against the models directory
prompt_cache_path: "../cache/Llama3-8b.cache"
prompt_cache_all: true
prompt_cache_ro: false
backend: llama-cpp

threads: 16
embeddings: true

mlock: true

# absolute paths to the prompt template files (also not respected)
template:
  chat_message: /models/templates/Llama3-HOAS/chat_message
  chat: /models/templates/Llama3-HOAS/chat
  function: /models/templates/Llama3-HOAS/function
  completion: /models/templates/Llama3-HOAS/completion
context_size: 8192
stopwords:
- <|eot_id|>

My file structure is as follows:

/models
    /cache
    /configs
    /models
    /templates

With LOCALAI_MODELS_PATH actually set to /models/configs.

Prior to v2.13.0, everything was set up to use relative paths (prefixed with ../). However, an issue popped up with the templates being outside the model directory, and in fixing that I decided it would be better to switch everything to absolute paths.
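To make the suspected behavior concrete, here is a small Go sketch showing how this layout resolves if paths are joined to the models directory unconditionally (an assumption about the cause, not confirmed from LocalAI's source):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	modelsDir := "/models/configs" // LOCALAI_MODELS_PATH in this setup

	// Relative paths resolve correctly against the models directory:
	// prints /models/cache/Llama3-8b.cache
	fmt.Println(filepath.Join(modelsDir, "../cache/Llama3-8b.cache"))

	// But if an absolute path is joined unconditionally (the suspected bug),
	// it ends up nested under the models directory:
	// prints /models/configs/models/templates/Llama3-HOAS/chat
	fmt.Println(filepath.Join(modelsDir, "/models/templates/Llama3-HOAS/chat"))
}
```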