nomic-ai / gpt4all

GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
https://nomic.ai/gpt4all
MIT License

Python Bindings: Model no longer kept in cache #2354

Open woheller69 opened 4 months ago

woheller69 commented 4 months ago

Bug Report

I just compiled the updated Python bindings, v2.7.0. When I terminate my GUI, the whole model now has to be loaded again on the next start, which can take a long time. In previous versions only the first start took long; subsequent starts with the same model were fast.

Steps to Reproduce

Use the CLI:

1. python3 app.py repl --model dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf
2. /exit
3. python3 app.py repl --model dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf

-> the model is loaded from disk again
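To make the reload cost measurable, here is a minimal timing sketch (my own illustration, not part of the original report); it assumes the gpt4all Python package and that the .gguf file already sits in the current directory:

```python
# Minimal sketch: time how long GPT4All takes to load the model.
# Run it twice in a row; with a warm OS page cache the second run
# should be much faster than the first.
import time
from gpt4all import GPT4All

t0 = time.monotonic()
model = GPT4All(
    "dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf",
    model_path=".",        # assumes the model file is in the current directory
    allow_download=False,  # only load what is already on disk
)
print(f"model load took {time.monotonic() - t0:.1f} s")
```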

Expected Behavior

On CLI restart the model should still be in the OS file cache, so the second load should be fast.

Your Environment

I uninstalled v2.7.0 and downgraded to v2.6.0; with v2.6.0 caching works again.

woheller69 commented 4 months ago

This does not happen with smaller models such as Llama 3 8B Instruct Q8, which is 8.5 GB in size; Dolphin 2.7 Mixtral 8x7b Q4_K_M is 26 GB.

I have 36 GB of RAM, so this should not be a problem, and it worked perfectly in v2.6.0.

In the resource monitor the behaviour is also strange: loading first fills the page cache and then moves data from cache into process memory.

Loading to cache: [screenshot, 2024-05-16 21-30-05]
Moving from cache to memory: [screenshot, 2024-05-16 21-30-18]

For the smaller model, only the cache grows (model fully loaded): [screenshot, 2024-05-16 21-33-32]
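For context, the two patterns visible in the resource monitor match the difference between memory-mapping a file and reading it into private memory. A standalone sketch (plain Python on the model file, not the bindings' actual loading code):

```python
# Standalone illustration (not the bindings' code) of the two loading
# patterns: mmap leaves pages in the kernel page cache, where they survive
# process exit; read() copies them into private memory, freed on exit.
import mmap

path = "dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf"

# Pattern where only "cache" grows: memory-map and touch every page.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    _ = mm[::4096]  # fault one byte per 4 KiB page into the page cache
    mm.close()

# Pattern where data "moves from cache to memory": a full private copy.
with open(path, "rb") as f:
    data = f.read()  # anonymous allocation; gone when the process exits
```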

woheller69 commented 4 months ago

With v2.6.0, Dolphin 2.7 is held in cache and reloads quickly: [screenshot, 2024-05-17 07-50-40]

I notice the same behaviour with llama-cpp-python. Has there been a regression in llama.cpp?
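If the cause is a change in the loading path, one thing worth checking is whether mmap is still being used. In llama-cpp-python that flag is exposed directly; a sketch, assuming the llama_cpp package and the model file from this report:

```python
# Sketch: force the mmap loading path in llama-cpp-python. With use_mmap=True
# the weights stay in the OS page cache after exit, so a restart is fast;
# with use_mmap=False they are read into private memory each time.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf",
    use_mmap=True,  # keep model pages in the page cache across runs
)
```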