Leon-Sander / Local-Multimodal-AI-Chat

GNU General Public License v3.0

CPU ONLY MODE #22

Closed: NeevrajKB closed this issue 7 months ago

NeevrajKB commented 7 months ago

Hi Leon!

PS C:\Users\Admin> & C:/Users/Admin/AppData/Local/Programs/Python/Python311/python.exe c:/Users/Admin/Desktop/local_multimodal_ai_chat-main/local_multimodal_ai_chat-main/llm_chains.py
Traceback (most recent call last):
  File "c:\Users\Admin\Desktop\local_multimodal_ai_chat-main\local_multimodal_ai_chat-main\llm_chains.py", line 20, in <module>
    def create_llm(model_path = config["ctransformers"]["model_path"]["large"], model_type = config["transformers"]["model_type"], model_config = config["ctransformers"]["model_config"]):
KeyError: 'ctransformers'

Are any specific changes required for CPU-only inference? I don't have a GPU...
Leon-Sander commented 7 months ago

No changes are required for CPU inference. The config file in this repository runs everything on CPU unless you change it yourself.
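
For reference, this is a minimal sketch of what CPU-only loading looks like with the ctransformers library directly; the paths and values are taken from the config shown below, and the repository actually wires this up through create_llm, so treat this only as an illustration:

from ctransformers import AutoModelForCausalLM

# gpu_layers=0 keeps every layer on the CPU; threads=-1 uses all available cores.
llm = AutoModelForCausalLM.from_pretrained(
    "./models/mistral-7b-instruct-v0.1.Q5_K_M.gguf",
    model_type="mistral",
    gpu_layers=0,
    threads=-1,
    context_length=2048,
    max_new_tokens=256,
)
print(llm("Hello!"))  # the loaded model is callable and returns generated text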

NeevrajKB commented 7 months ago

Any reason for the error then? All I want to build is a very basic PDF RAG with the bigscience BLOOM base model (2-3 GB). The only reason is that it's a very lightweight model trained on great literary data, but I can't figure it out... Thanks Leon!

Leon-Sander commented 7 months ago

Well, you have a KeyError. Let's look at the config file as it is in this repository:

ctransformers:
  model_path:
    small: "./models/mistral-7b-instruct-v0.1.Q3_K_M.gguf"
    large: "./models/mistral-7b-instruct-v0.1.Q5_K_M.gguf"

  model_type: "mistral"
  model_config:
    'max_new_tokens': 256
    'temperature': 0.2
    'context_length': 2048
    'gpu_layers': 0 # 32 to put all mistral layers on GPU, might differ for other models
    'threads': -1

chat_config:
  chat_memory_length: 2
  number_of_retrieved_documents: 3

If you want to access the model path after the config file has been loaded, you would write:

config["ctransformers"]["model_path"]["large"]

Now we have used the keys "ctransformers", "model_path" and "large".
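
For illustration, this is roughly how the config ends up as a nested dictionary, assuming it is loaded with PyYAML (the file name config.yaml is an assumption here):

import yaml

# Load the YAML config into a nested Python dict.
with open("config.yaml", "r") as f:
    config = yaml.safe_load(f)

# Walk down the nesting one key at a time:
print(config["ctransformers"]["model_path"]["large"])
# -> ./models/mistral-7b-instruct-v0.1.Q5_K_M.gguf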

Your error suggests that you don't have the key "ctransformers" in your config file, so I guess you changed it to something else in the config file but did not change the key in the create_llm function, and possibly somewhere else.
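
A quick way to verify this, continuing the loading sketch above, is to print the top-level keys of the loaded config:

print(list(config.keys()))
# If 'ctransformers' is missing from this list, the default arguments
# in create_llm will raise exactly the KeyError you are seeing.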

Side note: the last time I checked, ctransformers no longer seemed to be getting updates, so it may happen that not all models work with it. I guess you have to test it, or try to load the model in another way.
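
For example, a BLOOM checkpoint can be loaded on CPU with the Hugging Face transformers library instead. This is only a sketch, not code from this repository, and the model id "bigscience/bloom-3b" is an assumption; any BLOOM checkpoint works the same way:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Plain CPU inference with Hugging Face transformers (no GPU required).
# The model id below is an assumption; substitute the checkpoint you actually use.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b")

inputs = tokenizer("The book begins with", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))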

NeevrajKB commented 7 months ago

Hi! Thanks for the answer. Sadly my model does not seem to work with ctransformers, and since I just need a basic RAG, I'm following this video on your channel: https://youtu.be/Fqr1zwdPbAM?si=VXUwhI5diORFXyO_ "Document Retrieval with Local LLMs for FREE (Search Whole Books)". Thanks again man! No other YouTuber helps out on issues or questions in such a friendly way. Thanks Leon!