kennethleungty / Llama-2-Open-Source-LLM-CPU-Inference

Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
https://towardsdatascience.com/running-llama-2-on-cpu-inference-for-document-q-a-3d636037a3d8
MIT License

config customization #20

Open · AleksandrTulenkov opened this issue 1 year ago

AleksandrTulenkov commented 1 year ago

Hi! Thanks - awesome job.

I have a question: why does changing the config (bigger chunks, higher vector counts) lead to broken output? For example, these settings give me illogical output:

VECTOR_COUNT: 3
CHUNK_SIZE: 600
CHUNK_OVERLAP: 50
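
For context, here is a minimal sketch of where these three settings enter the retrieval pipeline, assuming the LangChain wiring described in the accompanying article; the placeholder text and embedding model name below are illustrative assumptions, not the repo's exact code:

```python
# Sketch of how VECTOR_COUNT / CHUNK_SIZE / CHUNK_OVERLAP feed the pipeline
# (assumption: LangChain wiring as in the accompanying article).
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

VECTOR_COUNT = 3    # chunks retrieved per query
CHUNK_SIZE = 600    # characters per chunk
CHUNK_OVERLAP = 50  # characters shared between consecutive chunks

raw_text = "lorem ipsum dolor sit amet " * 500  # stand-in for the loaded PDF text

# CHUNK_SIZE / CHUNK_OVERLAP control how the source text is split.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=CHUNK_SIZE, chunk_overlap=CHUNK_OVERLAP
)
chunks = splitter.split_text(raw_text)

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
vectordb = FAISS.from_texts(chunks, embeddings)

# VECTOR_COUNT becomes the retriever's k: each query stuffs up to roughly
# VECTOR_COUNT * CHUNK_SIZE (~1,800) characters of context into the prompt.
retriever = vectordb.as_retriever(search_kwargs={"k": VECTOR_COUNT})
```

One thing worth checking (an assumption on my side, not confirmed in this thread): with these values the retrieved context plus the prompt template can exceed a small model context window, in which case the prompt gets truncated and the answer comes out incoherent.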