snexus / llm-search

Querying local documents, powered by LLM
MIT License
510 stars 60 forks

What hardware was the .gif run on #119

Closed chozillla closed 1 month ago

chozillla commented 1 month ago

Hi,

Not an issue, but a question. I was curious about the exact hardware you used when you recorded the demo in the .gif. I am running on a potato of a Mac Pro 2017 and would be using CPU only.

Thanks!

snexus commented 1 month ago

Hi,

I ran it on an Nvidia 3060 with 10GB of VRAM, using local models with up to 8B parameters (the same limitations apply as to any other software using local models).

If you are using an off-the-shelf model such as OpenAI's, the hardware requirements are much more modest, since you only need local compute for the embeddings part. Even there, you can choose an embedding model that runs fine on CPU.

chozillla commented 1 month ago

Awesome, will try it out! Thanks!