rmusser01 / tldw

Too Long, Didn't Watch (TL/DW): Your Personal Research Multi-Tool - Open Source NotebookLM
Apache License 2.0

Improvement: Add functionality to start & query local LLM server, with X model loaded by default #5

Closed rmusser01 closed 1 month ago

rmusser01 commented 2 months ago

A user should be able to run the application and have the LLM endpoint started and queried by the script itself, without any interaction from the user. This would help support batch usage.

Ideally this would be achieved using Ollama and llama.cpp, exposed as an option in the CLI.
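A minimal sketch of the flow being requested, using llama.cpp's bundled `llama-server` binary and its OpenAI-compatible HTTP endpoint. Function names, the model filename, and the fixed sleep are illustrative assumptions, not tldw's actual implementation:

```python
import json
import subprocess
import time
import urllib.request

def build_server_command(model_path, port=8080, ctx_size=8192):
    """Command line for llama.cpp's llama-server binary (hypothetical defaults)."""
    return ["llama-server", "-m", model_path,
            "--port", str(port), "-c", str(ctx_size)]

def query_summary(text, port=8080):
    """POST to the server's OpenAI-compatible chat endpoint and return the reply."""
    payload = json.dumps({
        "messages": [
            {"role": "system", "content": "Summarize the following transcript."},
            {"role": "user", "content": text},
        ],
    }).encode("utf-8")
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def batch_summarize(model_path, texts, port=8080):
    """Start the server, summarize each text, then shut the server down --
    no user interaction required, which is what enables batch usage."""
    server = subprocess.Popen(build_server_command(model_path, port))
    try:
        time.sleep(10)  # crude readiness wait; polling GET /health is more robust
        return [query_summary(t, port) for t in texts]
    finally:
        server.terminate()
```

For Ollama the same shape applies, with `ollama serve` as the subprocess and its REST API as the query target.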

rmusser01 commented 1 month ago

Plan: llama.cpp, with Llama3-8B and MS Phi-3 (128k context) recommended as offline models most people can run. For heavier hardware: Mixtral 8x22B and Llama3-70B.
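Those recommendations could surface as CLI defaults. A minimal `argparse` sketch; the flag names and model filenames are hypothetical, not tldw's actual interface:

```python
import argparse

# Illustrative mapping of recommendation names to GGUF files (filenames assumed)
RECOMMENDED_MODELS = {
    "llama3-8b": "Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",    # runs on most consumer GPUs
    "phi-3-128k": "Phi-3-mini-128k-instruct.Q4_K_M.gguf",   # long-context option
    "mixtral-8x22b": "Mixtral-8x22B-Instruct.Q4_K_M.gguf",  # heavy-hardware option
    "llama3-70b": "Meta-Llama-3-70B-Instruct.Q4_K_M.gguf",  # heavy-hardware option
}

def build_parser():
    parser = argparse.ArgumentParser(description="tldw-style local LLM options")
    parser.add_argument("--local-llm", choices=["llama.cpp", "ollama"],
                        help="start and query a local LLM server automatically")
    parser.add_argument("--model", choices=sorted(RECOMMENDED_MODELS),
                        default="llama3-8b",
                        help="which recommended model to load by default")
    return parser
```

With a default of `llama3-8b`, `tldw --local-llm llama.cpp <url>` would work out of the box on typical hardware, while users could opt into the larger models explicitly.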

Using the chat available at https://gpt.h2o.ai/ , one can compare the summarizations side by side.