-
Given that an LLM is an evolving system, reporting the prompts might not be enough: different versions are likely to return different answers. For complete transparency, a researcher, when possible, sh…
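A minimal sketch of what such a record could look like, assuming the OpenAI Python SDK (the model name is an example): the response object's `model` field reports the exact model version that served the request, which can be stored alongside the prompt.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Summarize the attached abstract in one sentence."
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whichever model the study uses
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # as-deterministic-as-possible sampling for reproducibility
)

# Record the prompt together with the exact model version that answered it.
record = {
    "prompt": prompt,
    "model_version": response.model,  # e.g. a dated snapshot like "gpt-4o-2024-08-06"
    "answer": response.choices[0].message.content,
}
print(json.dumps(record, indent=2))
```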
-
Please add an extension to use the Groq LLM.
https://groq.com/
Groq exposes an OpenAI-compatible interface and is faster and cheaper.
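Because the endpoint is OpenAI-compatible, the extension could likely be a thin wrapper: a minimal sketch using the standard OpenAI Python client pointed at Groq's base URL (the model name is an example; check Groq's model list for current ones):

```python
import os
from openai import OpenAI

# Groq serves an OpenAI-compatible API, so the standard client works
# once base_url points at Groq's endpoint.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # example Groq-hosted model
    messages=[{"role": "user", "content": "Hello from the Groq extension!"}],
)
print(response.choices[0].message.content)
```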
-
#### ALL software version info
(this library, plus any other relevant software, e.g. bokeh, python, notebook, OS, browser, etc. should be added within the dropdown below.)
Software Version Info
…
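One way to gather most of these from inside the environment (a sketch; the package names are the examples from the template, swap in whatever is relevant):

```python
import platform
import sys
from importlib.metadata import PackageNotFoundError, version

print("Python:", sys.version)
print("OS:", platform.platform())
for pkg in ("bokeh", "notebook"):  # example packages named above
    try:
        print(f"{pkg}:", version(pkg))
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```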
-
**Is your feature request related to a problem? Please describe.**
I am trying to rerun the LLM when the generation is hallucinated, but I am getting a circular dependency error. Is there a way to…
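Without the full traceback it's hard to say where the circular dependency arises, but one common workaround is to keep the retry loop outside the generation call, so the validator never needs to call back into the generator. A rough sketch, with `generate` and `looks_hallucinated` as hypothetical stand-ins for the project's own functions:

```python
import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for the real LLM call.
    return random.choice(["a grounded answer", "a hallucinated answer"])

def looks_hallucinated(answer: str) -> bool:
    # Hypothetical stand-in for the real hallucination check.
    return "hallucinated" in answer

def generate_with_retries(prompt: str, max_attempts: int = 3) -> str:
    # The loop owns the control flow, so the validator never invokes the
    # generator (which is one way a circular dependency can arise).
    answer = generate(prompt)
    for _ in range(max_attempts - 1):
        if not looks_hallucinated(answer):
            break
        answer = generate(prompt)
    return answer  # last attempt, validated or not

print(generate_with_retries("What year was the library released?"))
```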
-
LLMs are now really good at language; having a hotkey that sends a prompt with the word (or the sentence) would be nice.
I can imagine this working with a local text file representing the…
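A rough sketch of how this could work, assuming `pynput` for the global hotkey and a local text file holding the prompt template (both are assumptions, not the project's actual design):

```python
from pathlib import Path
from pynput import keyboard

# Assumed local file containing the prompt template with a {word} placeholder.
PROMPT_FILE = Path("prompt.txt")

def send_prompt() -> None:
    # Stand-in for the actual "send to LLM" call; here we just print it.
    template = PROMPT_FILE.read_text()
    word = "example"  # would come from the current selection or clipboard
    print(template.format(word=word))

# Fires send_prompt on Ctrl+Alt+P, system-wide.
with keyboard.GlobalHotKeys({"<ctrl>+<alt>+p": send_prompt}) as hotkeys:
    hotkeys.join()
```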
-
The solution to a given task should be able to go through Llama to be displayed as "feedback" (see the sketch after this checklist).
- [x] Feedback is displayed in the feedback div
- [ ]
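A minimal sketch of the remaining step, assuming the model is served locally through Ollama's HTTP API (the endpoint and model tag are assumptions):

```python
import json
import urllib.request

def llama_feedback(solution: str) -> str:
    # Assumes a local Ollama server; /api/generate is its standard endpoint.
    payload = json.dumps({
        "model": "llama3",  # assumed model tag
        "prompt": f"Give short feedback on this solution:\n{solution}",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# The returned text is what would be inserted into the feedback div.
print(llama_feedback("def add(a, b): return a - b"))
```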
-
Is it possible to use local models, or are there any plans for that to happen? For example, using models from Hugging Face like meta-llama/Llama-3.2-11B-Vision.
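For reference, loading a Hugging Face model locally usually looks something like the sketch below with the `transformers` library (a smaller text-only Llama is used here as a stand-in; the 11B vision model needs an image-capable pipeline and far more memory):

```python
from transformers import pipeline

# Downloads (or reuses a cached copy of) the weights and runs fully locally.
# The model name is a stand-in; gated Llama checkpoints also require
# accepting the license on Hugging Face and logging in via huggingface-cli.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
)

result = generator("Local models would let us ", max_new_tokens=30)
print(result[0]["generated_text"])
```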
-
Right now, folks who clone this repo are likely to get errors when running the bot unless they happen to have Ollama running at a specific IP address. Remove that hardcoded address, and bypass calls t…
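One way to handle the first part is to read the address from Ollama's own `OLLAMA_HOST` environment variable and fall back to the default local port (a sketch, not the repo's actual code):

```python
import os

import requests

# Respect OLLAMA_HOST if the user sets it; otherwise fall back to Ollama's
# default local port instead of a hardcoded LAN address.
OLLAMA_BASE_URL = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

def ollama_available() -> bool:
    # Lets the bot degrade gracefully when no Ollama server is reachable.
    try:
        return requests.get(OLLAMA_BASE_URL, timeout=2).ok
    except requests.RequestException:
        return False
```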
-
How to set `base_url` and `model` in the Python SDK?
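If this refers to the OpenAI Python SDK (an assumption, since the question doesn't say), `base_url` is set once on the client and `model` is passed per request:

```python
from openai import OpenAI

# base_url is configured on the client; model is chosen per request.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # example: an OpenAI-compatible server
    api_key="unused-for-local-servers",
)

response = client.chat.completions.create(
    model="llama3",  # example model name exposed by that server
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```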