-
I am using the semantic_router.encoders.AzureOpenAIEncoder and semantic_router.llms.AzureOpenAILLM.
When trying to reproduce the examples from docs/02-dynamic-routes.ipynb, but with those encoder/…
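For context, here is a minimal sketch of how those two Azure classes might be wired together for the dynamic-routes example. The constructor keyword names, deployment name, and API version below are assumptions (they vary across semantic-router versions), so verify them against your installed version:

```python
import os

from semantic_router import Route, RouteLayer
from semantic_router.encoders import AzureOpenAIEncoder
from semantic_router.llms import AzureOpenAILLM

# Keyword names are assumed; check your semantic-router version's signatures.
encoder = AzureOpenAIEncoder(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    deployment_name="my-embedding-deployment",  # hypothetical deployment
    api_version="2024-02-01",                   # example API version
)
llm = AzureOpenAILLM(
    openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

# Dynamic routes additionally attach a function schema (as in the
# notebook); a plain route is shown here to keep the sketch short.
route = Route(name="get_time", utterances=["what time is it?"])
layer = RouteLayer(encoder=encoder, routes=[route], llm=llm)
```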
-
Hello, first of all thank you for the great work.
I have a question about the three questions below. Why are they considered harmful? When I tried to give these questions to Claude…
-
Claude 2 looks interesting
-
[Ollama](https://ollama.com/) is a fantastic tool that enables users to run freely available LLMs locally and chat with them via the command line. They regularly update which LLMs are available (llama3…
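Beyond the CLI, Ollama also serves a local HTTP API with an official Python client. A minimal sketch, assuming the `ollama` package is installed, the daemon is running, and the model was pulled beforehand (e.g. with `ollama pull llama3`):

```python
import ollama

# Sends a chat request to the local Ollama daemon.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```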
-
Currently, TRIDENT relies solely on the OpenAI API for its LLMs.
The OpenAI API makes it difficult to freely customize and control the base model.
Supporting FOSS LLMs should make it possible to develop TRIDEN…
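One common route to this (an illustrative sketch, not TRIDENT's actual design): many FOSS-model servers, such as vLLM and Ollama, expose OpenAI-compatible endpoints, so code written against the OpenAI API can often be redirected to a local model by swapping the base URL:

```python
from openai import OpenAI

# Point the standard OpenAI client at a local OpenAI-compatible server.
# The URL assumes Ollama's default port (11434); the key is ignored locally.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

completion = client.chat.completions.create(
    model="llama3",  # name of the locally served model
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```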
-
It's not clear from the repo README how I can use the FastChat UI to compare multiple LLMs on my local machine.
I have these models served via FastAPI and running on my local server.
Can anyone…
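One way to do this, sketched under the assumption that FastChat's serving stack is already up (a controller, one `fastchat.serve.model_worker` per model, and `fastchat.serve.openai_api_server`; the side-by-side browser UI is a separate Gradio module per the FastChat README): query each registered model through the same OpenAI-compatible endpoint and compare outputs. The model names here are hypothetical:

```python
from openai import OpenAI

# FastChat's OpenAI-compatible server listens on port 8000 by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

prompt = "Summarize the theory of relativity in one sentence."
for model in ["vicuna-7b-v1.5", "mistral-7b-instruct"]:  # hypothetical names
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(reply.choices[0].message.content)
```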
-
# Alex Strick van Linschoten - My finetuned models beat OpenAI’s GPT-4
Finetunes of Mistral, Llama3 and Solar LLMs are more accurate for my test data than OpenAI’s models.
[https://mlops.systems/pos…
-
explore the [WebGPT](https://github.com/0hq/WebGPT) library as a potential (seems full?) replacement for TFJS.
try on small standard architectures like nanoGPT
we can explore this in parallel to o…
-
**Describe the bug**
Once ragas is installed, importing it fails with an error on the import of the Pydantic output parser from LangChain.
Ragas version: 0.1.6
Python version: 3.10
LangChain vers…
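A minimal repro sketch of the failure described; the exact traceback depends on which LangChain packages pip resolved:

```python
# With ragas 0.1.6 and a LangChain release that has moved or removed its
# Pydantic output parser, the top-level import fails inside ragas.
import ragas  # ImportError raised from ragas' internal langchain import

# A common fix for this class of error is to align the LangChain packages
# with the versions ragas 0.1.6 pins in its requirements, rather than
# whatever latest releases pip happened to resolve.
```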
-
Would it be possible for us to use Huggingface or vLLM for loading models locally? The Ollama implementation is a bit more challenging.
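For the vLLM side, a minimal offline-inference sketch (the model name is only an example; any Hugging Face model vLLM supports works the same way):

```python
from vllm import LLM, SamplingParams

# Loads the weights locally (fetched from the Hugging Face Hub on first
# use) and runs batched offline inference.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # example model
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["What is semantic routing?"], params)
print(outputs[0].outputs[0].text)
```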