-
**Describe the bug**
After installing ragas, I tried to import it and got an error on the import of the pydantic output parser from langchain.
Ragas version: 0.1.6
Python version: 3.10
LangChain vers…
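The traceback above is cut off, but import errors like this often come from a class moving between packages across versions (for langchain, parsers have lived under both `langchain` and `langchain_core` at different releases — an assumption here, since the exact paths depend on the installed version). A minimal sketch of a fallback-import helper, demonstrated with standard-library module names as stand-ins:

```python
import importlib

def first_importable(*module_names):
    """Return the first module that imports cleanly, else raise ImportError."""
    for name in module_names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {module_names} could be imported")

# Stand-in demo using stdlib names; for the real fix one would try the
# newer langchain path first and fall back to the older one.
mod = first_importable("no_such_module_xyz", "json")
print(mod.__name__)  # json
```

Pinning compatible `ragas`/`langchain` versions is the cleaner long-term fix; the helper is only a stopgap while versions are in flux.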
-
Hey folks, I saw in the readme that you're interested in adding additional LLMs once you have confidence that they'll work well. Is there a method that you would use to determine if an LLM is working?…
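One common approach (a sketch of the general idea, not the maintainers' actual vetting process) is a small smoke test: send a prompt with a trivially checkable answer and assert the response parses. The `fake_llm` below is a stub standing in for any real prompt-to-text completion function:

```python
def smoke_test_llm(complete, prompt="Reply with exactly the word PONG.",
                   expected="PONG"):
    """Return True if the completion function answers a trivially
    checkable prompt; a cheap first signal that the wiring works."""
    try:
        reply = complete(prompt)
    except Exception:
        return False
    return isinstance(reply, str) and expected in reply

# Stub standing in for a real LLM call (assumption: your client exposes
# a single prompt -> text function you can wrap like this).
def fake_llm(prompt: str) -> str:
    return "PONG" if "PONG" in prompt else "?"

print(smoke_test_llm(fake_llm))  # True
```

A fuller check would run a small eval set through the library's own metrics, but a smoke test like this catches broken wiring before spending tokens on a benchmark.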
-
When attempting to use the `OpenAI` class from `llama_index.llms.openai` while running the example from this repo, the following line fails:
```
agent1 = ReActAgent.from_tools([tool], llm=get_tool_llm…
```
-
Would it be possible for us to use Hugging Face or vLLM for loading models locally? The Ollama implementation is a bit more challenging.
-
Can you offer support for the ALTI attribution method for LLMs such as LLaMA?
-
I need to use locally deployed LLMs for evaluation within my current setup. While setting up LLM monitoring using Phoenix, I require evaluations alongside the traces, but I am only able to find [evaluation llm…
-
### Feature Description
I just deployed my first Azure ML Studio Serverless endpoint and had to realise that there is no matching LLM type in llama_index. Am I missing something? Or does llama_ind…
-
Adding support for local models (e.g. through llama.cpp) would make this project even more impactful. Many local models, especially at high parameter counts, come pretty close to GPT-3.5 Turbo, so …
-
Hey, I'm trying to use my LLM on vLLM server which is exposed as an API.
Usually, I create an OpenAI LLM instance with LangChain like the one below, and it works fine.
```
import openai
from langchain.llm…
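# (The example above is truncated. A minimal sketch of the same pattern
#  against vLLM's OpenAI-compatible HTTP API; the port, model name, and
#  server command below are assumptions, not taken from the original post.)
import json
import urllib.request

# Assumes the server was started with something like:
#   python -m vllm.entrypoints.openai.api_server --model <your-model>
base_url = "http://localhost:8000/v1"
payload = {
    "model": "my-local-model",  # hypothetical served model name
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    f"{base_url}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer EMPTY"},  # vLLM accepts a dummy key
)
# urllib.request.urlopen(req) would return the chat completion; any
# OpenAI-style client pointed at base_url works the same way.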
-
Thanks a lot to you guys for this project! It makes it very easy to add AI functionality to existing apps.
But I've noticed that there are Azure and OpenAI connectors for audi…