-
I tried the config from the documentation:
```
llm:
  api_type: 'openrouter'
  base_url: 'https://openrouter.ai/api/v1'
  api_key: 'sk...'
  model: meta-llama/llama-3-70b-instruct:nitro
```
Then I got this issu…
-
Retrieval-augmented generation (RAG) is a technique for enriching LLMs with your own data. It has become very popular, as it lowers the barrier to enriching input in LLM apps and allows for better access co…
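The flow described above can be sketched in a few lines: retrieve the document most relevant to the query, then prepend it to the prompt sent to the LLM. This is a toy illustration only; the corpus, the word-overlap scoring, and the prompt template are made-up examples, not any particular library's API (real RAG systems use embedding similarity, not word overlap).

```python
import re

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus document sharing the most words with the query.

    Stand-in for embedding-based retrieval in a real RAG pipeline.
    """
    q_words = set(re.findall(r"\w+", query.lower()))
    return max(corpus, key=lambda doc: len(q_words & set(re.findall(r"\w+", doc.lower()))))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user query with the retrieved context before calling the LLM."""
    context = retrieve(query, corpus)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

# Hypothetical two-document corpus for demonstration.
corpus = [
    "Our refund policy allows returns within 30 days.",
    "Support is available Monday to Friday, 9am to 5pm.",
]
print(build_prompt("What is the refund policy?", corpus))
```

The augmented prompt, rather than the raw query, is what gets sent to the model, which is what lets the LLM answer from data it was never trained on.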
-
**Is your feature request related to a problem? Please describe.**
Get better results from user input with interpreted responses.
**Describe the solution you'd like**
Integrate an LLM in Alep…
-
### Issue Description
Currently, the "Visuals" section has the tutorials "Long Running Callbacks", "Real-Time Data Visualization with Multithreading", and "Managing Multiple Users". The "Fundamentals" …
-
### Current Behavior
Using the default ONNX model,
**Score function**
```
def get_score(a, b):
    return evaluation.evaluation(
        {
            'question': a
        },
        {
…
-
Hey, I am trying to run a RAG pipeline privately with Ollama.
Here are the config details:
1. LlamaIndex: ^0.3.14
2. Ollama for Windows
3. LLM: Llama3
4. Embedding model: Hugging Face model XENOVA_AL…
-
### Bug Description
We are getting the following error when we use dense_x with Elasticsearch, especially when using 70+ pages:
```
raise self._make_status_error_from_response(err.response) from…
-
Thank you for making this research available on GitHub.
"Our design exploits the fact that while LLMs are known to have been trained on the raw text of American case law, which is in the public do…
-
- [ ] Metadata extraction - HM
- [ ] Answering introductory questions - GK
- [x] Answering Open Chat questions - moved to genai-apps/aggrag#36
-
https://www.anyscale.com/blog/a-comprehensive-guide-for-building-rag-based-llm-applications-part-1
Summary
Excited to share our production guide for building RAG-based LLM applications where we brid…