-
### Description of the bug:
When I use `gemini-1.5-flash` or `gemini-1.5-pro`, I have no problem asking for a JSON response, like so:
```python
model = genai.GenerativeModel("gemini-1.5-flash",system_ins…
```
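For context, with the `google-generativeai` SDK the gemini-1.5 models can also be asked for JSON via `generation_config` (the `response_mime_type` field). The sketch below builds that config and shows the parsing step offline; the actual network call is commented out, and the sample reply is a stand-in:

```python
import json

# Sketch (assumes the google-generativeai SDK): for gemini-1.5 models,
# JSON output can be requested via generation_config rather than prompt text.
generation_config = {
    "response_mime_type": "application/json",  # ask the model for raw JSON
    "temperature": 0.0,
}

# model = genai.GenerativeModel("gemini-1.5-flash",
#                               generation_config=generation_config)
# reply = model.generate_content("List three colors as a JSON array.")

# Offline stand-in for reply.text, to show the parsing step:
reply_text = '["red", "green", "blue"]'
colors = json.loads(reply_text)
print(colors)
```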
-
## **Implement Conversation History-Aware RAG Solution 🚀**
### **Project Overview**
We are looking to enhance our RAG (Retrieval-Augmented Generation) system with conversation history awareness. …
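A minimal sketch of the intended behavior (all names below are illustrative, not part of the existing system): before retrieval, recent conversation turns are folded into the query so that follow-up questions resolve against earlier context.

```python
from collections import deque

def build_retrieval_query(history, question, max_turns=3):
    """Fold the last few (role, text) turns into the retrieval query."""
    recent = deque(history, maxlen=max_turns)
    context = " ".join(f"{role}: {text}" for role, text in recent)
    return f"{context} user: {question}".strip()

# Hypothetical conversation: the follow-up alone would retrieve nothing useful.
history = [("user", "Tell me about the Atlas database."),
           ("assistant", "Atlas is our internal vector store.")]
query = build_retrieval_query(history, "How is it backed up?")
print(query)
```

A production version would typically have an LLM rewrite the history into a standalone question rather than concatenating it, but the retrieval contract is the same.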
-
### Feature request
Implement support for a pipeline that can take both an image and text as inputs and produce a text output. This would be particularly useful for multi-modal tasks …
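As an illustration of the requested interface (every name below is hypothetical), such a pipeline could accept an image plus a text prompt and return text; a real implementation would wrap a vision-language model behind the `backend` callable:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ImageTextToTextPipeline:
    # In a real implementation this would invoke a vision-language model.
    backend: Callable[[bytes, str], str]

    def __call__(self, image: bytes, text: str) -> str:
        return self.backend(image, text)

# Dummy backend standing in for a model call:
pipe = ImageTextToTextPipeline(backend=lambda img, txt: f"answer to: {txt}")
print(pipe(b"\x89PNG...", "What is in the picture?"))
```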
-
Congratulations on this wonderful work!
The README suggests preparing `test_q.json` and `test_a.json` for evaluation on the MVSD-QA dataset, but on the official website of the MVSD-QA dataset, I c…
-
Fantastic job!
I am wondering how to generate questions with a given answer using the pretrained model (either prepend or highlight), i.e.,
```
nlp = pipeline("question-generation", model="valhalla…
```
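For reference, a sketch of the two answer-aware input formats as I understand them for this family of models — treat the exact format strings as assumptions to be checked against the model cards:

```python
def prepend_format(context: str, answer: str) -> str:
    # "prepend" style: answer stated before the context
    return f"answer: {answer}  context: {context}"

def highlight_format(context: str, answer: str) -> str:
    # "highlight" style: answer span wrapped in <hl> tokens inside the context
    highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"

ctx = "Python was created by Guido van Rossum."
print(prepend_format(ctx, "Guido van Rossum"))
print(highlight_format(ctx, "Guido van Rossum"))
```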
-
Does anyone know how to steer data generation through the prompt? I need specific sentence structures.
I tried to reconfigure the prompt itself, but it did not change the sentence structure of the generated data. Do…
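One approach worth trying (a generic sketch, not specific to any framework): put explicit few-shot examples of the target sentence structure into the prompt rather than relying on instructions alone, since models follow demonstrated patterns more reliably than described ones.

```python
# Hypothetical few-shot template; the examples pin down the structure.
TEMPLATE = """Generate sentences of the form "<subject> <verb> <object>."

Examples:
The cat chased the mouse.
The engineer fixed the bug.

Now generate {n} more sentences in exactly this structure:"""

def build_prompt(n: int) -> str:
    return TEMPLATE.format(n=n)

print(build_prompt(5))
```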
-
### System Info
tgi-gaudi 2.0.4
Used the docker compose YAML below to launch tgi-gaudi, serving the **llama3.1-70B-instruct** model with:
```
--top_k 10
--max_new_tokens 8192
--temperature 0.01
--top_p 0.95
```
…
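For reference, the same sampling flags can also be supplied per request in the JSON body of TGI's `/generate` endpoint (the host and prompt below are placeholders):

```python
import json

# Per-request parameters override the server-side defaults set via flags.
payload = {
    "inputs": "Explain speculative decoding in one sentence.",
    "parameters": {
        "top_k": 10,
        "top_p": 0.95,
        "temperature": 0.01,
        "max_new_tokens": 8192,
    },
}
body = json.dumps(payload)
# requests.post("http://localhost:8080/generate", data=body,
#               headers={"Content-Type": "application/json"})
print(body)
```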
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature Description
# Description:
Implement a RAG-based URL Analyzer for Retro to allow users to input artic…
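A sketch of what the ingestion step of such an analyzer might look like (all names and sizes are illustrative): strip markup from the fetched page, then split the text into overlapping chunks for embedding and retrieval.

```python
import re

def chunk_text(text: str, size: int = 200, overlap: int = 50):
    """Split text into word-count chunks with a fixed overlap."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

# Stand-in for a fetched article body:
html = "<p>Retro could summarize articles.</p>" * 100
plain = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping for the sketch
chunks = chunk_text(plain)
print(len(chunks), "chunks")
```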
-
We propose to integrate a chatbot into the IMAGINE - AI website to enhance user interaction and support. The chatbot will provide users with a more intuitive and engaging experience by assisting them …
-
**Describe the bug**
I want to use local LLMs to evaluate my RAG app. I have tried Ollama and HuggingFace models, but neither of them works.
Ragas version: 0.1.11
Python version: 3.11.3
**…