-
!pip install transformers datasets
from transformers import GPT2Tokenizer, GPT2LMHeadModel, Trainer, TrainingArguments
from datasets import load_dataset, load_metric  # note: load_metric is deprecated in recent `datasets` releases (use the separate `evaluate` package)
tokenizer = GPT2Tokenizer.from_…
-
This is a thread for Carolina to summarize her research on RAG.
The purpose is to share the information among project members.
-
### System Info
cargo version
cargo 1.80.1 (376290515 2024-07-16)
Haven't been able to run the Dockerfile to get more details.
I am trying to run the Docker container on CPU.
### Information
- [X] Docke…
-
Currently, the LODhtmlparser does not handle the discovery of `<script>` tags within text/HTML content, which may contain references to `application/ld+json` or `text/turtle` data. To improve the parser's c…
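As an illustration only (not LODhtmlparser's actual API — the class name and the MIME-type set below are assumptions), the discovery step can be sketched with Python's stdlib `html.parser`, which delivers `<script>` bodies through `handle_data`:

```python
import json
from html.parser import HTMLParser

class LinkedDataScriptParser(HTMLParser):
    """Collect the bodies of <script> tags whose type is a linked-data MIME type."""

    LD_TYPES = {"application/ld+json", "text/turtle"}  # assumed set of types of interest

    def __init__(self):
        super().__init__()
        self._current_type = None   # MIME type of the script tag we are inside, if any
        self.blocks = []            # (mime_type, raw_content) pairs found so far

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            mime = dict(attrs).get("type", "").split(";")[0].strip().lower()
            if mime in self.LD_TYPES:
                self._current_type = mime

    def handle_endtag(self, tag):
        if tag == "script":
            self._current_type = None

    def handle_data(self, data):
        # Only keep non-empty data that sits inside a matching <script> tag.
        if self._current_type is not None and data.strip():
            self.blocks.append((self._current_type, data.strip()))

html_doc = """
<html><head>
<script type="application/ld+json">{"@type": "Person", "name": "Ada"}</script>
</head><body>hello</body></html>
"""

parser = LinkedDataScriptParser()
parser.feed(html_doc)
for mime, content in parser.blocks:
    print(mime, json.loads(content) if mime == "application/ld+json" else content)
```

Because `HTMLParser` treats script content as raw data, no external HTML library is needed for this discovery pass; the extracted blocks can then be handed to the existing JSON-LD or Turtle parsing paths.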
-
[ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Your Question**
How can I customize data generation response of Data G…
-
Using the v0.8 version of the [ChatQnA example](https://github.com/opea-project/GenAIExamples/blob/v0.8/ChatQnA/docker/gaudi/compose.yaml), the tgi service fails its health check.
Environment:
- OS: ub…
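When the health check fails, one first debugging step is to query the TGI health endpoint directly from the host. A sketch (the host port 8008 is an assumption — use whatever port compose.yaml maps for the tgi service):

```shell
# Returns HTTP 200 once the model server is up and ready; anything else
# (connection refused, 5xx) means TGI is still loading or has crashed.
curl -v http://localhost:8008/health

# If the endpoint is unreachable, the container logs usually show why:
docker logs tgi-service   # container name is an assumption; check `docker ps`
```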
-
!pip install transformers datasets
from transformers import GPT2Tokenizer, GPT2LMHeadModel, Trainer, TrainingArguments
from datasets import load_dataset, load_metric  # note: load_metric is deprecated in recent `datasets` releases (use the separate `evaluate` package)
from transformers import GPT2LMH…
-
Determine the best chunk size for our application.
The application should be able to keep track of individuals, so that RAG can build an appropriate prompt to feed to the LLM, and the application can gi…
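As a starting point for that experiment, here is a minimal fixed-size chunker with overlap (a sketch only — the `chunk_size` and `overlap` values are placeholders to be tuned against retrieval quality, not recommendations):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping fixed-size character chunks.

    Overlap keeps a mention of an individual near the chunk boundary from
    being cut off from its surrounding context.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Alice met Bob in 2021. " * 40   # toy document mentioning individuals
chunks = chunk_text(doc, chunk_size=120, overlap=30)
print(len(chunks), len(chunks[0]))      # prints "10 120"
```

Sweeping `chunk_size` over a few values and measuring retrieval hit rate on known person-related queries is one simple way to pick the best setting for the application.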
-
### System Info
- `transformers` version: 4.45.0.dev0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.4
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Acceler…
-
### System Info
Hi everyone,
I was trying to run a quantized Llama with dockerized TGI and ran into issues. First, I tried AWQ with [Llama 3.1 70B](https://huggingface.co/hugging-quants/Meta-Lla…
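For reference, a typical dockerized TGI launch for an AWQ model looks roughly like the following sketch (the image tag, port mapping, and `<awq-model-id>` placeholder are assumptions, not taken from this report):

```shell
# Sketch only: substitute a real AWQ-quantized model id for <awq-model-id>.
docker run --gpus all --shm-size 1g -p 8080:80 \
  -v "$PWD/data:/data" \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id <awq-model-id> \
  --quantize awq
```

Mismatches between the quantization format baked into the checkpoint and the `--quantize` flag are a common source of startup failures, so the container logs right after launch are usually the first place to look.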