-
**Is your feature request related to a problem? Please describe.**
Reading documents can be a bit boring, especially if the document is 10-12 pages long.
**Describe the solution you'd like**
to get the idea about…
-
### Describe your problem
The LLM first replies that the relevant knowledge is missing from the knowledge base, but then it still goes on to answer with irrelevant content.
![微信截图_20241124154836](https:/…
-
### Describe your problem
I am looking for a way to query across multiple document chunks. I have a CSV sample (
[employee_data.csv](https://github.com/user-attachments/files/16979591/employee_data.…
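As a rough sketch (independent of any particular RAG framework; the file name and columns of employee_data.csv are assumed for illustration), one way to let a query draw on several chunks at once is to treat each CSV row as its own chunk, retrieve the top-k rows, and concatenate them into a single context:

```python
# Minimal sketch: per-row chunking of a CSV plus naive top-k retrieval, so that the
# final context handed to the LLM spans multiple chunks. The keyword-overlap scoring
# is only a stand-in for a real embedding retriever.
import csv

def load_chunks(path: str) -> list[str]:
    with open(path, newline="", encoding="utf-8") as f:
        return [" | ".join(f"{k}: {v}" for k, v in row.items()) for row in csv.DictReader(f)]

def top_k_chunks(query: str, chunks: list[str], k: int = 3) -> list[str]:
    terms = set(query.lower().split())
    return sorted(chunks, key=lambda c: -len(terms & set(c.lower().split())))[:k]

chunks = load_chunks("employee_data.csv")  # assumed local copy of the attachment
context = "\n".join(top_k_chunks("Which employees joined after 2020?", chunks))
# `context` now spans several rows/chunks and can be inserted into the QA prompt.
```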
-
### Your current environment
vllm==0.2.7
### How would you like to use vllm
Is extractive question answering possible with vLLM batched inference? Here is an example: https://yonigottesman.github.i…
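One possible sketch (an assumption, not an officially supported vLLM QA mode): phrase each example as a prompt that asks the model to copy the answer span verbatim, and let vLLM batch all prompts in a single `generate()` call. The model name and prompt template below are placeholders.

```python
# Hedged sketch of extractive-style QA via batched generation in vLLM.
from vllm import LLM, SamplingParams

examples = [
    ("The Eiffel Tower was completed in 1889.", "When was the Eiffel Tower completed?"),
    ("Marie Curie won two Nobel Prizes.", "How many Nobel Prizes did Marie Curie win?"),
]

prompts = [
    f"Context: {ctx}\nQuestion: {q}\nAnswer with an exact span copied from the context:"
    for ctx, q in examples
]

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # placeholder model
params = SamplingParams(temperature=0.0, max_tokens=32)

# vLLM batches all prompts internally in one generate() call.
for output in llm.generate(prompts, params):
    print(output.outputs[0].text.strip())
```

Note this is generative extraction, not classifier-style start/end span prediction.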
-
### System Info
- `transformers` version: 4.44.2
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate vers…
-
![image](https://github.com/user-attachments/assets/c1b01db9-9139-49eb-8f2a-338524b5288f)
-
### Title
A quantum-inspired sentiment representation model
### Team Name
Noah
### Email
202311016@daiict.ac.in
### Team Member 1 Name
Harsh Vyas
### Team Member 1 Id
20231…
-
# Task Name
Answering a spoken question given a spoken document
## Task Objective
The goal of QA is to find the answer span in a spoken document given a spoken question. The answer span is de…
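For illustration only (the actual span definition is cut off above), a minimal sketch of representing an answer span by its start/end times in the spoken document and scoring a prediction by temporal overlap:

```python
# Hedged sketch: an answer span as a time interval in the audio plus its transcript,
# with intersection-over-union of the time intervals as a simple overlap score.
from dataclasses import dataclass

@dataclass
class AnswerSpan:
    start_sec: float  # span start time in the spoken document
    end_sec: float    # span end time in the spoken document
    text: str         # transcript of the span

def span_iou(pred: AnswerSpan, gold: AnswerSpan) -> float:
    """Intersection-over-union of the predicted and gold time intervals."""
    inter = max(0.0, min(pred.end_sec, gold.end_sec) - max(pred.start_sec, gold.start_sec))
    union = max(pred.end_sec, gold.end_sec) - min(pred.start_sec, gold.start_sec)
    return inter / union if union > 0 else 0.0

gold = AnswerSpan(12.4, 15.1, "in eighteen eighty-nine")
pred = AnswerSpan(12.0, 15.0, "eighteen eighty-nine")
print(f"overlap: {span_iou(pred, gold):.2f}")
```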
-
Hello, I have now implemented the functionality of generating a knowledge graph from a provided file for question answering. I have a question now: if I want to combine the generated knowledge graph w…
-
When I use this code, I encounter an error that I can't resolve.
```python
import evaluate  # needed for evaluate.load; not shown in the original snippet

metric = evaluate.load(
    "squad_v2" if data_args.version_2_with_negative else "squad",
    cache_dir=model_args.cache_dir,
)
```
…
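In case it helps, here is a minimal, self-contained sketch (made-up ids and texts, not the original training script) of how the `squad_v2` metric is normally loaded and computed with `evaluate`:

```python
import evaluate

metric = evaluate.load("squad_v2")

# squad_v2 expects a no_answer_probability for each prediction in addition to the text.
predictions = [
    {"id": "q1", "prediction_text": "1889", "no_answer_probability": 0.0},
]
references = [
    {"id": "q1", "answers": {"text": ["1889"], "answer_start": [42]}},
]

print(metric.compute(predictions=predictions, references=references))
```

If this standalone version also fails, the issue is probably environmental (e.g. the `evaluate`/`datasets` installation or the cache directory) rather than the call itself.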