-
When I ran prompt tuning, the weight file shrank from 3.5 GB to 800 MB. Is that normal?
ccsvd updated 3 months ago
-
Clarification: We don't need this module to identify new jargon terms, but it needs to:
1. identify terms/phrases with a small edit distance from the jargon terms we have, and
2. determine if a jargon te…
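The edit-distance matching described in point 1 can be sketched as follows. This is a minimal illustration, assuming a plain Levenshtein distance and a hypothetical `near_jargon` helper; the actual module may use a different metric or a fuzzy-matching library.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def near_jargon(term: str, jargon: set[str], max_dist: int = 2) -> list[str]:
    """Return known jargon terms within max_dist edits of `term`
    (hypothetical helper name, case-insensitive comparison)."""
    return [j for j in jargon if edit_distance(term.lower(), j.lower()) <= max_dist]

matches = near_jargon("tokenisation", {"tokenization", "embedding"})
print(matches)  # ['tokenization']
```

For large jargon lists, a BK-tree or trigram index would avoid comparing every term, but the brute-force loop above is enough to make the requirement concrete.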
-
If it is just an implementation of existing methods, and therefore not novel, why was the p-tuning paper published at a top CCF-A conference and why is it so widely cited?
So I wonder what the core difference is bet…
-
Hi,
We want to make a leaderboard submission with our fine-tuned model, which is used in www.jdoodle.com.
It seems that for some of the HumanEval questions, our model is getting the logic right, but we…
-
Hi,
I have tried to run the notebook example you published (without editing it) in Colab, but it doesn't work.
I get the following error:
```
RuntimeError Traceback…
-
Hi,
Very interesting work!
In your code, I wonder why the text encoder takes the composition of learnable and non-learnable embeddings of [X, X, X, ...] as input. Is it the conventional setting …
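The composition being asked about can be sketched like this. This is a minimal illustration assuming a CoOp-style setup, where learnable context vectors stand in for the [X] placeholders and are concatenated with frozen token embeddings before the text encoder; the `PromptComposer` name and shapes are my assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class PromptComposer(nn.Module):
    """Sketch: prepend learnable context vectors (the [X, X, X, ...]
    placeholders) to frozen, non-learnable token embeddings.
    Hypothetical module, mirroring the setup in the question."""
    def __init__(self, n_ctx: int, embed_dim: int):
        super().__init__()
        # Learnable part: optimized during prompt tuning.
        self.ctx = nn.Parameter(torch.randn(n_ctx, embed_dim) * 0.02)

    def forward(self, frozen_embeds: torch.Tensor) -> torch.Tensor:
        # frozen_embeds: (batch, seq_len, dim) — e.g. class-name tokens
        # produced by a frozen embedding layer.
        batch = frozen_embeds.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(batch, -1, -1)
        # The composition fed to the text encoder: learnable + frozen.
        return torch.cat([ctx, frozen_embeds], dim=1)

composer = PromptComposer(n_ctx=4, embed_dim=8)
out = composer(torch.zeros(2, 3, 8))
print(out.shape)  # torch.Size([2, 7, 8])
```

Only `self.ctx` receives gradients here; the frozen embeddings keep the tokenizer's semantics anchored, which is one common motivation for mixing learnable and non-learnable parts.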
-
Hello, adrianzzk. I have a question about how to use my own point prompts.
I don't see any description of, or dataset preparation for, prompts.
So I want to ask: can I use my point prompts for fine-tun…
-
This is a long-standing issue.
Parameters for tuning:
- System message
- Document splitter chunk size
- Document splitter overlap size
- RAG max results
- RAG max rank
- Prompt template
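The tuning parameters listed above could be grouped into a single config object. A minimal sketch, assuming illustrative placeholder defaults and a hypothetical `RagTuningConfig` name; none of the values below are recommendations.

```python
from dataclasses import dataclass

@dataclass
class RagTuningConfig:
    """Hypothetical container for the tunable RAG parameters above.
    All defaults are illustrative placeholders."""
    system_message: str = "You are a helpful assistant."
    chunk_size: int = 512        # document splitter chunk size
    chunk_overlap: int = 64      # document splitter overlap size
    max_results: int = 5         # RAG max results inserted into the prompt
    max_rank: int = 20           # RAG max rank considered before filtering
    prompt_template: str = "Context:\n{context}\n\nQuestion: {question}"

    def __post_init__(self):
        # A splitter with overlap >= chunk size would never advance.
        if self.chunk_overlap >= self.chunk_size:
            raise ValueError("chunk_overlap must be smaller than chunk_size")

cfg = RagTuningConfig(chunk_size=1024, chunk_overlap=128)
```

Centralizing the knobs this way makes sweeps straightforward: each trial constructs one config, and the validation in `__post_init__` rejects incoherent combinations early.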
-
# Background
Currently, we have no early stopping added to the prompt tuning module, but we should! This issue covers adding it.
### Acceptance Criteria
- [ ] Prompt tuning can be run with early …
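One way the requested behavior could look is a patience-based stopper checked after each validation pass. This is a minimal sketch with a hypothetical `EarlyStopper` helper, not the module's actual API.

```python
class EarlyStopper:
    """Stop prompt tuning once validation loss has failed to improve
    for `patience` consecutive evaluations (hypothetical helper)."""
    def __init__(self, patience: int = 3, min_delta: float = 0.0):
        self.patience = patience
        self.min_delta = min_delta   # minimum improvement that counts
        self.best = float("inf")
        self.bad_evals = 0

    def step(self, val_loss: float) -> bool:
        """Record one evaluation; return True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_evals = 0
        else:
            self.bad_evals += 1
        return self.bad_evals >= self.patience

stopper = EarlyStopper(patience=2)
stops = [stopper.step(loss) for loss in [1.0, 0.8, 0.85, 0.9]]
print(stops)  # [False, False, False, True]
```

In a training loop this would sit right after the validation step: `if stopper.step(val_loss): break`, optionally restoring the best soft-prompt checkpoint before exiting.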
-
### System Info
Tensorrt-LLM commit: 2a115dae84f13daaa54727534daa837c534eceb4
TensorRT-LLM version: 0.11.0.dev2024061800
### Who can help?
_No response_
### Information
- [X] The official exam…