-
## Description
As a user of prompt tuning, I want to be able to leverage multiple GPUs at train time!
## Discussion
Extends https://github.com/caikit/caikit-nlp/issues/175 to leverage PyTorch…
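A minimal sketch of what data-parallel prompt tuning could look like with PyTorch's `DistributedDataParallel`. Everything here is an illustrative assumption, not caikit-nlp code: `PromptTunedModel` is a toy stand-in where only the soft-prompt embedding is trainable, and the single-process gloo setup exists only so the sketch runs on CPU (a real run would launch one rank per GPU via `torchrun` with the nccl backend):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

class PromptTunedModel(torch.nn.Module):
    """Toy stand-in for a prompt-tuned LM: frozen backbone + trainable soft prompt."""

    def __init__(self, vocab=100, dim=16, prompt_len=8):
        super().__init__()
        self.backbone = torch.nn.Linear(dim, vocab)
        self.backbone.requires_grad_(False)  # the "pretrained LM" stays frozen
        self.soft_prompt = torch.nn.Parameter(torch.randn(prompt_len, dim))

    def forward(self, x):
        # prepend the soft prompt to every sequence in the batch
        prompt = self.soft_prompt.unsqueeze(0).expand(x.size(0), -1, -1)
        return self.backbone(torch.cat([prompt, x], dim=1))

def main():
    # single-process "distributed" setup so the sketch runs on CPU
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    # DDP broadcasts the frozen weights and all-reduces only the
    # soft-prompt gradients across ranks
    model = DDP(PromptTunedModel())
    opt = torch.optim.AdamW(
        [p for p in model.parameters() if p.requires_grad], lr=1e-3
    )

    x = torch.randn(4, 5, 16)          # fake batch of embedded inputs
    loss = model(x).pow(2).mean()      # placeholder loss
    loss.backward()
    opt.step()

    dist.destroy_process_group()
    return loss.item()

if __name__ == "__main__":
    main()
```

The point of the sketch is that DDP only needs to synchronize the tiny soft-prompt gradient per step, so the communication cost of multi-GPU prompt tuning is small compared to full fine-tuning.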
-
Focus on evaluating the effectiveness of the whole framework against prompt injection attacks
in the context of an AI agent:
we have user, cam, lidar, and pos as perception inputs, a pretrained LLM as the brain, and command signa…
-
Context
**What are you trying to do, and how would you want to do it differently? Is it something you currently cannot do? Is this related to an issue/problem?**
**Answer:** Trying to model t…
-
Hi! I'm encountering an issue while tuning phi-3 on long sequences with batch sizes greater than 1. Below is the code to reproduce the problem:
**Working Code:**
```python
tokenized = tokenizer(
…
```
-
Reference:
1. Wu H., Shi X. Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Lo…
-
Hi!
When I queue an image for the first time, it takes significantly longer than subsequent requests. The issue seems related to the applied providers. It shows antelopev2 and buffalo_l in th…
-
I don't understand the conceptual usefulness of masking out the prompt.
I have seen that there is a comment in scripts/prepare_alpaca.py that says:
`mask_inputs: bool = False, # as in alpac…
-
Hello,
We successfully fine-tuned the Mistral7b_v0.3 Instruct model on a single GPU, but we encountered issues when trying to utilize multiple GPUs.
The successful fine-tuning with one GPU (A…
-
Hello
I want to learn about and contribute to this great work, so I ran the example to learn more about it, but when I run [prompt-tuning](https://github.com/bigscience-workshop/petals/blob/main/examples…
-
"And prompt-style training is performed on hundreds of tasks." Will the data for these hundreds of tasks be open-sourced?