-
**Link to the notebook**
[Fine-tune LLaMA 2 models on SageMaker JumpStart](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart-foundation-models/lla…
-
Hi!
Could you please provide guidelines on using the container, especially for Modules 2 and 3?
I see that you have provided instructions for predictions in the README, but not for fine-tuning…
-
After several unsuccessful attempts at fine-tuning, where the output was a still frame of noise or a green field, I followed the instructions and skipped ahead to inference to test the base model. It reacted…
-
## TL;DR
Proposes "instruction tuning", which combines the prompting idea used for zero-shot inference with GPT-3 and similar models with the pretrain-finetune paradigm. Instruction tuning is a training method that includes a natural-language description of the task in the input, with the intent of having the model learn how to solve a problem from its task description. As a result, zero-shot accuracy im…
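As a rough illustration (the field names and formatting function below are my own, not from the paper), an instruction-tuning example folds the task description directly into the model input:

```python
# Hypothetical instruction-tuning record; the schema is illustrative,
# not taken from the paper's actual datasets.
example = {
    "instruction": "Translate the following sentence into French.",
    "input": "The weather is nice today.",
    "output": "Il fait beau aujourd'hui.",
}

def format_example(ex):
    """Concatenate the task description and the task input into one
    prompt, so the model is trained to follow the instruction text."""
    prompt = f"{ex['instruction']}\n{ex['input']}\n"
    return prompt, ex["output"]

prompt, target = format_example(example)
```

At zero-shot time, the same template is applied to an unseen task, and the model is expected to generalize from the instructions it saw during training.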
-
### Feature request
The main feature request is a new `Trainer` subclass, similar to `Seq2SeqTrainer`, but suitable for decoder-only LMs.
### Motivation
`Seq2SeqTrainer` provides a great abstracti…
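A minimal sketch of the core difference such a subclass would have to handle (plain Python, not Hugging Face code): for a decoder-only LM, the prompt and the target share a single token sequence, so the loss must be masked over the prompt portion, conventionally with label id `-100`:

```python
# Sketch of label construction for decoder-only fine-tuning.
# Encoder-decoder trainers keep prompt and target separate; here they
# are concatenated, and prompt positions are excluded from the loss.
def build_labels(prompt_ids, target_ids, ignore_index=-100):
    input_ids = list(prompt_ids) + list(target_ids)
    labels = [ignore_index] * len(prompt_ids) + list(target_ids)
    return input_ids, labels

ids, labels = build_labels([10, 11, 12], [20, 21])
# ids    -> [10, 11, 12, 20, 21]
# labels -> [-100, -100, -100, 20, 21]
```

During evaluation, a decoder-only trainer would additionally need to generate from the prompt alone and strip the prompt tokens from the output before computing metrics, which is exactly the kind of bookkeeping `Seq2SeqTrainer` hides for encoder-decoder models.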
-
Hi!
Some of the current mod incompatibilities seem to be:
Body parts added by other mods (I will try to edit this or create a separate issue once I have the logs for this one).
Black screen wor…
-
What are the steps for tuning the controller? I only have a few minutes to tune it before the drone's battery dies, so some instructions would be really helpful.
Thank Y…
-
I am reproducing the result using the instructions provided in the README file.
I was able to train the base model and obtain AP of 0.6862, which matches what the paper reports. However, when I trie…
-
Hi guys, thanks for your great repo.
I want to ask some questions.
1. What is the similarity distribution of the model when I set temperature = 0.02? Previously, I saw you say that when temperature = 0.01, …
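For intuition (a standalone sketch, not the repo's actual code), the temperature rescales similarities before the softmax; at temperature = 0.02, even small gaps in cosine similarity produce a near-one-hot distribution:

```python
import math

def softmax_with_temperature(sims, temperature):
    """Softmax over similarity scores divided by a temperature.
    Smaller temperature -> sharper (peakier) distribution."""
    scaled = [s / temperature for s in sims]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

sims = [0.9, 0.8, 0.1]                       # cosine similarities
p_sharp = softmax_with_temperature(sims, 0.02)
p_soft = softmax_with_temperature(sims, 1.0)
# p_sharp puts almost all probability mass on the top similarity,
# while p_soft spreads mass much more evenly.
```

This is why very low temperatures like 0.01 or 0.02 make the contrastive loss focus almost entirely on the hardest (most similar) negatives.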
-
Based on the "WizardLM/WizardCoder-15B-V1.0" model, I used 78,533 samples for instruction fine-tuning. The dataset format is as follows:
![image](https://github.com/nlpxucan/WizardLM/assets…