-
How can we fine-tune for the extended blocks?
-
Hi!
1) Please add an EXAMPLE of LoRA training with a dataset, and an EXAMPLE of using the trained LoRA. Thanks a lot.
2) Will train.py train only the LoRA?
3) What is the size of the LoRA, and how many epochs should it be trained for?
4) H…
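Independent of this repo's train.py (which the questions above ask about), the arithmetic behind LoRA itself can be sketched in a few lines. The shapes, names (`W`, `A`, `B`, `r`, `alpha`), and values below are illustrative assumptions following the LoRA paper's notation, not taken from any script here:

```python
import numpy as np

# Illustrative dimensions: rank r is much smaller than the weight's dims.
d_out, d_in, r, alpha = 8, 8, 2, 4

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

# Only A and B receive gradients during training; the effective weight is:
W_eff = W + (alpha / r) * (B @ A)

# Because B starts at zero, the adapted model initially matches the base model:
assert np.allclose(W_eff, W)
```

This also explains why a LoRA checkpoint is small: only `A` and `B` (rank-`r` factors) are saved, not the full `W`.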
-
Hi, thanks for sharing the code. I wonder how we can obtain the tokenizer. Is it also from GPT-2: https://huggingface.co/openai-community/gpt2/tree/main?
From https://github.com/yaochenzhu/LLM4Rec/b…
-
# Prerequisites
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the [README.md](https://github.com/abetlen/llama-c…
-
### What happened?
Chat template formatting seems to be swapped for Mistral and Llama 2.
Llama 2 supports the `` token for system messages, while Mistral simply uses newlines.
Starting llama ser…
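As a rough sketch of the difference the report above describes (written from the commonly documented prompt formats, not from llama.cpp's actual template code): Llama-2-chat wraps the system message in a dedicated `<<SYS>>` block, while Mistral-Instruct has no system token at all and the system text is typically just prepended to the first user turn with newlines.

```python
# Assumed formats for illustration; verify against each model's card.

def llama2_prompt(system: str, user: str) -> str:
    # Llama-2-chat: system message inside <<SYS>> ... <</SYS>> tags.
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def mistral_prompt(system: str, user: str) -> str:
    # Mistral-Instruct: no system token; system text merged into the user turn.
    return f"<s>[INST] {system}\n\n{user} [/INST]"

print(llama2_prompt("Be brief.", "Hi"))
print(mistral_prompt("Be brief.", "Hi"))
```

If a server applies the Mistral form to a Llama 2 model (or vice versa), the system instructions are silently formatted as tokens the model was never trained on, which matches the "swapped" behaviour reported here.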
-
### 🚀 The feature, motivation and pitch
I am able to run the training with FSDP, but when I add the `--flop_counter` flag it gives the following issue. Could someone take a look at this issue? …
-
### Which Cloudflare product(s) does this pertain to?
Wrangler
### What version(s) of the tool(s) are you using?
Wrangler 3.72.2
### What version of Node are you using?
16.15.1
### W…
-
While running inference after **merging LoRA weights** with the following script:
```shell
!python -m src.serve.cli \
    --model-path /kaggle/working/Phi3-Vision-Finetune/output \
    --image-file /kaggle/work…
```
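Numerically, "merging LoRA weights" just folds the low-rank update into the base weight so the adapter disappears; the merge in a repo like this is presumably done by a library helper (e.g. peft's `merge_and_unload`), but the underlying arithmetic is a sketch like the following, with all names and shapes assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, alpha = 6, 2, 4
W = rng.normal(size=(d, d))   # base weight
A = rng.normal(size=(r, d))   # LoRA down-projection
B = rng.normal(size=(d, r))   # LoRA up-projection

# After merging, a single weight matrix replaces base + adapter:
W_merged = W + (alpha / r) * (B @ A)

x = rng.normal(size=d)
# A forward pass through the merged weight equals the base path plus the
# adapter path applied separately:
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

A correct merge is therefore a no-op for outputs; if a merged model produces garbage, the usual suspects are a mismatched base checkpoint, a double-applied scaling factor, or a dtype/quantization change during the merge rather than the merge math itself.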
-
### What is the issue?
Hello, I tried running my fine-tuned model, which is based on the Llama 3.1 8B Instruct model.
It looks like it is giving random output, as you can check below:
I double che…
-
It would be great to see these models work!
> NotImplementedError: Unsloth: /srv/models/Phi-3-medium-4k-instruct not supported yet!
> Make an issue to https://github.com/unslothai/unsloth!
Done…