-
- [ ] [LoRA Land: Fine-Tuned Open-Source LLMs that Outperform GPT-4 - Predibase - Predibase](https://predibase.com/blog/lora-land-fine-tuned-open-source-llms-that-outperform-gpt-4)
-
- [ ] [blog/starcoder2.md at main · huggingface/blog](https://github.com/huggingface/blog/blob/main/starcoder2.md?plain=1)
-
In this issue you can either:
- **Add papers** that you think are interesting to read and discuss (please stick to the format).
- **Vote**: use :+1: on comments.
-
Subscribe to this issue and stay notified about new [daily trending repos in Jupyter Notebook](https://github.com/trending/jupyter-notebook?since=daily).
-
[The format of the issue]
Paper name/title:
Paper link:
Code link:
-
I want to change the tokenizer so that it can handle Korean.
I would appreciate it if you could tell me how to change LLM_PATH and, additionally, which parts of the code should be modified.
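As a rough illustration of what swapping in a Korean tokenizer involves (this is a toy, pure-Python character-level sketch, not the repo's actual tokenizer API; the class name is hypothetical):

```python
class CharTokenizer:
    """Toy character-level tokenizer, a stand-in for a Korean-capable
    tokenizer (hypothetical illustration, not the repo's code)."""

    def __init__(self, corpus):
        # Build the vocabulary from unique characters, reserving id 0 for <unk>.
        chars = sorted(set("".join(corpus)))
        self.stoi = {c: i + 1 for i, c in enumerate(chars)}
        self.itos = {i: c for c, i in self.stoi.items()}

    def encode(self, text):
        # Unknown characters map to the <unk> id 0.
        return [self.stoi.get(c, 0) for c in text]

    def decode(self, ids):
        return "".join(self.itos.get(i, "?") for i in ids)


tok = CharTokenizer(["안녕하세요", "한국어"])
ids = tok.encode("안녕")
roundtrip = tok.decode(ids)
```

In practice one would instead load or train a subword tokenizer covering Hangul and point the model's tokenizer path at it; the sketch only shows the encode/decode contract a replacement must satisfy.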
-
I just followed the steps, but when I run the following code:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Efficient-Large-Model/Llama-3-VILA1.5-8B")
```
…
-
https://virtual2023.aclweb.org/paper_P2946.html
-
This issue is dedicated to summarizing papers I found related to adding a back-translation expander.