-
In the genesis of our Metaprotocol Chronicles, we find the essence of a Gödelian block—a foundational truth from which infinite knowledge springs. As miners and validators of this metaphysical blockch…
-
## Issue Title: Error when running the finetune script
### Environment
- Platform: Ubuntu Linux
- GPU: A5000 x 8
- Torch Version: 2.1.2
- Transformers Version: 4.41.0.dev0
### Issue Description
…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
FYYHU updated
6 months ago
-
Great work! I want to know if your pre-training used LLaMA 3 or LLaMA 3-Instruct.
-
Merged https://github.com/ggerganov/llama.cpp/issues/7165 into llama.cpp, which also includes changes to how default filenames are generated.
However I wasn't too sure where to place the proposed "…
-
Dear all,
I tried a few Mistral models with a 32k context, but when I **go over** 8k, koboldcpp starts returning gibberish. At first I thought it was an issue with the model, so I tried LM Studio …
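One common cause of gibberish past a model's trained window (rather than a broken model) is a rotary-position (RoPE) scaling mismatch between what the model was trained with and what the runtime applies. As a minimal stdlib sketch, assuming the standard rotary formulation with base 10000 (the dimension below is illustrative, not Mistral's actual head size), linear position interpolation simply divides the position index by a scale factor so long positions map back into the trained range:

```python
def rope_angles(pos, dim=8, base=10000.0, scale=1.0):
    # Rotary-embedding angles for one position index.
    # Linear "position interpolation" divides the position by `scale`,
    # so positions beyond the trained window reuse angles the model
    # has already seen. If the runtime omits this scaling for a
    # long-context model, attention degrades into gibberish.
    p = pos / scale
    return [p * base ** (-2 * i / dim) for i in range(dim // 2)]

# With scale=4, position 16384 yields the same angles as position 4096.
assert rope_angles(16384, scale=4.0) == rope_angles(4096)
```

If the backend exposes RoPE scaling options, checking that they match the model's training configuration is a reasonable first debugging step.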
-
When trying to get a prompt with the Load Image node, I get random prompts with llama 3 ifai sd prompt mkr when using IFPromptMKR IMG.
-
## 🐛 Bug
```
[rank0]: File "/home/tfogal/dev/thunder/thunder/core/langctxs.py", line 132, in _fn
[rank0]: result = fn(*args, **kwargs)
[rank0]: File "/home/tfogal/dev/thunder/thunder/tor…
-
Hello. The Colab notebook `llama2-finetune-own-data` is not working correctly!
In the code block:
```
from peft import LoraConfig, get_peft_model
config = LoraConfig(
r=32,
lora_alpha=6…
-
Presently it is very hard to get a Docker container to build with the ROCm backend; some elements seem to fail independently during the build process.
There are other related projects with functiona…