-
# Studying KoAlpaca - 2 | HK Playground
KoAlpaca: the starting point of Korean LLMs!
[https://zayunsna.github.io/blog/2023-08-08-koalpaca2/](https://zayunsna.github.io/blog/2023-08-08-koalpaca2/)
-
Does CUDA have to be newer than 11.6?
My environment has CUDA 11.2, and fine-tuning fails with: ImportError: cannot import name 'skip_init' from 'torch.nn.utils'
Is the skip_init function only available in torch 2.0?
Could someone help me with this? Many thanks!
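For what it's worth, `torch.nn.utils.skip_init` was added in PyTorch 1.10, so this `ImportError` tracks the PyTorch version rather than the CUDA version. A minimal sketch of guarding the import (the fallback behaviour is an assumption, not part of the original report):

```python
import torch

# torch.nn.utils.skip_init was added in PyTorch 1.10, so the import
# fails on older builds regardless of the installed CUDA toolkit.
try:
    from torch.nn.utils import skip_init
except ImportError:
    skip_init = None  # fall back to constructing modules normally

print(torch.__version__, "skip_init available:", skip_init is not None)
```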
-
### 📚 The doc issue
```python
from lmdeploy.messages import PytorchEngineConfig
from lmdeploy.pytorch.engine.engine import Engine
adapters = {'adapter0':'/root/.cache/huggingface/hub/models--tloen…
```
-
```
(MagicQuill) D:\MagicQuill>pip install gradio_magicquill-0.0.1-py3-none-any.whl
Processing d:\magicquill\gradio_magicquill-0.0.1-py3-none-any.whl
Requirement already satisfied: gradio=4.0 in c:\user…
```
-
Hey, where can I find the models for this?
-
Thanks for your work!
I'm trying to reproduce the GSM8K results within this project. I simply removed the code that transfers models to PEFT to achieve this. However, I can't reproduce the results on LLaM…
-
So I'm attempting to run the DPO LoRA script and I'm getting this error:
```
RuntimeError: The size of tensor a (0) must match the size of tensor b (4096) at non-singleton dimension 1
```
... wh…
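For context, this broadcast error can be reproduced in isolation: a tensor with a zero-sized dimension cannot broadcast against one of size 4096, which often points at a weight or adapter that ended up empty. A minimal sketch unrelated to the actual DPO script (the tensor roles in the comments are assumptions):

```python
import torch

# A zero-sized dimension cannot broadcast against 4096; this raises
# exactly the error message quoted above.
a = torch.zeros(2, 0)     # e.g. an adapter weight that was never loaded
b = torch.zeros(2, 4096)  # e.g. a hidden-state tensor
try:
    a + b
except RuntimeError as e:
    print(e)
```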
-
```python
model = AutoModelForCausalLM.from_pretrained(
    args.model_name_or_path,
    device_map=device_map,
    load_in_4bit=True,
    torch_dtype=torch.float16,
    trust_r…
```
-
### System Info
File "/home/mukuro/projects/LLaMA-Factory/src/llamafactory/model/adapter.py", line 299, in init_adapter
model = _setup_lora_tuning(
^^^^^^^^^^^^^^^^^^^
File "/hom…