-
Will LongLoRA finetuning be supported in the future?
-
I'm trying to fine-tune BGE-M3 based on the README here: https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune
I originally started with the latest transformers version a few week…
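Before launching the finetune recipe from that README, one quick way to check that the installed FlagEmbedding/transformers combination is healthy is to load BGE-M3 and encode a placeholder sentence. This is only a hedged sanity-check sketch, not part of the README itself; the sentence is a dummy input.

```python
# Hedged sanity check: confirm the environment can load BGE-M3 at all
# before starting the finetuning run from the FlagEmbedding README.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)  # fp16 is typically used with a GPU
out = model.encode(["sanity-check sentence"], return_dense=True)
print(out["dense_vecs"].shape)  # BGE-M3 dense embeddings are 1024-dimensional
```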
-
# Studying KoAlpaca - 2 | HK Playground
KoAlpaca, the starting point for Korean-language LLMs!
[https://zayunsna.github.io/blog/2023-08-08-koalpaca2/](https://zayunsna.github.io/blog/2023-08-08-koalpaca2/)
-
`(ht240815) PS G:\project\ht\240815\LongWriter> python .\trans_web_demo.py
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████…
-
In [finetune.py](https://github.com/tloen/alpaca-lora/blob/main/finetune.py) there is the following section to support resuming from a checkpoint, but you may note that `resume_from_checkpoint` is set…
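The pattern being referenced looks roughly like the sketch below (a paraphrase, not a verbatim copy of finetune.py): the script first looks for a full `pytorch_model.bin` in the checkpoint directory, falls back to the LoRA-only `adapter_model.bin`, and in the fallback case unsets `resume_from_checkpoint` so the Hugging Face `Trainer` does not also try to restore its own training state.

```python
import os

import torch
from peft import set_peft_model_state_dict


def maybe_resume(model, resume_from_checkpoint):
    """Rough paraphrase of the checkpoint-resume logic in alpaca-lora's finetune.py."""
    if resume_from_checkpoint:
        # Prefer a full checkpoint saved by the Trainer, if one exists.
        checkpoint_name = os.path.join(resume_from_checkpoint, "pytorch_model.bin")
        if not os.path.exists(checkpoint_name):
            # Fall back to the LoRA-only adapter weights, and unset the flag so
            # the Trainer does not also attempt to resume its own state.
            checkpoint_name = os.path.join(resume_from_checkpoint, "adapter_model.bin")
            resume_from_checkpoint = False
        if os.path.exists(checkpoint_name):
            adapters_weights = torch.load(checkpoint_name, map_location="cpu")
            set_peft_model_state_dict(model, adapters_weights)
    return resume_from_checkpoint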
-
## Typology of Efficient Training
- Data & Model Parallel
  - Data Parallel
  - Tensor Parallel
  - Pipeline Parallel
  - Zero Redundancy Optimizer (ZeRO) (DeepSpeed, often works with CPU offloading; see the config sketch below)
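As an illustration of the ZeRO entry above, here is a hedged sketch of a DeepSpeed ZeRO-2 configuration with optimizer CPU offload, written as the Python dict that can be passed to `transformers.TrainingArguments(deepspeed=...)`. The batch-size and dtype settings are illustrative assumptions, not recommendations.

```python
# Illustrative DeepSpeed ZeRO stage-2 config with optimizer states offloaded to CPU.
# "auto" lets the HF Trainer integration fill in its own values.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                     # shard optimizer states and gradients across ranks
        "offload_optimizer": {          # push optimizer states to CPU RAM
            "device": "cpu",
            "pin_memory": True,
        },
        "overlap_comm": True,
    },
}

# Usage sketch with the HF Trainer:
# TrainingArguments(output_dir="out", deepspeed=ds_config, ...)
```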
-
Does CUDA have to be newer than 11.6?
My environment is CUDA 11.2, and finetuning fails with: ImportError: cannot import name 'skip_init' from 'torch.nn.utils'
Is the skip_init function only available on torch 2.0?
Could someone help me out with this? Many thanks!!!
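For what it's worth, `torch.nn.utils.skip_init` only exists in newer PyTorch releases (it was added around torch 1.10, so it is not tied to torch 2.0 specifically). On an older torch build, one workaround is a small compatibility shim like this hedged sketch, which simply falls back to normal (initialized) construction when the helper is missing:

```python
# Hedged compatibility sketch: fall back to ordinary module construction when
# torch.nn.utils.skip_init is unavailable in the installed PyTorch build.
try:
    from torch.nn.utils import skip_init
except ImportError:
    def skip_init(module_cls, *args, **kwargs):
        # Fallback: build the module normally. Weights do get initialized, so it
        # is slower, but the result is usable for loading a checkpoint afterwards.
        return module_cls(*args, **kwargs)

# usage: layer = skip_init(torch.nn.Linear, 1024, 1024)
```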
-
# Study notes on parameter-efficient finetuning techniques
Traditional finetuning trains the parameters of a large language model, typically together with a shallower domain-specific network stacked on top. However, this ap…
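To make the contrast with full finetuning concrete, here is a minimal hedged sketch of one parameter-efficient approach (LoRA via the `peft` library); the base model and hyperparameters are illustrative assumptions, not values taken from these notes.

```python
# Minimal LoRA sketch with the `peft` library: only the injected low-rank
# adapter matrices are trainable, while the base model weights stay frozen.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model
lora_cfg = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of parameters are trainable
```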
-
### 📚 The doc issue
```python
from lmdeploy.messages import PytorchEngineConfig
from lmdeploy.pytorch.engine.engine import Engine
adapters = {'adapter0':'/root/.cache/huggingface/hub/models--tloen…
```
-
### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.11.0
- Huggingface_hub version: 0.20.1
- Sa…