-
I don't seem to be able to find the fast SVD code in peft.
This is the code I use myself; I'm not sure whether it matches the one in the paper:
```python
# assuming randomized_svd here is scikit-learn's implementation
from sklearn.utils.extmath import randomized_svd

V, S, Uh = randomized_svd(self.weight.data.numpy(), n_components=self.lora_rank, random_state=None)
Vr, Sr, Ur = map…
```
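For reference, here is a minimal sketch of how the truncated factors could be turned into LoRA A/B matrices, assuming the intent is an SVD-based (PiSSA-style) initialization of a linear layer's weight; the function name, the rank, and the square-root split of the singular values are my assumptions, not necessarily the paper's exact recipe:

```python
# Sketch (assumptions): build LoRA factors from a truncated randomized SVD of W.
import numpy as np
from sklearn.utils.extmath import randomized_svd

def svd_lora_init(weight: np.ndarray, rank: int):
    """Return (A, B) such that B @ A approximates the top-`rank` part of `weight`."""
    # randomized_svd returns U (out x r), s (r,), Vh (r x in)
    U, s, Vh = randomized_svd(weight, n_components=rank, random_state=0)
    sqrt_s = np.sqrt(s)
    B = U * sqrt_s            # (out_features, rank)  -> plays the role of lora_B
    A = sqrt_s[:, None] * Vh  # (rank, in_features)   -> plays the role of lora_A
    return A, B

W = np.random.randn(128, 64).astype(np.float32)
A, B = svd_lora_init(W, rank=8)
print(B.shape, A.shape, float(np.linalg.norm(W - B @ A)))
```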
-
I just read the peft fine tune script.
The `quantization_config` is not used anywhere.
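For context, a `quantization_config` normally only takes effect if it is passed to the base model's `from_pretrained` call before the PEFT wrapper is applied; here is a minimal sketch of that wiring, assuming a bitsandbytes 4-bit setup (the model name and LoRA settings are illustrative, not taken from the script in question):

```python
# Sketch (assumptions): how a quantization_config is typically consumed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# The config is inert unless it is actually forwarded to from_pretrained.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",                     # illustrative model
    quantization_config=quantization_config,
    device_map="auto",
)

model = get_peft_model(model, LoraConfig(r=8, target_modules=["q_proj", "v_proj"]))
```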
-
Hello, great contributors of unlock-hf,
I'm a DataWhaler. I've recently been learning peft and found that the peft framework integrates deeply with HF's TRF and Accelerate.
I think the great unlock-hf project could add an introduction and demos for this, to help more developers understand and apply the PEFT framework. If I have spare time in the future, I'll contribute as well!
I hope this suggestion helps make the repository's content more complete! 🤗🤗🤗
-
Hi, I ran into a problem when loading the model using "tevatron/retriever/driver/encode.py". The "lora_name_or_path" was trained with "tevatron/retriever/driver/train.py". I am confused by this problem.
Trac…
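Without the full traceback it's hard to say, but loading usually amounts to attaching the saved adapter onto the same base model it was trained from; here is a minimal sketch with peft (the base model name and adapter path are placeholders, and Tevatron's own loading path may differ):

```python
# Sketch (assumptions): attach a trained LoRA adapter to its base model for encoding.
from transformers import AutoModel
from peft import PeftModel

base = AutoModel.from_pretrained("bert-base-uncased")         # placeholder base model
model = PeftModel.from_pretrained(base, "./my_lora_adapter")  # placeholder lora_name_or_path
model = model.merge_and_unload()  # optional: fold the adapter into the base weights
model.eval()
```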
-
I have done that with peft in the past with good and not so good results ;)
-
Hello, the latest peft pulled right now doesn't have 0.4.0dev.
![image](https://github.com/EvilPsyCHo/train_custom_LLM/assets/81786651/bc9394ca-08c9-45b4-930a-e2846feaa0fb)
-
### System Info
Python 3.11.9
transformers==4.40.2
peft==0.11.2
### Who can help?
@BenjaminBossan
I'm interested in using [Inference with different LoRA adapters in the same batch](https://hu…
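For anyone landing here, that feature lets you pass one adapter name per example in the batch; below is a minimal sketch of how I understand the documented usage (the base model, adapter paths, names, and prompts are placeholders):

```python
# Sketch (assumptions): mixed-batch inference, one LoRA adapter name per input row.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "facebook/opt-125m"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

model = PeftModel.from_pretrained(base, "./adapter_a", adapter_name="adapter_a")
model.load_adapter("./adapter_b", adapter_name="adapter_b")

prompts = ["Hello", "Bonjour", "Hola"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

# "__base__" requests the un-adapted base model for that row.
adapter_names = ["adapter_a", "adapter_b", "__base__"]
outputs = model.generate(**inputs, adapter_names=adapter_names, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```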
-
I have followed the Sample Colab with my custom dataset (< 100 samples), with the same configs as in the Sample Colab (loading the model in 4-bit, dtype as None, and other configs like Peft and Tra…
-
### Describe the bug
When I load several LoRAs with set_lora_device(), GPU memory keeps growing, from 20 GB to 25 GB; this function doesn't seem to work.
### Reproduction
for key in lora_list:
…
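For context, my understanding of the intended usage is to load each adapter once and then move the inactive ones off the GPU with set_lora_device(); a rough sketch with diffusers is below (the pipeline, adapter paths, and names are placeholders, and whether CUDA memory is actually released afterwards is exactly what this issue is questioning):

```python
# Sketch (assumptions): cycle LoRA adapters between CPU and GPU with set_lora_device().
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

lora_list = {"style_a": "./loras/style_a", "style_b": "./loras/style_b"}  # placeholders
for name, path in lora_list.items():
    pipe.load_lora_weights(path, adapter_name=name)

# Keep only the adapter about to be used on the GPU; park the rest on the CPU.
pipe.set_lora_device(["style_a"], "cuda")
pipe.set_lora_device(["style_b"], "cpu")
torch.cuda.empty_cache()  # ask the allocator to release cached blocks
```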
-
Hi, I'm trying to run the [script for PEFT](https://github.com/AutoGPTQ/AutoGPTQ/blob/main/examples/peft/peft_lora_clm_instruction_tuning.py) available in the examples folder of autogptq but I get the…