-
If a `display.update();` call is included anywhere in the code, the LoRa side will only receive a packet once and then never work again.
It was working until very recently, using previous versions of…
-
### Describe the feature
Hi, when training a big model like llama2-70b with LoRA, it runs into OOM due to the unsharded model.
It would help a lot if LoRA were supported with `GeminiPlugin` or `Hybri…
-
I don't know what is happening: when trying to train, it raises the following error, and a few seconds later it says training was concluded (of course, no LoRA was trained and it failed).
```
[…
-
I cannot implement my own solution.
Please help. I have gone through your entire code, but I am still learning the basics of C++ and, unfortunately, I am making some mistakes. I wanted to implement your code into …
-
I am not able to download the LoRA adapters for the NLU task this week; is there any other place I can find them?
-
Many LoRAs have specific trigger words, which can be awkward to keep track of after a while.
This feature would simply add the option to attach text notes to any model under the Models page, and…
-
Currently only the original LoRA is supported as a non-fused adapter. I hope support can be added for QLoRA/QA-LoRA adapters as well, without fusing them into the base model.
-
Does BrushNet support LCM-LoRA?
Following the steps in:
https://huggingface.co/docs/diffusers/main/en/using-diffusers/inference_with_lcm_lora
I found the results are not very good.
-
**Describe the bug**
Issue 1: when training the Atom-7Bchat model with PPO, setting `--lora_target_modules ALL \` raises an error, but specifying the module names explicitly does not: `--lora_target_modules o_proj,up_proj,down_proj,v_proj,k_proj,gate_proj,q_proj \`
![baocuo](https://github…
-
I noticed that it is appended in the response, but `[tokenizer.pad_token_id]` is also appended to input_ids; aren't these two additions duplicated?
def process_func(example):
    MAX_LENGTH = 384  # the Llama tokenizer splits one Chinese character into multiple tokens, so leave headroom in the max length to keep the data intact
input_ids, atte…
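For reference, the common Llama SFT preprocessing pattern this question is about appends the eos/pad id exactly once to both `input_ids` and `labels`, and masks the instruction part of the labels. A minimal pure-Python sketch of that pattern (the token ids, the pad id, and the argument names are made-up placeholders, not taken from the original code):

```python
# Hypothetical illustration of the usual causal-LM SFT preprocessing:
# concatenate prompt + answer token ids, append the eos/pad id ONCE,
# and mask the prompt portion of the labels with -100.
IGNORE_INDEX = -100
PAD_TOKEN_ID = 2  # assumed eos/pad id; the real value comes from the tokenizer

def process_func(instruction_ids, response_ids, max_length=384):
    # input_ids: prompt + answer + one trailing pad/eos token
    input_ids = instruction_ids + response_ids + [PAD_TOKEN_ID]
    # attention_mask: attend to every real token, including the trailing eos
    attention_mask = [1] * len(input_ids)
    # labels: prompt masked out so loss is computed only on the answer + eos
    labels = [IGNORE_INDEX] * len(instruction_ids) + response_ids + [PAD_TOKEN_ID]
    # truncate all three sequences consistently
    return input_ids[:max_length], attention_mask[:max_length], labels[:max_length]

input_ids, attention_mask, labels = process_func([10, 11, 12], [20, 21])
# the pad/eos id appears once, at the end; appending it again in the
# response text AND in input_ids would indeed duplicate it
```

If the response string already ends with the eos token before tokenization, appending `[tokenizer.pad_token_id]` again would produce two trailing end tokens, which is presumably what the question is pointing at.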