-
I had to change to `tcxoVoltage=0` because the example code's value just fails to run
```python
from sx1262 import SX1262
import time
sx = SX1262(spi_bus=1, clk=14, mosi=15, miso=24, cs=13, ir…
nwgat updated
1 month ago
-
```
D:\Blender_ComfyUI\ComfyUI\custom_nodes\Lora-Training-in-Comfy/sd-scripts/train_network.py
The following values were not passed to `accelerate launch` and had defaults used instead:
`--nu…
-
I tried to extract a LoRA from `Xwin-LM/Xwin-Math-70B-V1.1` and got this:
```
delta_weight = new_weight - base_weight
~~~~~~~~~~~^~~~~~~~~~~~~
RuntimeError: The size of tensor…
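The `RuntimeError` above means the fine-tuned and base weight tensors have different shapes at some layer (often a resized embedding/vocab), so the element-wise subtraction fails. A minimal sketch of a shape guard around the delta computation, using NumPy as a stand-in for torch tensors (the helper name `delta_weight` here is hypothetical, not the extraction script's actual function):

```python
import numpy as np

def delta_weight(new_weight, base_weight):
    """Compute the LoRA delta, failing early with a clear message on shape mismatch."""
    if new_weight.shape != base_weight.shape:
        raise ValueError(
            f"Shape mismatch: fine-tuned weight {new_weight.shape} vs "
            f"base weight {base_weight.shape}; check that both checkpoints "
            "come from the same architecture (e.g. same vocab size)."
        )
    return new_weight - base_weight

base = np.zeros((4, 3))
new = np.ones((4, 3))
print(delta_weight(new, base).shape)  # (4, 3)
```

Comparing the two checkpoints' state-dict shapes before extraction makes the failing layer obvious instead of surfacing as a mid-loop subtraction error.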
-
### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused b…
-
# Problem
My goal is to save the merged model as a GGUF file, but I'm getting various errors.
The deeper problem seems to be that merging the LoRA with the base model isn't saving a merged file.
I th…
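Independent of the GGUF export, the merge step itself is just folding the low-rank update back into the base weight. A minimal NumPy sketch of that arithmetic (dimensions and names are illustrative, not from the reporter's setup) can help verify that a full-weight merged tensor is actually being produced before export:

```python
import numpy as np

# Hypothetical dimensions: d_out x d_in base weight, rank-r adapter.
d_out, d_in, r, alpha = 8, 6, 2, 4

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))   # base weight
A = rng.normal(size=(r, d_in))       # LoRA down-projection
B = rng.normal(size=(d_out, r))      # LoRA up-projection

# Standard LoRA merge: W' = W + (alpha / r) * B @ A
W_merged = W + (alpha / r) * (B @ A)
print(W_merged.shape)  # (8, 6)
```

If the saved checkpoint still contains separate `A`/`B` adapter tensors rather than full-shape merged weights, the merge never happened, and the GGUF converter will fail downstream.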
-
### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a …
-
#622 fixes issue #609 for loading HF models into a sharded representation. But now when I try to serialize a PEFT model I'm getting a similar error as before:
```
ValueError: Received incompatible…
-
A question: I did LoRA fine-tuning with the two quantized models qwen2-7B-instruct-AWQ and qwen2-7B-instruct-GPTQ-int4, and the loss does not converge with either. The learning rate stops changing after a few steps, and adjusting learning-rate and lora-rank had no effect.
With the same data, LoRA fine-tuning of qwen2-7B-instruct converges normally.
-
What happened?
```
--- Terminal on COM6 | 115200 8-N-1
--- Available filters and text transformations: colorize, debug, default, direct, esp32_exception_decoder, hexlify, log2file, nocontrol, printable,…
-
Hi there
What a cool project.
Do you know whether this will work with the LoRa T-Beam boards?
Thanks
Michael GM5AUG