-
Thanks for your great work. A question about LoRA fine-tuning: what are the minimum server resources (GPU memory and system memory) required for fine-tuning a LoRA model?
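As a rough starting point, the dominant costs are holding the frozen base weights plus gradients and optimizer state for the small set of trainable LoRA parameters. A back-of-envelope sketch (all numbers here are illustrative assumptions, not measurements, and activation memory is deliberately excluded):

```python
# Rough estimate of GPU memory for LoRA fine-tuning.
# Illustrative assumptions only; activations/KV cache are NOT included.

def lora_finetune_memory_gb(n_params_base, rank_fraction=0.01,
                            base_bytes=2, trainable_bytes=10):
    """Estimate GPU memory (GB) for LoRA fine-tuning.

    n_params_base   -- parameter count of the frozen base model
    rank_fraction   -- trainable LoRA params as a fraction of the base
                       (typically well under 1% at small ranks)
    base_bytes      -- bytes per frozen weight (2 for fp16/bf16)
    trainable_bytes -- bytes per trainable param for its weight,
                       gradient, and Adam moment buffers combined
    """
    frozen = n_params_base * base_bytes
    trainable = n_params_base * rank_fraction * trainable_bytes
    return (frozen + trainable) / 1e9

# Example: a 7B base model in bf16 with ~1% of params trainable.
print(round(lora_finetune_memory_gb(7e9), 1))  # ~14.7 GB before activations
```

In this toy estimate the frozen weights dominate, which is why quantizing the base model (as QLoRA does) is the usual lever for fitting on smaller GPUs.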
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
-
If we lose connection to the server during LoRA training and then reconnect, we don't get any info in the GUI, but it seems like the LoRA is still training. Is this the case? Can we still save our LoRA…
-
WIP project roadmap for LoRAX. We'll continue to update this over time.
# v0.10
- [ ] Speculative decoding adapters
- [ ] AQLM
# v0.11
- [ ] Prefix caching
- [ ] BERT support
- [ ] Embe…
-
## Summary:
After successfully building and flashing the firmware on my lora32_v21 boards, rnsd reports that the TX power and bandwidth reported by the board do not match, and thus the RNode setu…
-
Hi, thanks for your wonderful work.
I am struggling to use my LoRA-tuned model.
I performed the following steps:
1. Fine-tuning with LoRA
- Undi95/Meta-Llama-3-8B-Instruct-hf as the base model
- llama3 …
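For reference, the update a trained LoRA adapter applies is just a low-rank delta added to the frozen weight, and "merging" folds that delta in before serving. A minimal numpy sketch of the merge arithmetic (dimensions and scaling are toy values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4            # toy dims: weight is d x d, LoRA rank r

W = rng.standard_normal((d, d))  # frozen base weight
A = rng.standard_normal((r, d))  # LoRA "down" projection
B = np.zeros((d, r))             # LoRA "up" projection (zero-init at start)

# Merging folds the scaled low-rank delta into the base weight:
W_merged = W + (alpha / r) * (B @ A)

# With B still at its zero init, the merge is a no-op --
# a freshly initialized adapter leaves the model unchanged.
assert np.allclose(W_merged, W)
```

If a tuned model misbehaves at inference time, a common cause is serving the base model without the adapter loaded (or merged), or using a different prompt template than the one used for fine-tuning.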
-
### Issue Description
I've been having this issue since yesterday, at least. Image generation hangs after hitting Generate, and roughly ten minutes later this error comes up.
### Version Platform…
-
### Expected Behavior
The LoRA should load with minimal VRAM overhead (it is a small LoRA; the 4/4-rank one is 40 MB).
### Actual Behavior
Large VRAM usage increase when loading certain LoRA…
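The expectation of small overhead is reasonable: a rank-r LoRA on a d_out x d_in weight stores only r*(d_in + d_out) extra parameters. A quick sketch with illustrative dimensions (these are assumed Llama-style shapes, not the reporter's actual model):

```python
# Why a low-rank adapter is small: a rank-r LoRA on a (d_out x d_in)
# weight adds only r * (d_in + d_out) parameters.
def lora_params(d_out, d_in, r):
    return r * (d_in + d_out)

# Illustrative assumption: rank-4 adapters on the q/k/v/o projections
# (4096 x 4096 each) across all 32 layers of a Llama-style model.
per_layer = 4 * lora_params(4096, 4096, 4)   # four projections per layer
total = 32 * per_layer
print(total, "params ->", total * 2 / 1e6, "MB in fp16")
```

A few million parameters at 2 bytes each is single-digit megabytes, so a VRAM jump of hundreds of megabytes on load points at something other than the adapter weights themselves (e.g. dtype upcasting or extra buffers).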
-
Hello!
Is it possible to build a single-channel gateway on an RPi 4 with Ubuntu 24.04 + a Waveshare SX1262 LoRaWAN HAT, and use it with the ChirpStack Network Server to work with nodes based on Heltec CubeCell and RFM95-based n…
-
### System Info
```text
text-generation-launcher 2.1.0
```
### Information
- [X] Docker
- [X] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reprod…