-
### Your current environment
```text
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC ve…
-
I'm working on sending data from an STM32WB55 to Raspberry Pis. I previously had it working on Pi 3Bs. Now, on a Pi 4B, after more than 3 × 1024 frames, each frame carrying 17 data bytes and a CRC, I g…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
The batch file starts the services, loads extensions, doe…
-
rtl-wmbus supports modes T1, C1 and S1, but not mode N, as mentioned in https://github.com/weetmuts/wmbusmeters/issues/321. With support for mode N, the wmbusmeters project could benefit from i…
-
### First, confirm
- [X] I have read the [instruction](https://github.com/Gourieff/sd-webui-reactor/blob/main/README.md) carefully
- [X] I have searched the existing issues
- [X] I have updated t…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue y…
-
Does this single-channel gateway integrate with ChirpStack?
If yes, please suggest which documentation to refer to.
If not, please suggest an alternate way to integrate it.
-
### System Info
Environment:
2× NVIDIA A100 with NVLink
TensorRT-LLM Backend version v0.8.0
LLAMA2 engine built with paged_kv_cache and tp_size 2, world size 2
x86_64 arch
### Who can hel…
-
### Motivation
By using multiple LoRA adapters, we can expect to achieve various behaviors within a single inference server. This can potentially reduce the number of servers needed to deploy inferen…
-
LoRA training on FLUX.schnell, getting this error:
```
Running 1 process
Loading Flux model
Loading transformer
Fusing in LoRA
Error running job: PEFT backend is required for this method.
===…