-
I am running SwarmUI and ComfyUI on the same node, a Windows host. I already have the models, LoRAs, and everything else downloaded and in use by ComfyUI, so in Server -> Server Configuration I followed the…
-
### Your current environment
```
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC versio…
```
-
### Describe the bug
Hi guys, it's me again.
I'm experiencing a new bug. When I add a component dynamically and change its value, everything is fine. But if I remove the component and then add it again, o…
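The excerpt doesn't name the UI framework, so purely as an illustration of the add/change/remove/re-add sequence being described, here is a hypothetical repro sketch assuming a Gradio-style dynamic render (all names are illustrative, not from the report):
```
import gradio as gr

with gr.Blocks() as demo:
    show = gr.Checkbox(label="Show component", value=True)

    @gr.render(inputs=show)
    def render(visible):
        # The component is created dynamically; unchecking the box removes it,
        # re-checking re-creates it -- the step where the reported bug appears.
        if visible:
            gr.Textbox(label="Dynamic value")

demo.launch()
```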
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/axolotl-ai-cloud/axolotl/labels/bug) and didn't find any similar reports.
###…
-
I've trained the `unsloth/Llama-3.2-3B-Instruct-bnb-4bit` model successfully, but when I try to use it with `FastLanguageModel.from_pretrained`, I get this error:
```
Traceback (most recent call la…
```
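For reference, Unsloth's loader is normally invoked as `FastLanguageModel.from_pretrained`, which returns a model and tokenizer pair. A minimal sketch, with illustrative parameters not taken from the report:
```
from unsloth import FastLanguageModel

# Minimal load sketch; max_seq_length is an assumed value, not from the report.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # matches the -bnb-4bit checkpoint
)
```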
-
Llama.cpp has updated the following code in common/common.cpp:
```
if (arg == "--lora") {
    CHECK_ARG
    params.lora_adapter.emplace_back(argv[i], 1.0f);
    return true;
}
```
to `if (arg == "-…
-
**Describe the bug**
It's frustrating to run e.g. `ilab -v data generate` only to get:
```
...
DEBUG 2024-09-12 13:23:22,053 instructlab.model.backends.vllm:205: vLLM serving command is: ['/opt/…
```
-
This only matters when the same sd_ctx is used for multiple prompts: LoRAs that were applied in a previous prompt but don't appear in the current prompt are not unapplied.
Steps to reproduce:…
-
Hello Team,
The attached codec was tested numerous times and worked fine.
However, the LoRa server only reads the data once and then reports a duplicate error (see attached Duplicate Error – 1)…
-
Run the command below:
```
python -m vllm.entrypoints.api_server \
    --model meta-llama/Llama-2-7b-hf \
    --enable-lora \
    --lora-modules sql-lora=~/.cache/huggingface/hub/models--yard1--llama-2-7b-…
```
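Once a server started with `--enable-lora` is up, the adapter registered via `--lora-modules` can be selected by its name. A minimal sketch, assuming the OpenAI-compatible server (`vllm.entrypoints.openai.api_server`) on its default port 8000; the prompt text is illustrative:
```
import requests

# Select the LoRA adapter by the name registered in --lora-modules ("sql-lora").
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "sql-lora",
        "prompt": "Write a SQL query listing all users.",
        "max_tokens": 64,
    },
)
print(resp.json()["choices"][0]["text"])
```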