-
Thanks for publishing this customized version of vLLM.
Following the readme.md, I tried to install it and ran into some problems.
The error message is as follows:
```
Building wheels for collecte…
-
Hello,
The LoRa gateway, network server, Node-RED, and the MQTT broker all run on the same Raspberry Pi based LoRa gateway.
The network server version is 0.5.5.
Testing MQTT pub-broker-sub on…
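For reference, a minimal standalone pub/sub check against the broker on the Pi might look like the sketch below. It assumes the paho-mqtt 1.x client API and a hypothetical `lora/test` topic; the real application topics depend on the network server.

```python
# Minimal pub/sub loopback check against the local broker.
# Assumes the paho-mqtt 1.x client API; "lora/test" is a hypothetical topic.
import time
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(f"received on {msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)          # broker runs on the same Pi
client.subscribe("lora/test")              # hypothetical test topic
client.loop_start()
client.publish("lora/test", "hello from the gateway")
time.sleep(2)                              # give the broker time to deliver
client.loop_stop()
```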
-
```
!swift sft \
  --model_type got-ocr2 \
  --model_id_or_path stepfun-ai/GOT-OCR2_0 \
  --sft_type lora \
  --dataset /kaggle/working/output_data.json \
  --output_dir /kaggle/working/hindi_got_model_3 \
  --…
```
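For context, the `/kaggle/working/output_data.json` dataset referenced above was built beforehand. A rough sketch of how such a file might be assembled is below; the `query`/`response`/`images` field names are an assumption here, so check the ms-swift documentation for the exact schema it expects for got-ocr2.

```python
# Rough sketch: assemble a JSON dataset for LoRA fine-tuning.
# The query/response/images record layout is an assumption; verify it
# against the ms-swift docs before training.
import json

records = [
    {
        "query": "<image>OCR this page into plain text.",
        "response": "transcribed Hindi text for page 1 ...",
        "images": ["/kaggle/working/images/page_001.png"],
    },
    # ... one record per training image
]

with open("/kaggle/working/output_data.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```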
-
I ran this:
```
python -m slora.server.api_server --max_total_token_num 64 --model meta-llama/Llama-2-13b-hf --tokenizer_mode auto --lora /playground/slora-playground/adapters-1/ --lora /playground/s…
-
### Expected Behavior
-
### Actual Behavior
![image](https://github.com/user-attachments/assets/1f9608dc-4631-41c3-bd2a-bfe506d39104)
SD15 and Flux work fine; the problem occurs only with SDXL.
Co…
-
I have a Mistral-7B model with fine-tuned LoRA weights in bfloat16.
I ran into issues when attempting to use my adapters, which were compiled for bfloat16.
Running the following command …
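Separately from that command, one quick way to confirm the adapter tensors really are bfloat16 is to inspect them directly. A minimal sketch, assuming the adapter was saved as a safetensors file (the filename here is hypothetical):

```python
# Minimal sketch: print the name, shape, and dtype of every LoRA tensor.
# "adapter_model.safetensors" is a hypothetical path; point it at the
# adapter file that was actually saved/compiled.
from safetensors import safe_open

with safe_open("adapter_model.safetensors", framework="pt") as f:
    for name in f.keys():
        tensor = f.get_tensor(name)
        print(name, tuple(tensor.shape), tensor.dtype)
```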
-
Hello. After installing the dependencies per requirements.txt and downloading the "chatglm-6b" and "lora" weights from huggingface.co, running `chat_server.py` reports a hidden-size mismatch error: ([8192, 8, 1]) vs ([12288, 8]).
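One way to narrow this down is to dump the shapes stored in the downloaded LoRA checkpoint and compare them with what the chatglm-6b layers expect. A minimal sketch, assuming the weights are a PyTorch state dict (the filename is an assumption):

```python
# Minimal sketch: list every tensor shape in the LoRA checkpoint so the
# ([8192, 8, 1]) vs ([12288, 8]) mismatch can be traced to a specific layer.
# "adapter_model.bin" is an assumed filename; use the actual downloaded file.
import torch

state = torch.load("adapter_model.bin", map_location="cpu")
for name, tensor in state.items():
    print(name, tuple(tensor.shape))
```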
-
### What is the issue?
I downloaded the CodeGemma and CodeLlama models from Hugging Face and fine-tuned them using LLaMA-Factory. After importing the fine-tuned model into Ollama, CodeLlama works norm…
-
When adding the `Lora Loader` node, the console shows a `substring not found` error.
Browser: Firefox 130.0b9 on Windows
```
Traceback (most recent call last):
File ".miniforge3/envs/comfy/li…
-
I need to convert the variable I received over LoRa before sending it to ThingsBoard, but I get an error: "invalid conversion from 'int' to 'const char*' [-fpermissive]".
Thank you!
```
#include
#…