-
Is there an easy way to convert GGUF to Marlin and vice versa? Are there any comparisons?
https://github.com/leafspark/AutoGGUF
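As far as I know there is no direct GGUF↔Marlin converter: both store low-bit weights packed into wider integers, but with different element orderings, block structures, and metadata, so the practical route is to dequantize back to fp16 and requantize in the target scheme (e.g. GPTQ, which vLLM can serve through its Marlin kernels). A toy sketch of why a repack step is unavoidable; the packing order below is purely illustrative and is not either format's real layout:

```python
import numpy as np

# Illustrative only: pack 8 consecutive 4-bit values into one uint32.
# Real formats (GGUF block quants, Marlin tiles) each use their own
# orderings, which is why packed weights cannot simply be reinterpreted
# across formats -- they must be unpacked and repacked.

def pack_rows(vals: np.ndarray) -> np.ndarray:
    """Pack 8 consecutive 4-bit values (0..15) into one uint32 each."""
    vals = vals.reshape(-1, 8).astype(np.uint32)
    packed = np.zeros(vals.shape[0], dtype=np.uint32)
    for i in range(8):
        packed |= vals[:, i] << np.uint32(4 * i)
    return packed

def unpack_rows(packed: np.ndarray) -> np.ndarray:
    """Inverse of pack_rows: recover the flat 4-bit values."""
    out = np.zeros((packed.shape[0], 8), dtype=np.uint32)
    for i in range(8):
        out[:, i] = (packed >> np.uint32(4 * i)) & np.uint32(0xF)
    return out.reshape(-1)

w = np.random.randint(0, 16, size=64)
round_trip = unpack_rows(pack_rows(w))
```

Converting between two such layouts means unpacking with one ordering and repacking with the other; a quality comparison then comes down to the underlying quantization schemes (block size, zero points, scales), not the container format.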
-
### Your current environment
Python 3.8
4× NVIDIA L20 GPUs
vLLM 0.5.4
### Model Input Dumps
_No response_
### 🐛 Describe the bug
```shell
$ python -m vllm.entrypoints.api_server --model='/mntfn/yanyi/Qwen2-…
```
-
Hi @markniu!
Because the probe's thermal drift significantly affects my nozzle-to-probe offset and I have to adjust this offset for nearly every print, the announced "bed collision sensing" would …
-
-
### Your current environment
vLLM 0.5.4
### 🐛 Describe the bug
AutoAWQ's Marlin kernel requires quantization with no zero point, but vLLM has:
```python
def query_marlin_supported_quant_types(has_zp: bool,
…
```
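If the constraint is that the Marlin path only supports quantization without a zero point, the underlying distinction is symmetric quantization (scale only) versus asymmetric quantization (scale plus zero point), which AWQ-style schemes use. A minimal numeric sketch of the two, illustrative only and not vLLM's actual code:

```python
import numpy as np

# Illustrative only: symmetric 4-bit quantization (no zero point, the case
# the Marlin kernel reportedly supports) vs asymmetric (with zero point).

def quant_sym(x: np.ndarray, bits: int = 4):
    """Symmetric: signed range centered on zero, scale only."""
    qmax = 2 ** (bits - 1) - 1                # 7 for 4-bit
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q, scale

def quant_asym(x: np.ndarray, bits: int = 4):
    """Asymmetric: unsigned range shifted by a zero point."""
    qmax = 2 ** bits - 1                      # 15 for 4-bit
    scale = (x.max() - x.min()) / qmax
    zp = np.round(-x.min() / scale)           # zero point
    q = np.clip(np.round(x / scale) + zp, 0, qmax)
    return q, scale, zp

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
q, s = quant_sym(x)        # dequantize with q * s
q2, s2, zp = quant_asym(x)  # dequantize with (q2 - zp) * s2
```

In both cases the dequantization error is bounded by one quantization step; the practical difference is that a kernel without zero-point support can only run the symmetric variant.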
-
### Your current environment
```text
The output of `python collect_env.py`
```
### 🐛 Describe the bug
At https://github.com/vllm-project/vllm/blob/main/csrc/quantization/marlin/sparse/ma…
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubun…
```
-
Hi @minuszoneAI
Can you add a node that supports loading a LoRA when using a Marlin model?
Thank you in advance.
-
### Is there an existing issue for this feature request?
- [X] I have searched the existing issues
### Is your feature request related to a problem?
Please add more nozzle sizes to Marlin:
0.6
0.8 and 1.0 …
-
### Did you test the latest `bugfix-2.1.x` code?
Yes, and the problem still exists.
### Bug Description
I just tried to install an SKR Mini E3 V3 and wire the connections for the stock LCD a…