-
### Your current environment
Problem
### 🐛 Describe the bug
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
import torch
# Initialize the tokenizer
tokeniz…
```
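The truncated snippet above sets up vLLM's `LLM` and `SamplingParams`. As a hedged, dependency-free sketch (not vLLM's actual implementation, which operates on GPU logits tensors), the top-p (nucleus) filtering that a `top_p` sampling parameter controls can be illustrated in plain Python:

```python
# Hypothetical, library-free sketch of top-p (nucleus) filtering.
# This only illustrates the idea on a plain probability dictionary.

def top_p_filter(probs, top_p):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches top_p, then renormalize. probs maps token -> prob."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {token: p / total for token, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zzz": 0.05}
filtered = top_p_filter(probs, top_p=0.8)
print(sorted(filtered))  # low-probability tail ("cat", "zzz") is dropped
```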
-
#### Minimal reproducible example
```python
from axtk.generation_utils import RegexLogitsProcessor, TokenHealingLogitsProcessor
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer,…
```
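The example imports a `TokenHealingLogitsProcessor`. As a hedged illustration of the token-healing idea (not axtk's actual code, which works on tokenizer IDs and model logits), the decoder backs up over the last prompt token and then only allows vocabulary entries that extend the removed text:

```python
# Hypothetical sketch of token healing on a toy string vocabulary.

def heal_candidates(prompt_tokens, vocab):
    """Remove the last prompt token and return the vocab tokens that could
    re-generate it as a prefix (so a trailing 'http' can heal to 'http://')."""
    if not prompt_tokens:
        return prompt_tokens, list(vocab)
    *kept, last = prompt_tokens
    allowed = [tok for tok in vocab if tok.startswith(last)]
    return kept, allowed

vocab = ["http", "http://", "https://", "world", ":"]
kept, allowed = heal_candidates(["hello", " ", "http"], vocab)
print(allowed)  # ['http', 'http://', 'https://']
```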
-
As reported by @ArthurZucker:
> Quick question, I am seeing this in peft: https://github.com/huggingface/peft/blob/f2b6d13f1dbc971c7653aa65e82822ea2d84bb38/src/peft/peft_model.py#L1665 where there …
-
All libraries are installed, following the instructions here: https://huggingface.co/h94/IP-Adapter-FaceID
The error is:
```
Traceback (most recent call last):
File "G:\IP-Adapter-FaceID\v…
```
-
```python
from awq import AutoAWQForCausalLM
from awq.utils.utils import get_best_device
from transformers import AutoTokenizer, TextStreamer

quant_path = "/workspace/awq_model"
if get_best_device() …
```
-
Today I was prompted to update my IDE (PyCharm) to the current 2024.1 version and the plugin stopped working. Are there any plans to update it?
![image](https://github.com/alexadhy/tokyonight-jetbr…
-
I was able to run the code exactly as provided, but I am somehow getting poor-quality outputs that do not match the repo results.
These are some outputs I got for text-conditional image-to-video generation.
Using …
-
# Bug Report
I am referring to [https://github.com/microsoft/onnxruntime-inference-examples/tree/main/quantization/language_model/llama/smooth_quant](https://github.com/microsoft/onnxruntime-inference…
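The linked example applies SmoothQuant, which migrates quantization difficulty from activations to weights with per-input-channel scales s_j = max|X_j|^α / max|W_j|^(1−α): the activation is divided by s and the weight multiplied by it, so the product is unchanged. A hedged, dependency-free sketch of that equivalence (the α value and toy matrices are illustrative assumptions):

```python
# Hypothetical plain-Python sketch of SmoothQuant scale migration.
# Real implementations compute absmax from calibration data on tensors.

def smooth_scales(x_absmax, w_absmax, alpha=0.5):
    """Per-input-channel scales s_j = max|X_j|^alpha / max|W_j|^(1-alpha)."""
    return [xa ** alpha / wa ** (1 - alpha) for xa, wa in zip(x_absmax, w_absmax)]

def matmul(x, w):
    """x: list of rows; w: list of rows, shape (in_features, out_features)."""
    return [[sum(xi * wij for xi, wij in zip(row, col)) for col in zip(*w)]
            for row in x]

x = [[8.0, 0.5], [4.0, 1.0]]   # activations with one outlier channel
w = [[0.2, 0.4], [1.6, 0.8]]   # weights, 2 input channels x 2 outputs
x_absmax = [max(abs(row[j]) for row in x) for j in range(2)]
w_absmax = [max(abs(v) for v in w[j]) for j in range(2)]
s = smooth_scales(x_absmax, w_absmax)

x_smooth = [[v / sj for v, sj in zip(row, s)] for row in x]   # X / diag(s)
w_smooth = [[v * sj for v in row] for sj, row in zip(s, w)]   # diag(s) W

# The product is mathematically unchanged, but x_smooth has a flatter
# per-channel range, which is what makes activation quantization easier.
print(matmul(x, w))
print(matmul(x_smooth, w_smooth))
```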
-
### Willingness to contribute
Yes. I would be willing to contribute this feature with guidance from the MLflow community.
### Proposal Summary
Adding a new feature to MLflow for enhanced prompt man…
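Since the proposal summary is cut off, here is a hedged, stdlib-only sketch of what versioned prompt management could look like conceptually; the class and method names are illustrative assumptions, not MLflow's API:

```python
# Hypothetical sketch of versioned prompt management; names are
# illustrative assumptions, not real MLflow interfaces.

class PromptRegistry:
    def __init__(self):
        self._store = {}  # prompt name -> list of template versions

    def register(self, name, template):
        """Store a new version of a prompt template; return its version number."""
        versions = self._store.setdefault(name, [])
        versions.append(template)
        return len(versions)  # versions are 1-indexed

    def render(self, name, version=None, **params):
        """Fill the latest (or a pinned) version with parameters."""
        versions = self._store[name]
        template = versions[-1] if version is None else versions[version - 1]
        return template.format(**params)

registry = PromptRegistry()
registry.register("summarize", "Summarize the following text: {text}")
registry.register("summarize", "Summarize in {n} bullet points: {text}")

print(registry.render("summarize", n=3, text="..."))        # latest version
print(registry.render("summarize", version=1, text="..."))  # pinned version
```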
-
Greetings :)
While scheduling a stack of jobs and interchanging checkpoints (via the dropdown) and LoRAs (inserted into the prompt), the generations become scattered, as they do when too many full s…