-
### Feature request
I would like to ask if there is a way to perform iterative generation (n times) within the pipeline, specifically for models like LLMs. If this feature is not available, is there …
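To illustrate the request: n-pass generation can be approximated today by looping outside the pipeline and feeding each output back in as the next prompt. A minimal sketch of that loop, where `pipe` is a placeholder callable standing in for an actual generation call (a real `transformers` text-generation pipeline returns `[{"generated_text": ...}]` rather than a plain string):

```python
def iterative_generate(pipe, prompt: str, n: int) -> list[str]:
    """Run a generation callable n times, feeding each output back as the next prompt."""
    outputs = []
    text = prompt
    for _ in range(n):
        text = pipe(text)      # one generation pass
        outputs.append(text)   # keep every intermediate result
    return outputs

# Stub generation step, for illustration only; swap in a real pipeline call here.
def echo_pipe(text: str) -> str:
    return text + "!"

print(iterative_generate(echo_pipe, "hi", 3))  # → ['hi!', 'hi!!', 'hi!!!']
```

The point of the request is presumably to fold this loop into the pipeline itself so intermediate results don't have to be re-tokenized on every pass.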
-
features = self.dino_block.forward_features(x.to("cuda"))['x_norm_patchtokens']
File "/root/.cache/torch/hub/facebookresearch_dinov2_main/dinov2/models/vision_transformer.py", line 258, in forward_…
-
I am trying to quantize [lightblue/qarasu-14B-chat-plus-unleashed](https://huggingface.co/lightblue/qarasu-14B-chat-plus-unleashed), which is based on [Qwen/Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat).
…
-
Could you please export the Llama-3-PyTorch.ipynb Jupyter notebook to pure Python, as Llama-3-PyTorch_model.py and Llama-3-PyTorch_tokenizer.py?
Because I want to try to adapt this to work w…
-
(marconet) C:\Users\L\Pictures\MARCONet>python test_sr.py -i "C:\Users\L\Downloads\bsrgan\inputs" --real_ocr
################################################################
Input …
-
I ran into this problem while loading this model:
model, vis_processors, text_processors = load_model_and_preprocess("blip2_image_text_matching", "pretrain", device=device, is_eval=True)…
-
Failed to import transformers.models.mllama.processing_mllama because of the following error (look up to see its traceback):
No module named 'transformers.models.mllama.processing_mllama'
Somehow …
-
Dear quanto folks,
I implemented quantization as suggested in your coding example [quantize_sst2_model.py](https://github.com/huggingface/optimum-quanto/blob/main/examples/nlp/text-classification/s…
-
2024-04-30 02:51:59,540- root:1882- WARNING- Traceback (most recent call last):
File "/data/zr/ComfyUI/nodes.py", line 1864, in load_custom_node
module_spec.loader.exec_module(module)
File …
-
The `jetmoe-8b` model runs fine, but for `jetmoe-8b-chat`, even with the latest `transformers` and `tokenizers`, I get:
```
Traceback (most recent call last):
File "/home/cqrl/.local/lib/python3.11/si…
```