-
Eg https://huggingface.co/laion/CLIP-ViT-H-14-frozen-xlm-roberta-large-laion5B-s13B-b90k/tree/main
Needs more configuration: adapting the weights and also changing the model at https://github.com/huggingfa…
-
(envName) PS C:\Users\XJ768PU\Downloads\llm-graph-builder-main (1)\llm-graph-builder-main\backend> uvicorn score:app --reload
INFO: Will watch for changes in these directories: ['C:\\Users\\XJ768…
-
To run LLaMA 3.1 (or similar large language models) locally, you need specific hardware requirements, especially for storage and other resources. Here's a breakdown of what you typically need:
### …
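The breakdown above is truncated, but the dominant storage/VRAM cost is easy to estimate yourself: parameter count times bytes per parameter. A minimal sketch (the rule of thumb and the figures are assumptions, not taken from the truncated list; it ignores activations and the KV cache):

```python
def weight_storage_gb(n_params: float, bytes_per_param: float) -> float:
    """Rough storage/VRAM needed for the weights alone
    (ignores activations, optimizer state, and the KV cache)."""
    return n_params * bytes_per_param / 1e9

# Llama 3.1 8B in fp16/bf16 (2 bytes per parameter): ~16 GB
print(weight_storage_gb(8e9, 2))
# The same model quantized to 4 bits (0.5 bytes per parameter): ~4 GB
print(weight_storage_gb(8e9, 0.5))
```

This is why 4-bit quantization is usually the difference between fitting an 8B model on a consumer GPU or not.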
-
I installed on Windows and it is failing:
from torchao.quantization import quantize_
pip freeze
```
Microsoft Windows [Version 10.0.19045.4894]
(c) Microsoft Corporation. All rights reserved.
…
-
I've reinstalled a few times, but on the first generation I keep running into the same issue:
downloads keep failing and this screen freezes.
![image](https://github.com/user-attachments/assets/3f110bc3-16…
-
@tomaarsen Hello Tom, I hope you are well.
I am trying to enable DeepSpeed in the Sentence Transformers training arguments via deepspeed="deepspeed_config.json", and have also tried an accelerate config, but it'…
-
Hi! I am using transformers 4.34 and tiktoken 0.4.0. I am trying to download the tokenizer for CodeGen 2.5, but when I run the command in the tutorial
```
>>> from transformers import AutoTokenizer,…
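For context, CodeGen 2.5 ships its tiktoken-based tokenizer as custom code on the Hub, so loading it typically needs `trust_remote_code=True`. A minimal sketch (the model id and the need for the flag are assumptions based on the model card, and the download is guarded so the snippet degrades gracefully offline):

```python
try:
    from transformers import AutoTokenizer

    # trust_remote_code=True allows transformers to execute the custom
    # tokenizer code shipped in the model repository (assumed requirement)
    tok = AutoTokenizer.from_pretrained(
        "Salesforce/codegen25-7b-mono", trust_remote_code=True
    )
except Exception as e:
    # No network / missing dependency: report instead of crashing
    tok = None
    print(f"tokenizer load failed: {e}")
```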
-
### Motivation
Currently, device_map="auto" only supports a single-node, multi-GPU setup (https://github.com/huggingface/transformers/issues/24747). If you have access to 8xA100 80GB/40GB, things ar…
-
### Describe the feature request
I'm running a TTS project where the BERT model needs to expose the "hidden_states" component of the middle-layer output for later processing. I'm using C++ and can't se…
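In the Python API, the middle-layer tensor the request refers to looks like this; a sketch with a toy randomly-initialised model (the tiny config is made up, and exporting this tensor to a C++ runtime would require adding it as a graph output):

```python
import torch
from transformers import BertConfig, BertModel

config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=4,
                    num_attention_heads=2, intermediate_size=64)
model = BertModel(config)
input_ids = torch.randint(0, config.vocab_size, (1, 8))

out = model(input_ids, output_hidden_states=True)
# hidden_states has num_hidden_layers + 1 entries: the embedding
# output first, then one entry per transformer layer
middle = out.hidden_states[config.num_hidden_layers // 2]
print(middle.shape)  # (batch, sequence, hidden_size)
```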
-
I have already compiled transformers from source.
My inference code is as follows:
```
def load(self):
    from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
    from transformers.generation import Generat…
```