-
Starting SD3 medium low VRAM...
Python command check: OK
Python version: 3.12.4
C:\SD\sd3-low-vram\env\Lib\site-packages\diffusers\models\transformers\transformer_2d.py:34: FutureWarning: `Transfor…
-
### Model description
Hey there! I was looking to use nomic-ai/nomic-embed-vision-v1.5 alongside the text version so that I could support image/text queries in the same semantic space, but gettin…
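For context, the point of a shared semantic space is that image and text embeddings are directly comparable. A minimal sketch of cross-modal retrieval, assuming the embeddings have already been produced by the vision and text encoders (the vectors below are made-up placeholders, not real model outputs):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Placeholder embeddings; in practice these come from the vision/text models.
image_embeddings = {
    "cat.jpg": [0.9, 0.1, 0.0],
    "car.jpg": [0.1, 0.9, 0.2],
}
text_query_embedding = [0.85, 0.15, 0.05]  # e.g. embedding of "a photo of a cat"

# Because both modalities live in the same space, retrieval is a plain similarity sort.
ranked = sorted(
    image_embeddings,
    key=lambda name: cosine_similarity(image_embeddings[name], text_query_embedding),
    reverse=True,
)
```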
-
### System Info
Using
- `transformers` version: 4.41.2
- Platform: Windows-10-10.0.17763-SP0
- Python version: 3.12.3
- Huggingface_hub version: 0.23.3
- Safetensors version: 0.4.3
- Accelera…
-
Since OpenSearch 2.13, the [**fixed token length algorithm**](https://opensearch.org/docs/latest/ingest-pipelines/processors/text-chunking/#fixed-token-length-algorithm) has been available in the text chunking proc…
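For reference, an ingest pipeline using that algorithm looks roughly like the following sketch, based on the linked docs (the pipeline name, field names, and parameter values here are illustrative, not prescriptive):

```python
import json

# Sketch of a text_chunking ingest pipeline body (values are illustrative).
pipeline = {
    "description": "Chunk long text with the fixed token length algorithm",
    "processors": [
        {
            "text_chunking": {
                "algorithm": {
                    "fixed_token_length": {
                        "token_limit": 384,   # max tokens per chunk
                        "overlap_rate": 0.2,  # fraction of overlap between chunks
                        "tokenizer": "standard",
                    }
                },
                # source field -> field that receives the list of chunks
                "field_map": {"body": "body_chunks"},
            }
        }
    ],
}

# PUT this body to _ingest/pipeline/<name> on your cluster (e.g. via curl or requests).
body = json.dumps(pipeline)
```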
-
So, following the recommended install method of cloning the client repository and running ./play-rocm.sh: the install goes smoothly, but as soon as it starts to load kobold itself, it immediately crashe…
-
When going with the example notebook:
```python
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers "trl
```
-
**Description**
HuggingFace's Quanto has implemented 4-bit & 2-bit KV cache quantization compatible with Transformers. See: https://huggingface.co/blog/kv-cache-quantization
I may PR when I've t…
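Per that blog post, recent Transformers releases can already use Quanto as a quantized-cache backend through `generate()`. A minimal sketch, assuming `torch`, `quanto`, and a causal LM are installed (the model/tokenizer and prompt are placeholders supplied by the caller):

```python
# Keyword arguments that request a Quanto-backed quantized KV cache in generate().
quantized_cache_kwargs = {
    "cache_implementation": "quantized",
    "cache_config": {"backend": "quanto", "nbits": 4},  # Quanto also supports nbits=2
}

def generate_with_quantized_cache(model, tokenizer, prompt, **gen_kwargs):
    # Sketch only: assumes a transformers causal LM and its tokenizer are already loaded.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    return model.generate(**inputs, **quantized_cache_kwargs, **gen_kwargs)
```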
-
```
orangepi@orangepi5:~/RK3588-stable-diffusion-GPU$ python ./convert_model_from_pth_safetensors.py --checkpoint_path ./1.safetensors --dump_path ./1/ --from_safetensors --original_config_file ./v1-…
```
-
When I try to load on multiple GPUs, I get the following error:
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-base')
model = AutoModelForSequenceClassification.from_pretrained('BAAI…
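One common way to shard a model across several GPUs is `device_map="auto"`, which `from_pretrained` supports when `accelerate` is installed. A sketch under that assumption (not a guaranteed fix for the error above, since the error text is truncated):

```python
# Sketch: let accelerate place the model's layers across the available GPUs.
# Requires `pip install accelerate`; kwargs collected here for clarity.
multi_gpu_kwargs = {"device_map": "auto"}

def load_reranker(model_name="BAAI/bge-reranker-base"):
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, **multi_gpu_kwargs
    )
    return tokenizer, model
```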
-
**Describe the bug**
When running run.sh to install and run, huggingface-hub is installed and overwrites whatever version I currently have in site-packages for Python 3.10 with huggingface-hub versio…