-
With PR https://github.com/NVIDIA/NeMo-Curator/pull/58, we have cleaned up the model initialization quite a bit. We still need to make the classifiers work directly with HuggingFace without downloadi…
-
I can run LLaVA with the code below ⬇️. How can I easily switch it to use the LLaVA-Med weights?
```python
from PIL import Image
import requests
from transformers import AutoProcessor, LlavaForConditionalGenera…
```
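If a LLaVA-Med checkpoint is available in transformers format, switching should mostly be a matter of changing the repo id passed to `from_pretrained`. A minimal sketch, with two assumptions of mine: the repo id `microsoft/llava-med-v1.5-mistral-7b` is the one you want, and that checkpoint is loadable by `LlavaForConditionalGeneration` (the official LLaVA-Med release targets the original LLaVA codebase, so a format conversion may be needed first):

```python
def load_llava(model_id: str):
    """Load a LLaVA-style checkpoint from the Hub.

    Only works for repos stored in transformers (llava-hf style) format.
    """
    # Deferred import so the sketch can be read/imported even where
    # transformers is not installed.
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaForConditionalGeneration.from_pretrained(model_id)
    return processor, model

# Hypothetical usage -- same code path, different repo id:
# processor, model = load_llava("microsoft/llava-med-v1.5-mistral-7b")
```

If the LLaVA-Med weights are only published in the original LLaVA layout, they would first need converting with the llava-hf conversion scripts before this loader applies.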
-
Starting SD3 medium low VRAM...
Python command check :OK
Python version: 3.12.4
C:\SD\sd3-low-vram\env\Lib\site-packages\diffusers\models\transformers\transformer_2d.py:34: FutureWarning: `Transfor…
-
When xinference is installed with Docker, you can use `-v /.cache/huggingface:/root/.cache/huggingface` to change the default location of Hugging Face models, but after installing with pip, setting `HF_HOME` has no effect: a `huggingface` directory is still created under `XINFERENCE_HOME` and models are downloaded into it. How can I point the Hugging Face model directory to a specified location?
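For reference, the standard Hugging Face environment variables are `HF_HOME` (the whole Hugging Face home: cache, token, etc.) and `HUGGINGFACE_HUB_CACHE` (just the model cache). A minimal sketch, assuming they are exported in the same shell session that launches xinference; the `$HOME/hf-home` path is a placeholder, and note that xinference's own `XINFERENCE_HOME` logic may still override where it looks for models:

```shell
# Point the Hugging Face cache somewhere explicit *before* starting xinference.
export HF_HOME="$HOME/hf-home"
export HUGGINGFACE_HUB_CACHE="$HF_HOME/hub"
mkdir -p "$HUGGINGFACE_HUB_CACHE"
# xinference-local ...   # start xinference in this same session
```

If xinference still writes into `XINFERENCE_HOME/huggingface`, that would suggest it sets the cache path itself, in which case this is worth raising as a bug.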
-
**Is your feature request related to a problem? Please describe.**
**Describe the solution you'd like**
Finalize the design for the schema of the middle layer that transforms the outer MR logical model to [KFM…
-
Hi,
I'm new to LangChain and LLMs.
I've recently deployed an LLM using the Hugging Face text-generation-inference library on my local machine.
I've successfully accessed the model using …
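Since the snippet is cut off, here is a hedged sketch of one common way to wire a local text-generation-inference server into LangChain. Both the `langchain-community` package and the `http://localhost:8080/` URL are assumptions of mine, not from the question:

```python
def make_tgi_llm(server_url: str = "http://localhost:8080/"):
    """Build a LangChain LLM backed by a local text-generation-inference server.

    Assumes `pip install langchain-community text-generation`; the URL is a
    placeholder for wherever the TGI server is listening.
    """
    # Deferred import so the sketch can be read without LangChain installed.
    from langchain_community.llms import HuggingFaceTextGenInference

    return HuggingFaceTextGenInference(
        inference_server_url=server_url,
        max_new_tokens=256,
        temperature=0.7,
    )

# Hypothetical usage (requires the TGI server to be running):
# llm = make_tgi_llm()
# print(llm.invoke("Hello, who are you?"))
```

The resulting object plugs into chains like any other LangChain LLM.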
-
Is https://huggingface.co/Alpha-VLLM/Lumina-T2Audio not available to clone?
-
Currently, there are two problems:
- `eval.batch_size` is used for spinning up multiple environments (see [code](https://github.com/huggingface/lerobot/blob/main/lerobot/common/envs/factory.py#L51)…
-
**Describe the bug**
HuggingFace Embedding Interface API: Issue related to Deserialization
So I was using [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) embedding model from huggingf…
-
Updated 2024-07-01.
Datasets:
- Used for evaluation:
- MMLU: https://huggingface.co/datasets/hails/mmlu_no_train
- ARC-Challenge: https://huggingface.co/datasets/allenai/ai2_arc
- HellaSwag: h…