-
### System Info
- CPU architecture x86_64
- Host memory size 32 GB
- GPU Nvidia RTX 2060
- GPU memory size 12 GB
- TensorRT-LLM v0.10.0
### Who can help?
_No response_
### Information
- [ ] Th…
-
Hi team,
Thanks for open-sourcing your amazing effort.
I have a question: I managed to download the dataset you provided hosted at https://huggingface.co/datasets/OpenGVLab/SA-Med2D-20M, and loo…
-
# Description
We would like to add the integration between TestSpark and the HuggingFace platform to allow the use of more (smaller) models for test generation. Using models should be possible both u…
-
Hi, thank you for your amazing work!
May I ask how to load a pretrained model directly from **huggingface/diffusers**? I also used the conversion script from diffusers to convert from the diffusers format to…
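For reference, loading a pretrained pipeline straight from the Hub with diffusers usually looks like the sketch below. This is a minimal, hedged example: the model id passed in the usage comment is a placeholder, not one mentioned in this issue.

```python
def load_pretrained_pipeline(model_id: str):
    """Load a diffusers pipeline directly from the Hugging Face Hub.

    The import is kept inside the function so this sketch stays
    self-contained even where diffusers is not installed.
    """
    from diffusers import DiffusionPipeline  # pip install diffusers

    # from_pretrained accepts either a Hub repo id or a local directory
    # (e.g. the output of the diffusers conversion script) and downloads
    # the weights on first use.
    return DiffusionPipeline.from_pretrained(model_id)

# Usage (placeholder id; downloads weights on first call):
# pipe = load_pretrained_pipeline("some-org/some-diffusion-model")
```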
-
I dived in immediately, asking questions interactively to participate in the course, but the responses were quite off the mark. I had thought the RAG was focused on this repo and the course.
What should I …
-
### System Info
I have a whisper-large-v2 model pretrained on my custom dataset, and tried to build it with TensorRT-LLM.
But I got `[Errno 2] No such file or directory: '/workspace/models/whisper-large-v…
-
When running ComfyUI, "Model not found locally, downloading from HuggingFace..." appears; it takes up VRAM for a long time and slows ComfyUI down to 50 s/it.
At runtime, "Model not found locally, downl…" appears.
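One common way to avoid the "downloading from HuggingFace" delay at runtime is to pre-fetch the repo so the app finds the model locally. A minimal sketch using `huggingface_hub` (the repo id and directory below are placeholders, not taken from this issue):

```python
def prefetch_model(repo_id: str, local_dir: str) -> str:
    """Download a Hub repo ahead of time so it is found locally later.

    Import is inside the function so the sketch is importable even
    without huggingface_hub installed.
    """
    from huggingface_hub import snapshot_download  # pip install huggingface_hub

    # snapshot_download fetches all files of the repo into local_dir
    # and returns the path; subsequent runs reuse the cached copy.
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)

# Usage (placeholders):
# prefetch_model("some-org/some-model", "/models/some-model")
```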
-
### System Info
from transformers import AutoTokenizer, AutoModelForCausalLM
All dependencies are at their latest versions.
### Who can help?
_No response_
### Information
- [ ] The official example s…
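As a point of reference, the usual loading pattern with those two Auto classes is roughly the following. This is a hedged sketch: the model id in the usage comment is a placeholder, not one named in this issue.

```python
def load_model_and_tokenizer(model_id: str):
    """Load a causal LM and its tokenizer from the Hugging Face Hub.

    Imports live inside the function so this sketch is importable
    even where transformers is not installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Both calls accept a Hub repo id or a local checkpoint directory.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return model, tokenizer

# Usage (placeholder id; downloads weights on first call):
# model, tokenizer = load_model_and_tokenizer("some-org/some-causal-lm")
```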
-
### 🚀 The feature, motivation and pitch
The configuration files on HuggingFace may have missing information (e.g. #2051) or contain bugs (e.g. #4008). In such cases, it may be necessary to provide/…
-
https://huggingface.co/google/gemma-2-9b-it
https://huggingface.co/google/gemma-2-27b-it
Both use `AutoModelForCausalLM`. They'll probably be on OpenRouter soon. I will post an update when they ar…