-
I'm running LM Studio, and while the interface looks great, I'm noticing that it only seems to support very tiny models. My server has two 3090s, so I can run 70-72B models, but the largest listed in…
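For what it's worth, here is a rough back-of-the-envelope check (my own numbers, not anything from LM Studio) of why a quantized 70B model should fit on two 3090s; the bits-per-weight values are assumptions about typical GGUF quants:

```python
# Rough VRAM estimate for a quantized 70B model on two RTX 3090s (2 x 24 GB = 48 GB).
# The bits-per-weight figures are approximate assumptions for common GGUF quants,
# and the fixed overhead stands in for KV cache and runtime buffers.
def model_vram_gb(params_billion: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits / 8 bits-per-byte / 1e9 bytes-per-GB
    return weights_gb + overhead_gb

for quant, bpw in [("Q4_K_M", 4.8), ("Q5_K_M", 5.7), ("Q8_0", 8.5)]:
    need = model_vram_gb(70, bpw)
    print(f"70B {quant}: ~{need:.0f} GB, fits in 48 GB: {need <= 48}")
```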
-
On some setups, training with big models fails:
1. node raises `Unexpected error raised by researcher gRPC server in Sender
-
There are already high-performance phones that can handle big models. I hope I can also use the big models in this app, for better speech recognition.
-
### System Info
- `transformers` version: 4.45.0
- Platform: Linux-5.10.227-219.884.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.14
- Huggingface_hub version: 0.26.2
- Safetensors …
-
**Describe the bug**
The compartmental constraint has a bad layout for bigger models.
**Screenshots**
![Image](https://github.com/user-attachments/assets/b1833e14-d6e9-4175-8bdd-bc7873a157c0)
-
Hi, thank you for the great work and effort.
The current kernels seem to support only the dimensions of 7B models, with hidden dimension 4096.
How can I extend them for larger models like Llama-30B or 65B?
It …
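For reference, these are the hidden dimensions a dimension-specialized kernel would have to handle for the larger checkpoints (listed here as an assumption that the original LLaMA family is meant; head and layer counts also change, only the hidden size is shown):

```python
# Hidden dimensions of the original LLaMA family.  A kernel hard-coded to 4096
# would need to support (at least) these sizes to cover the larger models.
LLAMA_HIDDEN_DIM = {
    "7B": 4096,
    "13B": 5120,
    "30B": 6656,
    "65B": 8192,
}
```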
-
Hey,
I am Christoph, one of the co-founders of LAION.
We are working on open source models like gpt4o and are looking for a better audio codec than Snac, which has some problems with very expressive…
-
### Expected Behavior
On Windows 11, I have a 16GB VRAM NVIDIA card; before, I could run the full-size model of Mochi, i.e. the first example in [https://comfyanonymous.github.io/ComfyUI_examples/mochi/](https:…
-
Hey, I just tried the Alpaca Flatpak; it works perfectly fine with small models.
But whenever I try to download models bigger than 6GB, the progress bar always stops.
Llama 3.1 models, as well as…
-
Hi,
It would be really awesome if you could add support for the bigger models :)
Thanks
Hyper