-
When you trained Chronos from T5, initialized with random or language-model weights, did you fine-tune `'encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight'`, or while checkpointing, …
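To make the question above concrete, here is a minimal, hypothetical sketch (the helper name and the `finetune_embeddings` flag are illustrative, not part of Chronos) of how the three embedding/output parameters named above could be excluded from fine-tuning by filtering parameter names:

```python
# Parameter names follow the standard Hugging Face T5 naming; treating them
# as a single tied embedding/output group is an assumption for illustration.
TIED_PARAMS = {
    "encoder.embed_tokens.weight",
    "decoder.embed_tokens.weight",
    "lm_head.weight",
}

def trainable_params(param_names, finetune_embeddings=False):
    """Return the parameter names that should receive gradients.

    With finetune_embeddings=False, the tied embedding/output weights
    are frozen (left out of the trainable set).
    """
    if finetune_embeddings:
        return list(param_names)
    return [n for n in param_names if n not in TIED_PARAMS]

names = [
    "encoder.embed_tokens.weight",
    "encoder.block.0.layer.0.SelfAttention.q.weight",
    "lm_head.weight",
]
print(trainable_params(names))
# → ['encoder.block.0.layer.0.SelfAttention.q.weight']
```

In a real training loop the same filter would be applied by setting `requires_grad = False` on the frozen parameters before building the optimizer.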
-
I tried using this, but I couldn't figure out what to download or how to configure the program.
As described in the README, I used `Unet Loader (GGUF)` to load https://huggingface.co/city96/FLUX.1-s…
-
I [found](https://gist.github.com/city96/30743dfdfe129b331b5676a79c3a8a39) the `Force/Set Clip Device` node by searching, but it's unclear how you're actually getting sub-10 GiB VRAM usage. Using what I b…
-
Hi @NThakur20, I was wondering whether we can train a T5 model; when I load a T5 model from HF, there seems to be an error.
-
I followed the installation instructions in the README and am running CUDA 11.8. However, when I load the model, I get a "segmentation fault (core dumped)" error. This happens whether I specify the device as …
-
When I try to test the Q8 GGUF model of FLUX.1-dev, it always kills the instance, no matter what I do.
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded .…
-
There is a promising new model:
https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux
It would be good to take this model's pipeline from the diffusers package and add a canvas and attention …
-
```
Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0]
Version: f2.0.1v1.10.1-previous-260-gaadc0f04
Commit hash: aadc0f04c48eb19475752a4206420ea2004e2f42
Launching Web UI with arguments: --…
-
Does this support the Flan-T5 model?
Thanks
-
### Feature request
Recently, we have added the ability to load `gguf` files within [transformers](https://huggingface.co/docs/hub/en/gguf).
The goal was to offer the possibility to users …
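As background for the feature request above, a minimal sketch of what a GGUF file looks like at the byte level may help. Per the GGUF file-format spec, a file starts with the 4-byte magic `GGUF`, then a little-endian version, tensor count, and metadata key-value count; the function below is a hypothetical header parser, shown against a synthetic header rather than a real model file:

```python
import struct

GGUF_MAGIC = b"GGUF"  # 4-byte magic at the start of every GGUF file

def read_gguf_header(data: bytes):
    """Parse the fixed-size GGUF header: magic, version, tensor count, KV count."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    # <IQQ = little-endian uint32 version, uint64 tensor count, uint64 KV count
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensors": tensor_count, "metadata_kv": kv_count}

# Build a tiny synthetic header to demonstrate (version 3, 2 tensors, 5 KV pairs).
header = GGUF_MAGIC + struct.pack("<IQQ", 3, 2, 5)
print(read_gguf_header(header))
# → {'version': 3, 'tensors': 2, 'metadata_kv': 5}
```

The metadata key-value section that follows the header is what lets loaders recover the model architecture and tokenizer without a separate config file.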