-
| Dispatch Type | Shape | Compilation Time [ms] | Execution Time [ms] |
| ------------- | ------------- | ------------- | ------------- |
| matmul | 256x65536x512 | 11968 | 1233 |
| matmul | 128x2…
-
### OpenVINO Version
2023
### Operating System
Windows
### Device used for inference
CPU
### Framework
None
### Model used
https://huggingface.co/segmind/Segmind-Vega…
-
In the current implementation ([leejet/stable-diffusion.cpp@`4a6e36e`/stable-diffusion.h#L121](https://github.com/leejet/stable-diffusion.cpp/blob/4a6e36edc586779918535e12b4fbe0583044ee6f/stable-diffu…
-
In the VQ-VAE model, you use both n_codes and d_latent. May I ask what is the difference between both?
https://github.com/dvruette/figaro/blob/1c6262308c8d4cf4a7657112af20ae8040d267c0/src/models/va…
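Not the author, but for context: in a standard VQ-VAE, `n_codes` is the number of entries in the discrete codebook, while `d_latent` is the dimensionality of each code vector, so the codebook is a matrix of shape `(n_codes, d_latent)`. A minimal NumPy sketch of nearest-neighbor quantization (illustrative only, not figaro's actual implementation; all names here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_codes, d_latent = 512, 64          # codebook size vs. vector dimension

# The codebook: n_codes learnable vectors, each of dimension d_latent.
codebook = rng.normal(size=(n_codes, d_latent))

def quantize(z):
    """Map each d_latent-dim encoder output to its nearest codebook entry."""
    # z: (batch, d_latent); squared distances to all n_codes entries
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (batch, n_codes)
    idx = d.argmin(axis=1)           # discrete code indices in [0, n_codes)
    return codebook[idx], idx        # quantized latents (batch, d_latent), indices

z = rng.normal(size=(8, d_latent))   # fake encoder outputs
z_q, idx = quantize(z)
print(z_q.shape, idx.shape)          # (8, 64) (8,)
```

So increasing `n_codes` enlarges the discrete vocabulary, while increasing `d_latent` makes each individual code more expressive.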
-
Thank you for open-sourcing this project and for the guidance.
I ran into some problems when generating a video.
CogVideoX-2b
├── LICENSE
├── model_index.json
├── README.md
├── README_zh.md
├── schedul…
-
### Feature description
Right now VAE selection under 'Execution & Models' settings applies to all models, so while a user may pick 'fixFP16ErrorsSDXLLowerMemoryUse_v10.safetensors' for SDXL models, …
-
Inpaint Anything - ERROR - Could not found the necessary `safetensors` weights in {'unet/diffusion_pytorch_model.safetensors', 'safety_checker/pytorch_model.bin', 'text_encoder/model.safetensors', 'v…
-
When I load the Q3 t5xxl and clip models, the cpp reports them as f16 and the VAE as f32. These are wrong, and this causes Termux to crash. The Flux model is reported correctly: if I use Flux Q2 it shows Q2. Please fix.
Another issue: th…
-
When using fp16 Flux + fp16 t5xxl, generation works just fine, but when it reaches the VAE decoding stage it allocates around 10 GB of my RAM, overflowing into the SSD and making the PC super un…
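A possible mitigation: stable-diffusion.cpp has a VAE tiling option that decodes the latent in tiles to cap peak memory at the decode stage. A sketch of how I would try it (flag names taken from the sd CLI help; model paths are placeholders, so verify against `./sd --help` for your build):

```shell
# Decode the VAE in tiles to reduce peak RAM during the decode stage.
# All file paths below are placeholders.
./sd --diffusion-model flux-dev-fp16.gguf \
     --t5xxl t5xxl_fp16.safetensors \
     --vae ae.safetensors \
     --vae-tiling \
     -p "a photo of a cat" -o output.png
```

Tiling trades some decode speed for a much lower memory high-water mark, which may avoid the swap thrashing described above.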
-
### Package
I want to add the path to a checkpoint in the extra_model_paths.yaml file in ComfyUI,
but when I modify the file, it doesn't work as expected.
Moreover, the modified data gets reverte…
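For comparison, a minimal entry of the kind I would expect to work, based on the `extra_model_paths.yaml.example` shipped with ComfyUI (all paths are placeholders; YAML indentation matters, and ComfyUI only reads this file at startup, so it must be restarted after edits):

```yaml
comfyui:
    base_path: path/to/your/models/root/
    checkpoints: models/checkpoints/
    vae: models/vae/
    loras: models/loras/
```

Each key under the top-level section maps a model folder type to a directory relative to `base_path`; the file itself must live in ComfyUI's root directory.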