TensorStack-AI / OnnxStack

C# Stable Diffusion using ONNX Runtime
Apache License 2.0

Support for fp16 LCM Dreamshaper #9

Closed Amin456789 closed 1 year ago

Amin456789 commented 1 year ago

Hi!

I just downloaded this fp16 model from here: https://huggingface.co/aislamov/lcm-dreamshaper-v7-onnx/tree/main

It loads very fast and fine, but when I press generate it stops immediately. I mean the model stays loaded, but it won't generate anything. Could you take a look at it @saddam213 @dakenf, please? I am using the CPU, so I don't know if it's a GPU-optimized model or not.

jdluzen commented 1 year ago

Looks like it doesn't like the unet's timestep input. The fp16 model's is a float, while the original's is a long.
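For context, here is a minimal sketch of the kind of check involved, assuming the Microsoft.ML.OnnxRuntime API and the standard "timestep" input name; it is illustrative only, not the exact fix that landed in the repo:

```csharp
// Minimal sketch (not the actual OnnxStack fix): pick the timestep tensor's
// element type from the unet's input metadata, since fp16 exports expect a
// float timestep while the original export expects an int64 (long).
using System.Collections.Generic;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

public static class TimestepHelper
{
    public static NamedOnnxValue CreateTimestepTensor(
        IReadOnlyDictionary<string, NodeMetadata> unetInputMetadata, int timestep)
    {
        var dimensions = new[] { 1 };
        var elementType = unetInputMetadata["timestep"].ElementType;

        return elementType == typeof(float)
            ? NamedOnnxValue.CreateFromTensor("timestep", new DenseTensor<float>(new float[] { timestep }, dimensions))
            : NamedOnnxValue.CreateFromTensor("timestep", new DenseTensor<long>(new long[] { timestep }, dimensions));
    }
}
```

With a raw InferenceSession this would be called as `CreateTimestepTensor(session.InputMetadata, timestep)`; in OnnxStack the metadata would come from the model service's GetInputMetadata instead.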

jdluzen commented 1 year ago

Yeah that was it: LatentConsistencyDiffuser.cs:198.

saddam213 commented 1 year ago

That should be easy enough to support; let me see if I can squeeze it into tomorrow's release.

jdluzen commented 1 year ago

@saddam213 I've been trying to get a PR going, but I don't have access to the IOnnxModel in DiffuseAsync for _onnxModelService.GetInputMetadata. Is that available and I'm just not seeing it? Or will I have to edit OnnxModelService?

saddam213 commented 1 year ago

Sorry, I missed your PR and had already committed a fix: 38f60b6

GetInputMetadata is accessible and worked perfectly; our implementations were pretty much the same.

Thanks for the PR

saddam213 commented 1 year ago

The latest commit will fix the immediate issue for both pipelines. I added the functionality to both diffuser base classes, but I think the implementation should be moved to a shared place, as I assume new pipelines will also need this.

Perhaps we need a static helper class for methods like these, as DecodeLatents is the same across both as well
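As a rough illustration of that idea (not OnnxStack's actual implementation), a shared static helper could host DecodeLatents; the "latent_sample" input name and the 0.18215 scale factor below are assumptions based on the standard diffusers VAE-decoder ONNX export:

```csharp
// Rough sketch of a shared helper, not OnnxStack's actual code. Assumes the
// diffusers-style VAE decoder export ("latent_sample" input, image output in
// [-1, 1]) and the usual 0.18215 latent scale factor.
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

public static class DiffuserHelpers
{
    public static Tensor<float> DecodeLatents(InferenceSession vaeDecoder, DenseTensor<float> latents)
    {
        // Undo the latent scaling applied during diffusion.
        const float scale = 1.0f / 0.18215f;
        var scaled = new DenseTensor<float>(latents.Dimensions);
        for (int i = 0; i < latents.Length; i++)
            scaled.SetValue(i, latents.GetValue(i) * scale);

        var inputs = new[] { NamedOnnxValue.CreateFromTensor("latent_sample", scaled) };
        using var results = vaeDecoder.Run(inputs);

        // Clone so the data outlives the disposed native output buffer.
        return results.First().AsTensor<float>().Clone();
    }
}
```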

Amin456789 commented 1 year ago

Oh nice, thanks guys! Can't wait for the update to test it out.

dakenf commented 1 year ago

Hello everyone! If you don't mind, I'll give you some tips on model conversion based on this doc.

Long story short, if you run the fusion optimizer on the model, it will combine many ops into one, going from 3k+ ops down to 1k+. That leads to lower VRAM/RAM usage (fewer GPU buffers allocated for each node input/output) and better performance, since CUDA and DML have fused attention kernels.

I've been using this script (with some changes); it already has optimized settings for DML: https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/conv_sd_to_onnx.py

[screenshot of the conversion script options; the last four lines disable BiasAdd, BiasSplitGelu, packed KV, and packed QKV]

Those last four options (disabling BiasAdd, BiasSplitGelu, packed KV, and packed QKV) are required if you want the model to work on CPU: the Bias* kernels are not implemented for CPU in ONNX Runtime, and packed KV/QKV for MultiHeadAttention is not supported on CPU either.

With these optimizations and fp16 you should be able to run the unet with less than 5 GB of VRAM. You can check the results with this model I've converted for WebGPU: https://huggingface.co/aislamov/stable-diffusion-2-1-base-onnx/tree/main

But if you want maximum performance, you can create two revisions of the model on Hugging Face: one with max GPU optimizations and another for CPU.

Feel free to ask me any questions if you have any!

Amin456789 commented 1 year ago

Hi! Thank you so much for sharing this. Sadly I have no idea how to code, so I can't do it myself. Could you please make some fp16 models for CPU too? Lyriel v16, Deliberate v2 or v3, and epiCRealism are a few good ones; any of them would be great, and I would like to test them out in OnnxStack if possible. Thanks!

https://huggingface.co/nyxia/lyriel16/tree/main or https://civitai.com/models/22922/lyriel
https://civitai.com/models/25694/epicrealism
https://huggingface.co/stablediffusionapi/deliberate-v3/tree/main

Also, I assume this LCM model is for GPU only? Could you please make a CPU-optimized one too? Either way, I will test this one on CPU tomorrow to see how it goes!

Amin456789 commented 1 year ago

LCM fp16 now works very well and it is so fast! But I'm not entirely sure what is going on: I used DirectML and set the device to 0 for the unet and the rest to 1, so I think it is using my AMD and Intel GPUs rather than the CPU this time (in Task Manager my Intel graphics usage goes to 99%, so it is mostly that GPU).
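For reference, a minimal sketch of what those device numbers correspond to when using the DirectML execution provider directly (assuming the Microsoft.ML.OnnxRuntime.DirectML package; the model paths are placeholders, and this is not OnnxStack's configuration code):

```csharp
// Minimal sketch, not OnnxStack's configuration code: with the DirectML
// execution provider each session can be pinned to an adapter by index,
// so "device 0" and "device 1" map to different GPUs. Which index is the
// AMD or Intel GPU depends on the system's adapter ordering.
using Microsoft.ML.OnnxRuntime;

var unetOptions = new SessionOptions();
unetOptions.AppendExecutionProvider_DML(0);   // unet on adapter 0

var otherOptions = new SessionOptions();
otherOptions.AppendExecutionProvider_DML(1);  // remaining models on adapter 1

using var unet = new InferenceSession("unet/model.onnx", unetOptions);                  // placeholder path
using var textEncoder = new InferenceSession("text_encoder/model.onnx", otherOptions);  // placeholder path
```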

I'll close this topic if it's OK now.