TensorStack-AI / OnnxStack

C# Stable Diffusion using ONNX Runtime
Apache License 2.0

Suggestions, new ideas and general talk #3

Closed · Amin456789 closed this issue 12 months ago

Amin456789 commented 1 year ago

Hello! Thank you for this! I'm a huge fan of ONNX as it is very CPU friendly. Could you please try to make AnimateDiff work with ONNX too? It would be great to have text-to-GIF and image-to-GIF with AnimateDiff on ONNX CPU in the future.

Kind regards

saddam213 commented 1 year ago

Hi,

Sorry, I have not heard of AnimateDiff, but if you can explain how it's supposed to work I can try to add it for you :)

Amin456789 commented 1 year ago

Thank you for your fast reply. AnimateDiff is a feature like any other Stable Diffusion feature such as pix2pix or ControlNet. It is text-to-video: it uses prompts to make animated GIFs, and it can also be used to animate our own images. Please search for it on GitHub to learn more about this amazing feature, thank you.

saddam213 commented 1 year ago

Had a quick google, looks awesome!!!

That is something I will definitely add to this repo if I can, however I'm not sure how long it will take.

My next task is getting the bigger models to work, Stable Diffusion 2.1 and the base XL models, as I just bought a 24GB GPU so I can finally develop without waiting on my poor CPU. Then I might tackle this AnimateDiff process :)

Thanks for bringing it to my attention :)

Amin456789 commented 1 year ago

You're welcome, and thank you so much for considering it for the future! A lot of people will love this!

Amin456789 commented 1 year ago

Nice job!

I see you are active and working on inpainting right now! Can't wait for its release. Also, there are some other GitHub ONNX projects that managed to bring pix2pix to ONNX too! Maybe you can get some help from their code...

txt2img, img2img, inpainting, pix2pix, and AnimateDiff in an ONNX C# project like yours = the best option out there!

Thanks for all your hard work, buddy!

saddam213 commented 1 year ago

The inpainting prototype is kind of working: small edits seem to work, but large edits seem to just grey out.

It still has some work to go, but if you want to test it you will need to use the inpainting model https://huggingface.co/runwayml/stable-diffusion-inpainting (this won't work with txt2img or img2img).

If you know of some good repos that would be useful, let me know; it's always good to have more insight.

Amin456789 commented 1 year ago

Yes, there are some good ones I can point to:

- OnnxDiffusersUI by azuritecoin on GitHub: it has working inpainting, including a legacy version, and the legacy version works with normal non-inpainting models too.
- Stable-Diffusion-ONNX-FP16 by Amblyopius on GitHub: it has lots of goodies you can get help from, such as pix2pix.

tonual commented 1 year ago

Let's not limit ourselves. Warp fusion is a true killer. https://www.youtube.com/watch?v=XyYFqjq10nU

saddam213 commented 1 year ago

Pix2Pix and legacy inpainting should be fairly easy to implement now; I started with the non-legacy inpainting as it was the hardest to implement.

Hopefully I will have both done by the weekend, then I will tackle the upscaler and move on to the ControlNet stuff (AnimateDiff etc.).

All the ONNX examples are in Python. I have never used Python, but I'm starting to understand it a bit more, and that's making it a bit easier. There is little to no C# ONNX documentation for Stable Diffusion, or anything really, which is slowing me down.

I want to keep this repo pure C#, with no bindings to Python or C++ libraries.

Amin456789 commented 1 year ago

That's great to hear, can't wait for the new stuff. I know how hard programming and coding is, but please don't give up! Having a good C# ONNX project with lots of features will make your repo unique! By the way, don't forget to make a post about this in the Stable Diffusion section on Reddit! People will love to use it.

saddam213 commented 1 year ago

Unfortunately I don't have a Reddit account or any social media, but feel free to post if you have one.

Amin456789 commented 1 year ago

I'm a lurker in the SD subreddit too :D, just reading stuff, but it'll get there at some point. I'm so happy about this repo. I'm going to rename this topic to something new so people can talk and give new ideas, as we are discussing lots of stuff here.

Amin456789 commented 1 year ago

@JohnClaw have a look at this great repo man

Amin456789 commented 1 year ago

@tonual that is right! warp fusion is sick

JohnClaw commented 1 year ago

> have a look at this great repo man

Thank you for informing me, and many thanks to Saddam213 for creating this tool. I am trying to find an SD_ONNX_CLI.exe or SD_ONNX_GUI.exe; this repo is great, though. P.S.: I tried to get the OnnxStack WebUI to draw a cat, but it failed to do so. No error was shown in the console window; the CPU just stayed idle, and the drawing process didn't start after pressing the "Generate" button.

Amin456789 commented 1 year ago

@bjornlarssen @drago87 you guys should keep an eye on this repo too if you are AMD or CPU users.

saddam213 commented 1 year ago

> Nice job!
>
> I see you are active and working on inpainting right now! Can't wait for its release. Also, there are some other GitHub ONNX projects that managed to bring pix2pix to ONNX too! Maybe you can get some help from their code...
>
> txt2img, img2img, inpainting, pix2pix, and AnimateDiff in an ONNX C# project like yours = the best option out there!
>
> Thanks for all your hard work, buddy!

Inpaint legacy is working a lot better.

For example, I can easily add a tree to an image with a bare-minimum prompt (demo image attached).

Still has some work to go, but the basic idea is up and running

Amin456789 commented 1 year ago

That's great, buddy! Amazing that you added this feature so fast, nice job!

saddam213 commented 1 year ago

> That's great, buddy! Amazing that you added this feature so fast, nice job!

It will work with the standard Stable Diffusion 1.5 model; no need to use the Stable Diffusion inpainting model.

Will get started on Pix2Pix today :)

Amin456789 commented 1 year ago

Great! Thanks for all your efforts!

Amin456789 commented 1 year ago

One request, though: could you please publish the WebUI from time to time for us non-programmers to download? Lots of us don't know how to compile projects, so it would be great to have an updated WebUI build now and then, especially when a new feature is added, thank you. Also, will there be new samplers for inpainting besides DDPM, such as Euler and DDIM?

saddam213 commented 1 year ago

Oh sorry, I thought you were compiling the project as I added new things, my bad.

I have published a Debug build of the WebUI using the current state of the repo: https://github.com/saddam213/OnnxStack/releases/tag/v0.3.2-pre

I will do a proper release this weekend when I update the nuget packages.

From now on I will post WebUI builds like this so non-programmers can test between full releases :)

Amin456789 commented 1 year ago

thank you so so so much!

saddam213 commented 1 year ago

Regarding the schedulers, I have only converted three so far: LMS, Euler Ancestral, and DDPM.

I will need DDIM for some of the new features, so that will most likely be next.

The other schedulers are still not perfect; playing with some of the settings can lead to poor results. I do need to go back and revisit them all at some point. When I converted them I didn't know any Python, but it's been a few weeks now, so going back and retranslating may be easier (hope so anyway, math nightmare).

Let me know if you have any issue getting the WebUI demo up and running :)

Amin456789 commented 1 year ago

Very understandable. I've heard DPM-Solver gives very good results in far fewer steps, so it could be an option too. By the way, our buddy @JohnClaw had a problem. Is your problem solved, John?

JohnClaw commented 1 year ago

> Is your problem solved, John?

No, it remains. Here's a screenshot of the OnnxStack WebUI: [image] As you can see, the "Generate" button was pressed and became inactive, but no image generation happens. CPU load is 7% whereas it should be 100%; RAM usage is 2 GB whereas it should be more than 12 GB. Here's a screenshot of the OnnxStack WebUI console: [image] No errors are displayed, but no image-generation progress is displayed either.

saddam213 commented 1 year ago

Weird. The WebUI is very new and probably still buggy; it might be best to wait for the repo to mature a bit more before relying on it.

Thanks for giving it a shot :)

saddam213 commented 1 year ago

I've added a Discord channel for OnnxStack to my other projects' server for now, in case anyone has Discord and wants to chat about this project.

https://discord.gg/jGGnEmSbnM

Amin456789 commented 1 year ago

Hey guys, hope you have a great day.

A bit off topic, but you guys should use koboldcpp, it is very good; the model I am using is Synthia 7B v1.3 GGUF. koboldcpp is great for anything text-generation related, like chat etc... Another thing I am having so much fun with is AudioCraft Plus, which can generate music for you with MusicGen.

Anyway, the models I always use for Stable Diffusion are Lyriel v1.6 and HASDX. They are good for everything, not to mention Lyriel v1.6 can generate very high quality 256x256 images; just put 3d render, anime, cartoon in the negative prompt and you are good to go.

By the way Adam, in my tests RealSR was the best upscaler; if you can put it in your ONNX project that would be great. Please check it out on GitHub. For some ONNX models, help, and face restoration, see Face-Upscalers-ONNX on GitHub by harisreedhar.

JohnClaw commented 1 year ago

> A bit off topic, but you guys should use koboldcpp

It's good, but it requires a web browser to run, and web browsers consume a lot of RAM. I found a new project, Ava PLS. It doesn't require a web browser and its exe is small. Link: https://www.avapls.com/

JohnClaw commented 1 year ago

> Lyriel v1.6 can generate very high quality 256x256 images

Have you tried it in SD.cpp by leejet? From which site did you download it? Civitai or HF?

Amin456789 commented 1 year ago

Wow, thanks for the heads-up, Ava PLS seems so cool. But something I really dig about koboldcpp is the adventure and story mode and being able to set an avatar for the character; does Ava PLS do the same beyond chat?

Yes, I downloaded Lyriel from Civitai. No, I haven't tried it on sd.cpp yet, but I'm almost sure that when you make a GGML of it, it will work with 256x256 too. Just download the safetensors file and convert it; the quality is very high. Then you can upscale with RealSR.

Amin456789 commented 1 year ago

Also, Lyriel could be very good for AnimateDiff when it is released. We could let it generate 256x256 GIFs, then use Flowframes to extract the frames, upscale them, and use Flowframes again to rebuild the GIF. Flowframes also makes the GIF much, much smoother, as it uses AI to fill in extra frames very quickly.

saddam213 commented 1 year ago

> Hey guys, hope you have a great day.
>
> A bit off topic, but you guys should use koboldcpp, it is very good; the model I am using is Synthia 7B v1.3 GGUF. koboldcpp is great for anything text-generation related, like chat etc... Another thing I am having so much fun with is AudioCraft Plus, which can generate music for you with MusicGen.
>
> Anyway, the models I always use for Stable Diffusion are Lyriel v1.6 and HASDX. They are good for everything, not to mention Lyriel v1.6 can generate very high quality 256x256 images; just put 3d render, anime, cartoon in the negative prompt and you are good to go.
>
> By the way Adam, in my tests RealSR was the best upscaler; if you can put it in your ONNX project that would be great. Please check it out on GitHub. For some ONNX models, help, and face restoration, see Face-Upscalers-ONNX on GitHub by harisreedhar.

I also have a GGUF based Text Completion library available :)

https://github.com/saddam213/LLamaStack

Amin456789 commented 1 year ago

whoa that is so cool!

saddam213 commented 1 year ago

demo: https://www.llama-stack.com/

Amin456789 commented 1 year ago

Just checked it, it is very good. You should make a cute GUI for it for local usage someday.

saddam213 commented 1 year ago

There is already a local GUI available https://github.com/saddam213/LLamaStack/tree/master/LLamaStack.WPF#readme

WPF Application link https://github.com/saddam213/LLamaStack/releases

Similar to OnnxStack: just download a GGUF model and set it in the app's appsettings.json.

Amin456789 commented 1 year ago

Thanks buddy. Is there a dark mode for it? Also, is it a WebUI? And the final question: does it have memory? For example, in koboldcpp we can give the AI a memory for roleplaying... Your GUI seems very lightweight, I like it!

saddam213 commented 1 year ago

No dark mode yet.

It's a WPF UI, not web, but there is also a local WebUI available and a WebAPI.

It does have state-saving abilities, and it allows multiple contexts on a single model.

I doubt it's as advanced as koboldcpp, as I am new to AI; OnnxStack and LLamaStack are my first AI projects.

Amin456789 commented 1 year ago

Thanks mate! Well, you have a very good start, a great AI future ahead of you!

saddam213 commented 1 year ago

Version 0.4.0 https://github.com/saddam213/OnnxStack/releases/tag/v0.4.0

Amin456789 commented 1 year ago

Great, thank you!

Amin456789 commented 1 year ago

@ClashSAN hey buddy, I've seen you almost everywhere when it comes to optimizing Stable Diffusion cpp/ONNX stuff :D. I thought I'd let you know to keep an eye on this repo; also, any tips for helping our buddy Adam here are welcome.

Amin456789 commented 1 year ago

Anyway guys, the changelog for the latest ONNX Runtime 1.16 says: 4-bit quantization support for CPU. I assume that is int4, right? I used to use a quantized int8 HASDX model, which was only a 1 GB model without the safety checker, and the quality was quite good. Now I was thinking maybe OnnxStack should support int4 too, as it would be a 500 MB model or something [I assume it can, as it uses the latest ONNX Runtime]; it would be a huge RAM saver for us CPU users. The only problem with int8 was that some samplers didn't work, but DDIM, Euler, Euler A and DPM-Solver worked very well; in inpainting, though, only DDIM gave good results.

Quantized models are quite good; for example, I am using a GGUF Q3 of Synthia 7B for text generation, and there are reports on sd.cpp that Q5 models are really good for Stable Diffusion.

JohnClaw commented 1 year ago

> Version 0.4.0 https://github.com/saddam213/OnnxStack/releases/tag/v0.4.0

Still can't make it work. I suppose I'm doing something wrong due to lack of technical knowledge. By the way, I made some changes to appsettings.json that may stop OnnxStack from working as it should: I replaced the default paths with custom ones. For example, by default one of the paths inside appsettings.json is "D:\Repositories\stable-diffusion-v1-5\unet\model.onnx"; I changed D:\ to C:\ because my SSD has no D:\ partition. I also moved the file called cliptokenizer.onnx from the OnnxStack WebUI folder to C:\Repositories\stable-diffusion-v1-5\, because there was no such file in C:\Repositories\stable-diffusion-v1-5\ and by default appsettings.json contains a path that looks like D:\Repositories\stable-diffusion-v1-5\cliptokenizer.onnx.

Why does the OnnxStack WebUI look for this file in the stable-diffusion-v1-5 folder when it already exists inside its own folder? Why can't the OnnxStack WebUI automatically detect where the stable-diffusion-v1-5 folder is? Should I change the DirectML parameter in appsettings.json to CPU? I guess we need instructions on how to properly install and configure the OnnxStack WebUI and appsettings.json.

And please make a GUI version of OnnxStack. The WebUI runs in a web browser, and browsers consume a lot of RAM. I have only 15.4 GB of RAM: Windows 11 consumes at least 2 GB, OnnxStack needs 12 GB, and the browser consumes the remaining 1.4 GB. So even if I manage to run the OnnxStack WebUI after properly configuring appsettings.json, I will run out of RAM.

saddam213 commented 1 year ago

Yes, you will need to change the paths in the appsettings.json to where your model files are

Make sure the paths use two backslashes \\ between path segments (a JSON thing)

If you don't have a GPU then yes, change "ExecutionProvider": "DirectML" to "ExecutionProvider": "Cpu"
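
To illustrate, here is a minimal sketch of what the relevant part of appsettings.json could look like after those edits. The section name and the two path property names are placeholders, not the file's real schema; only the double-backslash path style, the ExecutionProvider value, and the keys mentioned in this thread (DeviceId, InterOpNumThreads, IntraOpNumThreads) are taken from the discussion, so keep whatever structure your own file already has:

```json
{
  // .NET's JSON configuration loader accepts // comments in appsettings.json files
  "OnnxStackConfig": {
    // "OnnxStackConfig" and the *OnnxPath key names below are placeholders; keep your file's real ones
    "ExecutionProvider": "Cpu",   // was "DirectML"; use "Cpu" when running without a GPU
    "DeviceId": 0,                // keep whatever values your file already has for these
    "InterOpNumThreads": 0,
    "IntraOpNumThreads": 0,
    "TokenizerOnnxPath": "C:\\Repositories\\stable-diffusion-v1-5\\cliptokenizer.onnx",
    "UnetOnnxPath": "C:\\Repositories\\stable-diffusion-v1-5\\unet\\model.onnx"
  }
}
```

The two things that actually matter are doubling every backslash and switching "DirectML" to "Cpu"; leave the thread settings at their existing values unless you have a reason to change them.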

JohnClaw commented 1 year ago

> Yes, you will need to change the paths in the appsettings.json to where your model files are
>
> Make sure the paths use two backslashes \\ between path segments (a JSON thing)
>
> If you don't have a GPU then yes, change "ExecutionProvider": "DirectML" to "ExecutionProvider": "Cpu"

Thank you for the answer. I had already understood both things yesterday and edited appsettings.json accordingly, but it didn't help: the OnnxStack WebUI still doesn't generate images. Here's a screenshot that shows part of appsettings.json: [image] Should I somehow change DeviceId, InterOpNumThreads, IntraOpNumThreads, and ExecutionMode? And what about cliptokenizer.onnx? Was I right to move this file from the main OnnxStack WebUI folder to C:\Repositories\stable-diffusion-v1-5\?

Amin456789 commented 1 year ago

In my opinion a GUI is much better, since all the resources go to SD. A nice GUI, and to solve all of this there should be a models folder inside the app's folder that the models go into, so it doesn't need the JSON at all, like many other GUIs that just look at that folder when you run them. The same goes for CPU: an option in the GUI that automatically switches it between GPU and CPU in the JSON would be great.

I still haven't tested this WebUI, as I'm still waiting for new stuff to test, so I can't comment. But did you install .NET 7 on your system, John? Maybe that is the problem. Or maybe you have to type CPU in capital letters; in some other GUIs I had to manually change DML to CPU, and typing it in caps as CPU was important.

Amin456789 commented 1 year ago

By the way John, leejet still isn't around to update sd.cpp, and I really need a batch count; he hasn't updated it for a month. Do you have a way to get sd.cpp to generate images in a row, one after another? Like a .bat command or something. I tried to write a .bat that would wait for sd.exe to close in Task Manager so it could run my generate .bat again, but I failed.
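
As a possible workaround for the missing batch count, a plain .bat loop might already do what you want: cmd runs console programs one at a time, so each sd.exe call finishes before the next one starts and there is no need to watch Task Manager. This is only a rough sketch under the assumption that sd.exe is the sd.cpp executable and that you paste your usual command-line arguments where indicated (the placeholder below is not a real flag):

```bat
@echo off
rem Rough sketch: run sd.exe several times in a row.
rem cmd waits for each console program to exit before running the next line,
rem so no Task Manager polling is needed.
set COUNT=5

for /L %%i in (1,1,%COUNT%) do (
    echo Generating image %%i of %COUNT% ...
    rem Replace the placeholder with your usual sd.exe arguments (prompt, model, output, etc.)
    sd.exe YOUR_USUAL_ARGUMENTS_HERE
)

echo Done.
pause
```

Note that if sd.exe always writes to the same output filename, each run will overwrite the previous image, so you may need to vary the output path per run.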