ZTMIDGO / Android-Stable-diffusion-ONNX

Run Stable Diffusion inference on an Android phone's CPU

Feature request #10

Closed Fcucgvhhhvjv closed 11 months ago

Fcucgvhhhvjv commented 11 months ago

Hi, I have converted 6-7 models; I actually created a Colab notebook that does it automatically. Most of the models work and give somewhat good results, but Dreamlike Photoreal 2.0 doesn't. Also, how can we load models from storage rather than building them into the app itself? Features to use different samplers and to save images would be great too. Let me know if you want the models. If you can guide me a bit, I'll try modifying the app so that loading from external storage works.

ZTMIDGO commented 11 months ago

If the model cannot run, you can use Netron to view the model's input shapes. If they are consistent with what I use, it can usually run, but other situations cannot be ruled out. I'll be modifying the project this week to allow external selection of models.

Fcucgvhhhvjv commented 11 months ago

I have checked with both attention slicing auto and max while converting the model, but it doesn't work. The model card says it was trained on 768x768 images, and I'm running 256 or 320; maybe that's why?

Also, what about the scheduler? Maybe using a scheduler like DPM would make some drastic changes. I'm looking forward to it.

I have a weak understanding of Java and Android apps, so I can't do much; I'm learning.

ZTMIDGO commented 11 months ago

Perhaps the model only supports input shapes with a 1:1 aspect ratio.

Fcucgvhhhvjv commented 11 months ago

Can you check it when you're free? I'll give you the link for the model.

Hi, I was somehow able to generate 448x448 and 512x512; my device only has 6 GB of RAM.

This was made with the majicMIX model: https://huggingface.co/Androidonnxfork/test/blob/main/Screenshot_2023-07-31-18-43-14-665_com.example.open.diffusion.jpg https://huggingface.co/Androidonnxfork/test/blob/main/Screenshot_2023-07-31-18-36-40-178_com.example.open.diffusion.jpg

ZTMIDGO commented 11 months ago

Can you check it when you're free? I'll give you the link for the model.

Hi, I was somehow able to generate 448x448 and 512x512; my device only has 6 GB of RAM.

This was made with the majicMIX model: https://huggingface.co/Androidonnxfork/test/blob/main/Screenshot_2023-07-31-18-43-14-665_com.example.open.diffusion.jpg https://huggingface.co/Androidonnxfork/test/blob/main/Screenshot_2023-07-31-18-36-40-178_com.example.open.diffusion.jpg

How did you do it? Usually there would be a memory overflow.

Fcucgvhhhvjv commented 11 months ago

I don't know myself; maybe it's the model? I recently enabled my RAM extension so I have an extra 5 GB; maybe that's why? I'll post more pictures, let me create them first. By the way, do you have Discord or something else so we can chat there? Maybe let's make a Discord server; it would give this project a boost.

Fcucgvhhhvjv commented 11 months ago

https://huggingface.co/Androidonnxfork/test/blob/main/Screenshot_2023-07-31-19-25-56-586_com.example.open.diffusion.jpg https://huggingface.co/Androidonnxfork/test/blob/main/Screenshot_2023-07-31-19-29-29-325_com.example.open.diffusion.jpg https://huggingface.co/Androidonnxfork/test/blob/main/Screenshot_2023-07-31-19-44-21-629_com.example.open.diffusion.jpg Here, I can confirm it works for 512 generation.

Fcucgvhhhvjv commented 11 months ago

Maybe you want to see my Colab notebook? Can you tell me which model was the original one that crashed at 512? I'll try to convert it and check whether it crashes at 512.

ZTMIDGO commented 11 months ago

Nice, but it might run slower. I can't use chat software like Discord, because the Chinese government blocks all foreign chat software.

ZTMIDGO commented 11 months ago

Maybe you want to see my Colab notebook? Can you tell me which model was the original one that crashed at 512? I'll try to convert it and check whether it crashes at 512.

You can try this: https://huggingface.co/runwayml/stable-diffusion-v1-5

Fcucgvhhhvjv commented 11 months ago

Yeah, it is indeed slower; 20 steps take about 3-5 minutes at 512x512. Could GPU usage be implemented, like a Vulkan backend? Sad to hear that. I have Instagram and WhatsApp; let me know what you can use.

Fcucgvhhhvjv commented 11 months ago

Maybe you want to see my Colab notebook? Can you tell me which model was the original one that crashed at 512? I'll try to convert it and check whether it crashes at 512.

You can try this: https://huggingface.co/runwayml/stable-diffusion-v1-5

I'll try that tonight.

Fcucgvhhhvjv commented 11 months ago

You can use any resource from this repo for your project. Also, here are the latest models you can use: https://huggingface.co/Androidonnxfork/test/tree/main/fp16fullonnxsdquantized_in_ort . I'll be making the Colab notebook run automatically so it can generate models without me having to do it regularly.

ZTMIDGO commented 11 months ago

Yeah, it is indeed slower; 20 steps take about 3-5 minutes at 512x512. Could GPU usage be implemented, like a Vulkan backend? Sad to hear that. I have Instagram and WhatsApp; let me know what you can use.

Hardware acceleration is currently unavailable; you can learn why here: https://github.com/microsoft/onnxruntime/issues/15629

ZTMIDGO commented 11 months ago

You can use any resource from this repo for your project. Also, here are the latest models you can use: https://huggingface.co/Androidonnxfork/test/tree/main/fp16fullonnxsdquantized_in_ort . I'll be making the Colab notebook run automatically so it can generate models without me having to do it regularly.

sure

Fcucgvhhhvjv commented 11 months ago

Can models be loaded from external storage now? Also, what about saving images and the scheduler?

ZTMIDGO commented 11 months ago

Can models be loaded from external storage now? Also, what about saving images and the scheduler?

Yes, models can now be selected externally

Fcucgvhhhvjv commented 11 months ago

Thank you, I'll build an app for testing. I need some help: the conversion script from the Stable-Diffusion-ONNX-FP16 repo doesn't use the GPU for conversion, and Linux doesn't support DirectML, so conversion is super slow, about 30 minutes per model. Can you check the script, or tell me what's wrong? I tried installing onnxruntime-gpu and changing the execution provider to GPU, but it didn't work in Colab.