Open BakaLee opened 10 months ago
Could you please add a function to save output to the gallery? Also, why does the same seed and prompt give a totally different output? And why do some models crash when set to 448x448 or 512x512?
Models crash because they were converted without --fp16 --attention-slicing max. I have models that work at 512x512; you can try converting your model with --fp16 --attention-slicing max to see if it makes any difference.
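The --fp16 part of that flag matters on a phone mainly for size and RAM: half-precision weights take 2 bytes each instead of 4, roughly halving the model's footprint. A quick back-of-envelope sketch (the ~860M UNet parameter count is an assumption, a commonly quoted figure for SD 1.x, not something from this thread):

```python
UNET_PARAMS = 860_000_000  # approximate SD 1.x UNet parameter count (assumption)

def weights_gib(params, bytes_per_param):
    # Weight footprint in GiB for a given precision.
    return params * bytes_per_param / (1024 ** 3)

print(f"fp32 UNet weights: {weights_gib(UNET_PARAMS, 4):.2f} GiB")  # ~3.20 GiB
print(f"fp16 UNet weights: {weights_gib(UNET_PARAMS, 2):.2f} GiB")  # ~1.60 GiB
```

On a device with a few GiB of usable RAM, that halving is often the difference between loading the model at all and being killed by the OS.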
This is because 448x448 and 512x512 require more memory to run; the program crashes because it runs out of memory. I will add the ability to save pictures in the next few days.
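The memory jump between resolutions is easy to see from the attention math: Stable Diffusion's VAE downsamples by 8x, so a 512x512 image becomes a 64x64 latent (4096 tokens), and self-attention memory grows with the square of the token count. A rough estimate, assuming typical SD 1.x values (8x downsampling, 8 heads, fp16) rather than anything specific to this app:

```python
def attention_matrix_bytes(height, width, num_heads=8, bytes_per_elem=2):
    # The UNet's self-attention at the top level runs over (H/8)*(W/8)
    # latent tokens; one full score matrix per head is tokens x tokens.
    tokens = (height // 8) * (width // 8)
    return num_heads * tokens * tokens * bytes_per_elem

mib = lambda b: b / (1024 ** 2)
print(f"448x448: {mib(attention_matrix_bytes(448, 448)):.0f} MiB")  # ~150 MiB
print(f"512x512: {mib(attention_matrix_bytes(512, 512)):.0f} MiB")  # 256 MiB
```

So going from 448 to 512 costs roughly 1.7x more peak attention memory, which is why a device that survives 448x448 can still die at 512x512. Attention slicing (the --attention-slicing max flag above) computes that matrix in slices so the full thing never exists at once.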
Thank you both for the clarification.
This is an amazing project. Hoping the next update advances it further with more inference-speed optimization, fixed seeds, more sampling methods, a save-to-gallery function, and possibly img2img.
Is it possible to convert a model on-device on Android? If yes, how?
It's not possible on Android, but you can use Google Colab or Kaggle to do it. I have 15-20 models already. If you want, I can also post the link to the Colab notebook; for Kaggle you'll need to wait, because I don't have my device with me.
You can use any of these models; I tested some and they work great: https://huggingface.co/Androidonnxfork/test/tree/main/fp16fullonnxsdquantized_in_ort. All of these models are already converted to ORT, so you can simply build the app with them.
Or you can use this prebuilt APK and choose whatever model you wish to run: https://huggingface.co/Androidonnxfork/test/resolve/main/2023-08-05%20-%20%20-%20Fork3sd%20-%20APK(s)%20debug%20generated%20(1).zip
@ZTMIDGO can these links be added to the README? https://huggingface.co/Androidonnxfork/test/resolve/main/2023-08-05%20-%20%20-%20Fork3sd%20-%20APK(s)%20debug%20generated%20(1).zip is the prebuilt APK that supports loading a custom model from storage.
And these are all the models I converted that can be used with the APK (already in ORT format): https://huggingface.co/Androidonnxfork/test/tree/main/fp16fullonnxsdquantized_in_ort
Can you please share the Colab and Kaggle notebooks for converting the models?
Thank you, I have tried out some of these models, but why do some of them still crash at 512x512? Are there any tips to fix this issue?
@BakaLee this is why some models crashed: converting without attention slicing drastically reduces conversion time, but at the cost of higher memory usage at runtime. As for the notebook, I have to check that it still works, and it will take a while for me to retrieve it because I don't have my phone. My Kaggle account got banned, so I have a new notebook that is almost complete but needs some checks.
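What attention slicing actually does can be shown in a few lines of NumPy. This is a toy sketch of the general technique, not the app's or diffusers' actual code: because softmax-attention is independent per query row, the queries can be processed in slices, so only a thin strip of the score matrix exists at any moment instead of the full N-by-N block.

```python
import numpy as np

def full_attention(q, k, v):
    # Reference version: materializes the whole (N x N) score matrix.
    scores = q @ k.T
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v

def sliced_attention(q, k, v, slice_size=64):
    # Identical result, but only a (slice_size x N) strip of scores
    # is alive at once -- peak memory drops by roughly N / slice_size.
    out = np.empty_like(v)
    for start in range(0, q.shape[0], slice_size):
        out[start:start + slice_size] = full_attention(
            q[start:start + slice_size], k, v)
    return out
```

The trade-off is extra passes over K and V in exchange for a much smaller peak allocation, which is exactly why sliced models survive 512x512 on phones while unsliced ones get killed.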
Thank you brother, I will be waiting for your Colab notebook. Hope you find it soon.
@BakaLee try this: https://github.com/Fcucgvhhhvjv/Android-Stable-diffusion-ONNX/blob/master/Full_(fixed)(gpu)(custom)(withsafetensor)torch_2_1_premodel_conversion_script.ipynb
Thank you so much for sharing this notebook, but is there any tutorial video on how to use it? I am very new to this stuff.