vitoplantamura / OnnxStream

Lightweight inference library for ONNX files, written in C++. It can run SDXL on a RPI Zero 2 but also Mistral 7B on desktops and servers.
https://yolo.vitoplantamura.com/

Logging needed #18

Open ThomAce opened 11 months ago

ThomAce commented 11 months ago

Hi,

My idea is to implement some sort of logging mechanism. I added my own take of course, but it would be beneficial to implement light logging. Would you consider adding it, @vitoplantamura?

Main purpose of the logging: I can automagically save what I have executed and created. This helped me "re-generate" the image after a previous kernel panic on my RPi.

#include <fstream>
#include <iostream>
#include <string>

// Appends one line to log.txt in the current working directory.
void writeLog(const std::string& lines)
{
    try
    {
        std::ofstream fw("log.txt", std::ofstream::app);

        if (fw.is_open())
        {
            fw << lines << "\n";
            fw.close();
        }
        else std::cout << "Problem with opening file" << std::endl;
    }
    catch (const std::exception& e)
    {
        std::cout << e.what() << std::endl;
    }
}

in main:

std::cout << "----------------[start]------------------" << std::endl;
    writeLog("----------------[start]------------------");
    std::cout << "positive_prompt: " << positive_prompt << std::endl;
    writeLog("positive_prompt: " + positive_prompt);
    std::cout << "negative_prompt: " << negative_prompt << std::endl;
    writeLog("negative_prompt: " + negative_prompt);
    std::cout << "output_png_path: " << output_png_path << std::endl;
    writeLog("output_png_path: " + output_png_path);
    std::cout << "steps: " << steps << std::endl;
    writeLog("steps: " + std::to_string(steps));
    std::cout << "seed: " << seed << std::endl;
    writeLog("seed: " + std::to_string(seed));

    std::cout << "----------------[prompt]------------------" << std::endl;

    std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();

    auto [cond, uncond] = prompt_solver(positive_prompt, negative_prompt);

    std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();

    std::cout << "DONE!     " + std::to_string(std::chrono::duration_cast<std::chrono::seconds>(end - begin).count()) + + "s" << std::endl;

    std::cout << "----------------[diffusion]---------------" << std::endl;
    ncnn::Mat sample = diffusion_solver(seed, steps, cond, uncond);
    std::cout << "----------------[decode]------------------" << std::endl;
vitoplantamura commented 11 months ago

hi,

I guess you can already achieve this functionality with stdout redirection, i.e. with the ">" operator.

Furthermore, with the "tee" command you can save to file and simultaneously view on screen.

What do you think about it? Is this enough?

Vito

PS: My RPi Zero 2 has never crashed while generating an image. At most, if there is insufficient memory, the OnnxStream process is killed and the "Killed" message is displayed. Maybe you could try a different power supply...

ThomAce commented 11 months ago

I can achieve that, but I wanted to take a slightly different approach because I wanted some more sophisticated logging. But never mind, I made my own fork which contains this change. The next thing I want to implement is the seed as a parameter. Now I'm working on a very simple web page for making the inputs, etc... What you have done is brilliant, by the way, and thank you again!
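
(Sketch for readers of this thread: the duplicated std::cout / writeLog pairs in the snippet above can be collapsed by "tee-ing" a single output stream to both the console and log.txt, the in-process equivalent of the tee command Vito mentioned above. This is only a minimal illustration; the TeeBuf name and the log.txt path are assumptions, not part of OnnxStream or this fork.)

#include <fstream>
#include <iostream>
#include <streambuf>

// Minimal "tee" stream buffer: every character written is forwarded to two buffers.
class TeeBuf : public std::streambuf
{
public:
    TeeBuf(std::streambuf* a, std::streambuf* b) : m_a(a), m_b(b) {}

protected:
    int_type overflow(int_type c) override
    {
        if (traits_type::eq_int_type(c, traits_type::eof()))
            return traits_type::not_eof(c);
        const int_type r1 = m_a->sputc(traits_type::to_char_type(c));
        const int_type r2 = m_b->sputc(traits_type::to_char_type(c));
        return (traits_type::eq_int_type(r1, traits_type::eof()) ||
                traits_type::eq_int_type(r2, traits_type::eof()))
                   ? traits_type::eof() : c;
    }

    int sync() override
    {
        return (m_a->pubsync() == 0 && m_b->pubsync() == 0) ? 0 : -1;
    }

private:
    std::streambuf* m_a;
    std::streambuf* m_b;
};

int main()
{
    std::ofstream log("log.txt", std::ofstream::app);
    TeeBuf tee(std::cout.rdbuf(), log.rdbuf());
    std::ostream out(&tee);

    // One write goes to both the console and log.txt.
    out << "----------------[start]------------------" << std::endl;
    return 0;
}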

This is what I have made with my RPi (some color retouching done).

[image: robot_enhanced]

In the meantime I'm testing the webpage. [image]

PS: My Pi crashed for an unknown reason this time. This doesn't normally happen. Once or twice I accidentally pulled out the power cord, but this time I saw a sudden reboot. I have not found the root cause for that yet. But it's not too important as of now.

vitoplantamura commented 11 months ago

The image is really cool and the idea of making a webpage is really nice!

Keep me updated on your progress,

Vito

ThomAce commented 11 months ago

Glad to hear that you like it. The first test succeeded. Next step:

As a further refinement step, I need to implement the custom seed functionality for SDXL as well.

I'm using a template from the Creative Time page, but in the background it relies on PHP and shell scripting.

[image]

vitoplantamura commented 11 months ago

Very very cool!

Vito

ThomAce commented 11 months ago

Hi Vito,

The testing is going really well. This was generated on my Pi 4 during the night with 30 iterations. Now I'm trying to refine it. Some function optimization is still needed on the webpage, but I'm on a good track. As a next step, I'm going to make it work on Windows as well (a command execution method needs to be added...). I also still need to modify the SD XL 1.0 seed option. But my fork is working pretty well and the executable is very small (~760 KB).

[image]

ThomAce commented 11 months ago

First generated image with XL: [image]
Second one generated overnight with the RPi: [image]
Refined a bit on Windows (30 steps): [image]

Testing the webpage on the RPi at the moment. Project Save / Load / New are working, as are the Image Download (most obvious stuff), Refine, and Delete (image) features. So far everything works as expected. Would you like to test it?

vitoplantamura commented 11 months ago

Very very cool indeed!

Can you put everything in a repo?

Unfortunately I am very busy at the moment, so I cannot guarantee you that I will be able to test it immediately. However, if you publish the project on GitHub, I can include a link to your repo in the main README of OnnxStream (in a section with a name like "Related Projects")!

Thanks, Vito

ThomAce commented 11 months ago

Hi,

That would be awesome!

I'll make a new repo as soon as I finish with this. But I'm very busy as well, so I can only do these things in the late evenings. In parallel to the web interface, I'm making one with a similar look in Python.

[image]

vitoplantamura commented 11 months ago

cool!

Obviously there's no rush: when you finish, post the link here and I'll add it to the README!

Vito

ThomAce commented 11 months ago

Sure. :)

I'm close to adding it at a pre-alpha stage.

Windows: [image]

Linux (Raspbian): [image]

vitoplantamura commented 11 months ago

Regarding the Desktop application, what technology/framework do you use? Maybe Electron?

Vito

ThomAce commented 11 months ago

For maximum compatibility, and for lightweight, portable, open-source reasons, I'm using Python. I would have liked to build it in C#, but for easier portability I decided to make it in Python. :) It is also a fun experiment and a learning exercise.

ThomAce commented 11 months ago

Hi @vitoplantamura,

You can check it here:

https://github.com/ThomAce/OnnxStreamGui

At the moment, testing on Windows is ongoing. Linux testing will take place in the evening.

So far I have tested it only with SDXL, but I do not expect any major differences...

On first start, Settings must be used first (top button).

I added a direct button to your GitHub repo as well. If you like, you can use the attached main photo.

vitoplantamura commented 11 months ago

Very cool!

I am particularly interested in the Web version.

As soon as I have a chance, I plan to try it. In the meantime I will add the link to the OnnxStream README!

Vito

ThomAce commented 11 months ago

Very cool! I am particularly interested in the Web version.

Great. In the web version, you need to consider the following:

By the way, the next thing I want to integrate into the Python version is an integration with ChatGPT to generate positive and negative prompts based on user inputs. :) I'll work on this later on / at the weekend.

ThomAce commented 11 months ago

@vitoplantamura You can test now. I have just updated the web version. The SH script and all other stuff are fixed, and the SD vs SDXL switch is implemented...

Usage: Open config.php with your favorite text editor and edit these three lines accordingly:

$sd = "/media/thomace/EXT/OnnxStream/src"; $sdxl = "/media/thomace/EXT/stable-diffusion-xl-base-1.0-onnxstream";

$sd_shellscript = "/var/www/html/sd/sd.sh";

ThomAce commented 11 months ago

Complete fork of your repo now available:

https://github.com/ThomAce/OnnxStreamPlus/

Tested with both SD and SDXL on an RPi and 2 Windows machines.

The web-based GUI is working as well.

The OnnxStreamGui desktop app will be updated in the next few minutes: https://github.com/ThomAce/OnnxStreamGui

[image]

vitoplantamura commented 11 months ago

really nice!!

Vito

ThomAce commented 11 months ago

Thank you. I have changed your OnnxStream and implemented full seed parameterization, to be able to re-generate exactly the same image from a seed number. I'm still testing it, but it looks perfect. I might remove the "Refine" button because it has become obsolete with the implemented features.
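
(Sketch for readers: at its core, this change amounts to accepting a seed on the command line instead of always deriving one internally, and passing it through to diffusion_solver() as in the snippet earlier in this thread. The --seed flag name and the time-based default below are illustrative assumptions, not necessarily what OnnxStreamPlus actually does.)

#include <ctime>
#include <iostream>
#include <string>

int main(int argc, char** argv)
{
    // Default: derive a seed from the current time (one common choice; the
    // original program may pick its default seed differently).
    unsigned int seed = static_cast<unsigned int>(std::time(nullptr));

    // If "--seed N" is passed, use N so the exact same image can be
    // re-generated later with the same prompt and step count.
    for (int i = 1; i + 1 < argc; i++)
    {
        if (std::string(argv[i]) == "--seed")
            seed = static_cast<unsigned int>(std::stoul(argv[i + 1]));
    }

    std::cout << "seed: " << seed << std::endl;

    // ... 'seed' is then forwarded to the diffusion step, e.g.:
    // ncnn::Mat sample = diffusion_solver(seed, steps, cond, uncond);
    return 0;
}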

Now I'm thinking about how to utilize the GPU capability (no, not discrete GPUs, but the GPU's float processing speed might be faster than the CPU's). I'm also thinking about applying memory utilization targeting, like a command-line parameter such as -maxmem "4G"... What do you think about these?

vitoplantamura commented 11 months ago

hi,

regarding "maxmem", I don't think it's a simple thing to do to limit the memory used dynamically. We could instead specify for the three modes "--rpi", "--rpi-lowmem", "" the memory used for SD 1.5 and SDXL, in an HTML table in the main README :-)

regarding the GPU, I'm thinking about what the best thing to do is. I'm actually thinking these days about what the overall direction of the project will/should/could be :-)

Thanks, Vito

ThomAce commented 11 months ago

Hi,

Yeah, I realized that while doing code optimization here and there. A dynamic data handling routine should be implemented, but that would add some computing overhead as well. OnnxStream used a maximum of around 1.9 GB of RAM on my system, so it was not a big deal. I tried it on my RPi as well (RPi 4B, 4 GB) without a single issue.

Please keep me updated, because I'm really curious about that as well. I made modifications (not 100% ready yet) to reproduce the same image from the same input by giving the same seed number. Fine-tuning other people's code is not always easy and straightforward. :)

ThomAce commented 11 months ago

Hi @vitoplantamura, after a bit of optimization and very light customization, I get these results. Just thought you would be interested. The prompt phase now runs for about 2 seconds. On an R9 6900HX it is about 12 seconds per step (w/ PBO) and about 20 seconds per step (w/o PBO).

First generated with 30 steps on seed 1515530087 (just for reference) in 1 hour 12 minutes.

[image: android woman, 30 steps, seed 1515530087, 1 hr 12 min]

Second generated with 40 steps on seed 1515530087 (yes, the same) in 1 hour 40 minutes.

[image: android woman, 40 steps, seed 1515530087, 1 hr 40 min]

What do you think?

vitoplantamura commented 11 months ago

the ability to set the seed is useful!

It also seems to be a feature particularly requested by users.

Would you consider the possibility of a PR?

Thanks, Vito

ThomAce commented 11 months ago

Hi. What is a PR? Sorry, I don't know what it refers to. You can find my optimized version (including seed) here:

https://github.com/ThomAce/OnnxStreamPlus

According to my measurements, it gives slightly better performance, but not by much. In longer runs (up to 30 steps, long prompts) the difference in diffusion is more visible. You can check what I changed in the code, because some of the original code segments are kept there, just commented out.

70 steps, seed 1515530087:

[image: android-70steps-1515530087_final]

vitoplantamura commented 11 months ago

Pull Request :-)

or if you prefer I can integrate this functionality from your code.

Let me know,

Thanks, Vito

ThomAce commented 11 months ago

You can integrate it from my code, of course. :) It is basically the same as yours, with just very light changes implemented. In the end, your code will be equal to mine. :)