zkmkarlsruhe / ofxTensorFlow2

TensorFlow 2 AI/ML library wrapper for openFrameworks

example_video_and_image_colorization #25

Open Jonathhhan opened 2 years ago

Jonathhhan commented 2 years ago

This was not easy for me (finding and understanding a pretrained colorization model). With this pretrained model I got it working (I had to convert it to a SavedModel first): https://github.com/EnigmAI/Artistia/blob/main/ImageColorization/trained_models_v2/U-Net-epoch-100-loss-0.006095.hdf5 I tried this model before, but it seems they use two models together (I do not really understand it yet): https://github.com/pvitoria/ChromaGAN

I converted the model like this (with Python):

from tensorflow.keras.models import load_model

# load the original Keras HDF5 checkpoint
model = load_model(MODEL_FULLPATH)
# saving to a path without a file extension writes the TF SavedModel format
model.save(MODEL_FULLPATH_MINUS_EXTENSION)
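
A quick sanity check (my addition, assuming TF 2.x; MODEL_FULLPATH_MINUS_EXTENSION is the same placeholder as above) would be to reload the exported SavedModel directory and inspect its input signature before using it from ofxTensorFlow2:

import tensorflow as tf

# reload the exported SavedModel directory
reloaded = tf.saved_model.load(MODEL_FULLPATH_MINUS_EXTENSION)
# print the expected input shape/dtype of the default serving function
print(reloaded.signatures["serving_default"].structured_input_signature)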

Anyway, here is the example: https://github.com/Jonathhhan/ofxTensorFlow2/tree/example_video_and_image_colorization/example_video_and_image_colorization

bytosaur commented 1 year ago

Hey @Jonathhhan,

great work. I have adjusted a few things and wanted to push the changes soon. However, I noticed that videos didn't look that good, so I dug into the Python inference code. I saw that the authors divide by 256, then convert from RGB to LAB, and finally take the first channel as the input to the model. You are instead dividing the first channel by 2.55. Could you please elaborate on that? Thanks :)
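
For reference, a minimal sketch of the preprocessing described above (my reading of it, using skimage and a hypothetical input file; the referenced code reportedly divides by 256, while 255 is the exact 8-bit maximum). skimage's rgb2lab() expects RGB floats in [0, 1] and returns L in [0, 100]:

import numpy as np
from skimage.color import rgb2lab
from skimage.io import imread

img = imread("input.jpg")            # uint8 RGB, values 0-255 (hypothetical file)
lab = rgb2lab(img / 255.0)           # normalize first, then convert to Lab
L = lab[..., 0]                      # lightness channel, range 0-100
x = L[np.newaxis, ..., np.newaxis]   # add batch and channel axes for the model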

Jonathhhan commented 1 year ago

@bytosaur thanks for the hint. I will look into that.

Jonathhhan commented 1 year ago

@bytosaur I made a version that converts to Lab color space (along with some other improvements): https://github.com/Jonathhhan/ofxTensorFlow2/blob/example_video_and_image_colorization/example_video_and_image_colorization_2/src/ofApp.cpp I think it gives a better result. The idea behind dividing by 2.55 was to map the 8-bit channel (0-255) onto the Lab lightness range (0-100) without a real color space conversion, but I got rid of that shortcut (see the sketch below).
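
For comparison, the abandoned shortcut amounted to this (my sketch, with the same hypothetical input file as above):

import numpy as np
from skimage.io import imread

img = imread("input.jpg")        # uint8 RGB, values 0-255 (hypothetical file)
# rescale the first channel from 0-255 to 0-100 as a stand-in for Lab lightness
L_approx = img[..., 0] / 2.55    # since 255 / 2.55 == 100; not a true Lab conversion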

bytosaur commented 1 year ago

this example did not make it into #29 since the results weren't that useful. Anyway, thanks a lot for the contribution.

Jonathhhan commented 1 year ago

@bytosaur no problem, I already expected that. Just out of interest: is the colorization itself a boring use case, or is the result not good enough? (It was trained on Hitchcock movies, if I remember correctly, and coloring sky and water works really badly, while coloring skin and trees, for example, works much better.) Thanks for improving and including the other two examples (I would love to see more examples from other users, for a better understanding of how to implement different networks with ofxTensorFlow2)...

bytosaur commented 1 year ago

hey @Jonathhhan, no, I actually think the use case is OK, just the quality for video was poor. But yeah, in my eyes openFrameworks is most useful for realtime applications, where image colorization may not be very interesting. However, YOLOv4 is quite cool and we are already using it for fun projects :)

Still, I want to keep the thread open at least for a while.

danomatika commented 1 year ago

Apropos the YOLO example, we will put a related project on GitHub soon: YoloOSC
