yyang181 / colormnet

Some color artifacts #7

Closed · dan64 closed 2 months ago

dan64 commented 2 months ago

Hello,

Thank you for your release, which is an improved version over NTIRE23-VIDEO-COLORIZATION. I tested some images and the results were satisfactory, but I found that this version is less robust than BiSTNet at colouring the images when the reference image is not exactly the first frame of the sequence.

For example, this is the image to be coloured:

[image: 0000]

If I use this image as the reference frame

[image: frame000]

I obtain the following coloured image:

[image: 0000]

while using BiSTNet (which is generally worse at propagating the colours) I obtain the following image:

[image: 0000]

whereas if I use this picture as the reference image

[image: 0000]

I obtain the following properly coloured image:

[image: 0000]

Given these results, I guess that the ability to propagate the colours from a reference image that differs from the image to be coloured has been lost in exchange for significantly better colour propagation. Could you confirm?

Thank you, Dan

yyang181 commented 2 months ago

Just try adding your reference frame to the start of the input frames. The existing code is designed under the assumption that the reference image precisely matches the first input frame. This means that the color channels (a and b in the Lab color space) from the reference frame are aligned with the luminance channel (L in the Lab color space) of the initial input frame.
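
For illustration, here is a minimal sketch of what that alignment assumption means in practice, assuming OpenCV and NumPy are available (this is not the repository code, and the file names are placeholders): the L channel comes from the first input frame while the a and b channels come from the reference, so the two must depict the same content.

```python
# Minimal sketch of the alignment assumption described above (not the repository code).
# File names are placeholders.
import cv2
import numpy as np

frame = cv2.imread('frames/0000.png')        # first grayscale frame, loaded as 3-channel BGR
reference = cv2.imread('ref/frame000.png')   # colour exemplar

# Resize the reference so its a/b channels align spatially with the frame.
reference = cv2.resize(reference, (frame.shape[1], frame.shape[0]))

frame_lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
ref_lab = cv2.cvtColor(reference, cv2.COLOR_BGR2LAB)

# Keep the frame's luminance (L), borrow the reference's chrominance (a, b).
combined = np.dstack([frame_lab[:, :, 0], ref_lab[:, :, 1], ref_lab[:, :, 2]])
preview = cv2.cvtColor(combined, cv2.COLOR_LAB2BGR)
cv2.imwrite('preview_first_frame.png', preview)
```

The result only looks right when the reference depicts the same scene as the first frame, which is why prepending the reference to the input frames works around the limitation.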

Actually, ColorMNet offers significantly enhanced feature-matching capability thanks to its integration with a large pretrained visual model, DINOv2. This approach provides more discriminative guidance for feature estimation than BiSTNet, which utilizes VGG as its feature extractor. For further insights, refer to Figure 6 and Section 5 of the accompanying paper.
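
As a rough sketch of the idea (not the paper's implementation), exemplar-based colorization can match patch features from a pretrained DINOv2 backbone between the grayscale target frame and the colour reference using cosine similarity; the model variant, input size, and the omitted image preprocessing below are assumptions for illustration only.

```python
# Rough sketch of feature matching with DINOv2 patch tokens (not ColorMNet's code).
import torch
import torch.nn.functional as F

# Load a small DINOv2 backbone from torch hub (variant chosen for illustration).
model = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()

def patch_features(img):
    # img: (1, 3, H, W) with H and W multiples of the 14-pixel patch size.
    with torch.no_grad():
        out = model.forward_features(img)
    return F.normalize(out['x_norm_patchtokens'], dim=-1)   # (1, num_patches, C)

target = torch.rand(1, 3, 224, 224)      # placeholder standing in for the grayscale frame
reference = torch.rand(1, 3, 224, 224)   # placeholder standing in for the colour exemplar

f_t = patch_features(target)
f_r = patch_features(reference)
similarity = f_t @ f_r.transpose(1, 2)   # (1, N_target, N_reference) cosine similarities
best_match = similarity.argmax(dim=-1)   # for each target patch, the most similar reference patch
```

A more discriminative feature space makes these matches more reliable when the exemplar differs from the target frame, which is the behaviour compared against VGG-based matching in the paper.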

Additionally, we are planning to release a modified version of the code that will accommodate any type of reference image, offering greater flexibility and ease of use.

dan64 commented 2 months ago

Thank you for the clarification; my tests confirmed that ColorMNet has significantly improved colour matching and propagation. There are other tools that can provide properly coloured reference images, but they are unable to provide temporal consistency and colour propagation. ColorMNet is an important contribution towards solving the problem of automatic colourization.

Thanks, Dan

yyang181 commented 2 months ago

Hi @dan64, I have updated the test code and it should now support any exemplar with the parameter --FirstFrameIsNotExemplar. Hope this helps.

dan64 commented 2 months ago

It worked!

Thanks, Dan

P.S. I noted that you renamed the option '--deoldify_path' to '--ref_path'; the same change also needs to be applied at line 39 of "test.py".
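
For context, here is a hypothetical illustration (not the actual contents of test.py) of the kind of mismatch being reported: when an argparse option is renamed, any later line that still reads the old attribute name fails.

```python
# Hypothetical illustration of the reported mismatch (not the actual test.py).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--ref_path', type=str, default='ref/')  # renamed from --deoldify_path

args = parser.parse_args([])
# print(args.deoldify_path)  # old name would raise: AttributeError: 'Namespace' object has no attribute 'deoldify_path'
print(args.ref_path)         # the renamed attribute must be used everywhere it is read
```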

yyang181 commented 2 months ago

Thanks. I fixed it.