Nikronic / Deep-Halftoning

"Deep Context-Aware Descreening and Rescreening of Halftone Images" paper implementation.
MIT License

A question! #2

Open h20181007031 opened 1 week ago

h20181007031 commented 1 week ago

Hello! Sorry to bother you; I have a few questions about your project. Does your project use a deep learning network to generate halftone images? That is, instead of using traditional halftone algorithms as before, does it use a model to perform the halftoning process?

Nikronic commented 1 week ago

Hi,

Firstly, I am using Google Translate, so I may not have understood your question entirely correctly.

But regarding the (possible) answer: the original pipeline applies classic halftone algorithms to the Places365 dataset, then the halftoned images are fed into the deep learning pipeline. In the end, the original Places365 image (before halftoning) is compared to the output of the model (an autoencoder architecture).

In fact, the paper title mentions "descreening halftone images", which is the process of removing the halftone pattern. So, to generate the training data, classic halftone algorithms are applied to normal images (Places365).
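
For illustration only, here is a rough sketch of how such (halftoned, original) training pairs could be generated. It uses Pillow's Floyd-Steinberg dithering as a stand-in for the classic halftone step and a hypothetical directory layout; it is not the exact code from this repository:

```python
# Hypothetical sketch: generating (halftoned, original) training pairs.
# Pillow's Image.convert("1") applies Floyd-Steinberg error diffusion,
# used here as a stand-in for the classic halftone algorithms mentioned above.
from pathlib import Path

from PIL import Image


def make_pair(image_path: str, size: int = 256):
    """Return (halftoned, original) PIL images for one Places365 sample."""
    original = Image.open(image_path).convert("L").resize((size, size))
    halftoned = original.convert("1")  # Floyd-Steinberg dithering -> binary halftone
    return halftoned, original


if __name__ == "__main__":
    for path in Path("places365/train").glob("*.jpg"):  # hypothetical directory layout
        halftoned, original = make_pair(str(path))
        # `halftoned` is the network input; `original` is the target the
        # autoencoder output is compared against.
```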

I hope this answers your question.

PS: I think my implementation of the loss function is slightly off, but the structure of the code should represent the paper's architecture correctly.

h20181007031 commented 1 week ago

Hello and thanks for your reply! 

Now I understand. Your work, as I see it, is the process of using deep learning models to achieve inverse halftoning.

My previous statement may not have been complete: my current work is actually using deep learning models to implement the halftoning process, so I would like to ask if you have any suggestions for this. So far, I have tested with trained models, and the results have been substantial.

The test results are shown in the pictures below.

Figure 1 is the original input image, and Figure 2 and Figure 3 are the output halftone images with different parameter settings.


Nikronic commented 1 week ago

@h20181007031

Yes, you are correct. The current implementation is inverse halftoning. But the architecture and methodology should work for your purpose too. In the simplest terms, you need to feed the original image (not processed at all) into your network, then compare the final output of the network to halftoned images generated via classical approaches.
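
As a very rough sketch of that reversed data flow (hypothetical names, a placeholder network, and plain PyTorch, not the code from this repository):

```python
# Minimal PyTorch sketch of the training step described above: the network sees
# the clean image and is supervised by a classically halftoned version of it.
import torch
import torch.nn as nn

model = nn.Sequential(            # placeholder encoder-decoder, not this repo's model
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.L1Loss()


def train_step(original: torch.Tensor, halftoned_target: torch.Tensor) -> float:
    """original: clean grayscale batch (N, 1, H, W).
    halftoned_target: the same batch halftoned with a classical algorithm,
    used as the supervision signal."""
    optimizer.zero_grad()
    predicted_halftone = model(original)          # network input is the clean image
    loss = criterion(predicted_halftone, halftoned_target)
    loss.backward()
    optimizer.step()
    return loss.item()
```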

But one question you should keep asking yourself is why one should rely on a deep learning (more resource-hungry) model instead of a classical approach.

Best regards,