junyanz / pytorch-CycleGAN-and-pix2pix

Image-to-Image Translation in PyTorch

Regarding Training in Colour spaces #881

Open kanlions opened 4 years ago

kanlions commented 4 years ago

Can anybody please help me with how to train the models in the YIQ, LAB, or HSV colour spaces? My understanding is that simply reading the files and applying BGR2HSV won't suffice. What changes need to be made? I am a beginner with this package.

junyanz commented 4 years ago

You need to modify two things: (1) the data loader and (2) visualization code. We have worked with Lab space for the colorization application. Step 1: When you load an image, you need to convert the color space. Here is an example. Step 2: When you visualize the results, you need to convert the color space back to the original space. Here is an example.
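For reference, here is a minimal sketch of those two steps for Lab, loosely modeled on the colorization handling in the repo (the data loader converts RGB to Lab on load, and the visualization path converts Lab back to RGB). The function names `load_as_lab` and `lab_tensor_to_rgb`, and the `transform` argument (assumed to be the usual resize/crop pipeline *without* `ToTensor`/`Normalize`), are illustrative, not part of the repository API:

```python
import numpy as np
import torch
import torchvision.transforms as transforms
from PIL import Image
from skimage import color


def load_as_lab(path, transform):
    """Load an RGB image, apply PIL-level transforms, then convert to Lab.

    Returns a 3-channel float tensor with each channel scaled to roughly
    [-1, 1], the range the generators expect.
    """
    im = Image.open(path).convert('RGB')
    im = transform(im)                               # resize/crop while still a PIL image
    lab = color.rgb2lab(np.array(im)).astype(np.float32)
    lab_t = transforms.ToTensor()(lab)               # (3, H, W); channels are L, a, b
    L = lab_t[[0], ...] / 50.0 - 1.0                 # L in [0, 100]        -> [-1, 1]
    ab = lab_t[[1, 2], ...] / 110.0                  # a, b in ~[-110, 110] -> ~[-1, 1]
    return torch.cat([L, ab], dim=0)


def lab_tensor_to_rgb(lab_t):
    """Undo the scaling above and convert a (3, H, W) Lab tensor back to an
    RGB uint8 array so results can be saved and visualized as usual."""
    lab = lab_t.detach().cpu().float().numpy()
    lab = np.transpose(lab, (1, 2, 0)).astype(np.float64)
    lab[..., 0] = (lab[..., 0] + 1.0) * 50.0
    lab[..., 1:] = lab[..., 1:] * 110.0
    rgb = color.lab2rgb(lab) * 255.0
    return rgb.astype(np.uint8)
```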

kanlions commented 4 years ago

Thank you for the guidance and the quick reply. I had actually read those code snippets; the reason I posted the query is that in the options folder, base_options.py has the arguments input_nc and output_nc, which are documented as 3 for RGB and 1 for grayscale. If, for example, we want to attempt Lab-to-Lab translation, I guess I would need to explicitly set it to 2, and another doubt I had is that for Lab-to-Lab translation I would have to combine both datasets. Since most colour spaces have 3 channels, I was wondering whether, instead of tampering with input_nc or output_nc, we could pass YIQ, LAB, or HSV images instead of RGB and make the changes in the aligned_dataset.py file. The problem is what type of transforms need to be done for training in the different spaces.

junyanz commented 4 years ago

You can revise the data loader code and use YIQ, LAB, or HSV instead of RGB. In this case, your input_nc and output_nc can be set to 3.
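As an illustration of that suggestion, here is a hedged sketch of a 3-channel HSV conversion that could replace the `ToTensor`/`Normalize` step in aligned_dataset.py's `__getitem__` (with its inverse used on the visualization/saving path), keeping input_nc = output_nc = 3. The helper names are made up for this example:

```python
import numpy as np
import torchvision.transforms as transforms
from skimage import color


def rgb_pil_to_hsv_tensor(im):
    """Convert a cropped/resized PIL RGB image to a 3-channel HSV tensor
    scaled to [-1, 1]. skimage's rgb2hsv returns all channels in [0, 1],
    so one affine rescale suffices; YIQ or Lab need per-channel ranges."""
    hsv = color.rgb2hsv(np.array(im)).astype(np.float32)   # (H, W, 3) in [0, 1]
    hsv_t = transforms.ToTensor()(hsv)                       # (3, H, W)
    return hsv_t * 2.0 - 1.0


def hsv_tensor_to_rgb(hsv_t):
    """Inverse mapping for visualization: [-1, 1] HSV tensor -> RGB uint8."""
    hsv = (hsv_t.detach().cpu().float().numpy() + 1.0) / 2.0
    hsv = np.transpose(hsv, (1, 2, 0))
    rgb = color.hsv2rgb(np.clip(hsv, 0.0, 1.0)) * 255.0
    return rgb.astype(np.uint8)
```

The existing resize/crop transforms would still be applied to the PIL image before this conversion, so the generators continue to see 3-channel tensors and the default input_nc/output_nc values remain valid.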