Closed KhoaVo closed 1 month ago
Hi KhoaVo,
Thank you for the implied compliment here :- 'awesome'.
In fact I have posted an outline Python implementation here.
https://github.com/chia56028/Color-Transfer-between-Images/issues/1
But it is the first and only piece of Python code that I have written so it probably needs someone with a strong Python background to 'knock it into shape'.
At the referenced link I write the following.
_I have also attached here a file which implements in Python ‘Enhanced Image Color Transfer Processing’. It is a modification of the code that was posted here. https://github.com/jrosebr1/color_transfer It is the only Python code I have ever written so it is not suitable for formal issue, but if anyone wishes to produce a formal version by refining the code then they are welcome to do so._
I hope this is useful to you.
Thanks for the pointers!
I tried it out and it definitely works.
I can take a stab at cleaning it up.
However, I'd like to do one better and make that implementation more in line with Enhanced-Image-Colour-Transfer-2.
Can you tell me what the key differences are between Enhanced-Image-Colour-Transfer and Enhanced-Image-Colour-Transfer-2? That way I can home in on those differences and make sure they end up in the updated Python version.
Hi KhoaVo,
I was pleased to learn that you are looking to develop a robust Python implementation of Enhanced-Image-Colour-Transfer-2. I think you are likely to quickly acquire more stars than me on GitHub, but I would be happy with that if your code generates further interest in the underlying processing methods that I have developed. Once you have posted your code, I will remove my outline implementation from GitHub and cross-reference you from my repository.
In the next few days, I will re-familiarise myself with the differences between 'Enhanced Image Color Transfer Processing' and 'Enhanced Image Color Transfer Processing-2' and I will try to write a clear summary of the differences.
Best Regards
Hi again KhoaVo,
There are quite a few differences between 'Enhanced Image Color Transfer Processing' and 'Enhanced Image Color Transfer Processing-2', so I have decided to take them one at a time.
One difference is that the original performs processing in the Lab colour space whereas the recent processing uses the lαβ colour space. The following illustrates this.
To see this, go to the web app at https://www.dustfreesolutions.com/CT/CT.html and do the following:

1. Click 'Reset to Reinhard processing parameters'.
2. Set 'CrossCovarianceLimit' to 100% and set 'Colour space' to 'CIE Lab'. This should now correspond to the processing undertaken by the current Python code.
3. Under Samples, select 'Scottish Croft' and click 'Generate Output Image'. The recoloured image doesn't look that good.
4. Now change 'Colour space' from 'CIE Lab' to 'lαβ' and click 'Generate Output Image' again. The recoloured image now looks much better.
So, the first step is to update the Python Code from ‘CIE Lab’ processing to ‘lαβ’ processing, (or at least to offer ‘lαβ’ processing as the preferred processing option if ‘CIE Lab’ processing is retained).
The existing code has the following lines:

```python
source = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype("float32")
target = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype("float32")
```

and

```python
transfer = cv2.cvtColor(transfer.astype("uint8"), cv2.COLOR_LAB2BGR)
```

It would be great if we could just write `source = cv2.cvtColor(source, cv2.COLOR_BGR2lαβ).astype("float32")` and so on.
Unfortunately, OpenCV doesn't offer a `cv2.COLOR_BGR2lαβ` option, so it is necessary to write a function which implements this conversion.
These are the functions in C++ which need to be converted to Python.
```cpp
// ##########################################################################
// ##### IMPLEMENTATION OF L-ALPHA-BETA FORWARD AND INVERSE TRANSFORMS ######
// ##########################################################################
// Coding taken from https://github.com/ZZPot/Color-transfer
// Credit to 'ZZPot'.
// I take responsibility for any issues arising from my adaptation.

// Define the transformation matrices for L-alpha-beta transformation
// (the standard Reinhard et al. values).
cv::Mat RGB_to_LMS = (cv::Mat_<float>(3, 3) <<
    0.3811f, 0.5783f, 0.0402f,
    0.1967f, 0.7244f, 0.0782f,
    0.0241f, 0.1288f, 0.8444f);
cv::Mat LMS_to_lab = (cv::Mat_<float>(3, 3) <<
    0.5774f,  0.5774f,  0.5774f,
    0.4082f,  0.4082f, -0.8165f,
    0.7071f, -0.7071f,  0.0f);

// BGR2lαβ
cv::Mat convertTolab(cv::Mat input)
{
    cv::Mat img_RGB (input.size(), CV_8UC3);
    cv::Mat img_RGBf(input.size(), CV_32FC3);
    cv::Mat img_lms (input.size(), CV_32FC3);
    cv::Mat img_lab (input.size(), CV_32FC3);

    // Swap channel order (so that transformation
    // matrices can be used in their familiar form).
    // Then convert to float.
    cv::cvtColor(input, img_RGB, CV_BGR2RGB);
    img_RGB.convertTo(img_RGBf, CV_32FC3, 1.0 / 255.f);

    // Apply stage 1 transform.
    cv::transform(img_RGBf, img_lms, RGB_to_LMS);

    // Define smallest permitted value and implement it.
    float epsilon = 0.07f;
    cv::Scalar min_scalar(epsilon, epsilon, epsilon);
    cv::Mat min_mat = cv::Mat(input.size(), CV_32FC3, min_scalar);
    cv::max(img_lms, min_mat, img_lms);   // Just before the log operation.

    // Compute log10(x) as ln(x)/ln(10).
    cv::log(img_lms, img_lms);
    img_lms = img_lms / log(10.0);

    // Apply stage 2 transform.
    cv::transform(img_lms, img_lab, LMS_to_lab);
    return img_lab;
}
```
```cpp
// lαβ2BGR
cv::Mat convertFromlab(cv::Mat input)
{
    cv::Mat img_lms (input.size(), CV_32FC3);
    cv::Mat img_RGBf(input.size(), CV_32FC3);
    cv::Mat img_RGB (input.size(), CV_8UC3);
    cv::Mat img_BGR (input.size(), CV_8UC3);
    cv::Mat temp    (LMS_to_lab.size(), CV_32FC1);

    // Apply inverse of stage 2 transformation.
    cv::invert(LMS_to_lab, temp);
    cv::transform(input, img_lms, temp);

    // Compute 10^x as (e^x)^(ln 10).
    cv::exp(img_lms, img_lms);
    cv::pow(img_lms, (double)log(10.0), img_lms);

    // Apply inverse of stage 1 transformation.
    cv::invert(RGB_to_LMS, temp);
    cv::transform(img_lms, img_RGBf, temp);

    // Convert to integer format and revert
    // channel ordering to BGR.
    img_RGBf.convertTo(img_RGB, CV_8UC3, 255.f);
    cv::cvtColor(img_RGB, img_BGR, CV_RGB2BGR);
    return img_BGR;
}
```
At first this looks quite daunting, but it should not be that difficult, because much of it just requires a C++ OpenCV function to be replaced by its Python form.
For example, the C++ call

```cpp
cv::transform(img_RGBf, img_lms, RGB_to_LMS);
```

becomes, in Python,

```python
img_lms = cv2.transform(img_RGBf, RGB_to_LMS)
```

(in Python, `cv2.transform` returns the destination array rather than taking it as an argument).
I have included the 'Scottish Croft' image in the images folder here so you can test that the processing looks correct (in accordance with the web app as described). Initially, you can use a pair of identical input images and check that you get an identical output image.
Don’t hesitate to ask if you have any queries. Otherwise, please give me some advance warning when you are ready to add further new processing.
Best Regards
Is the idea that you want to be able to run the algorithm from python? Or what would be the reason why you'd like to see it implemented in python?
I only ask because it may be easier to create bindings for python than to rewrite it in python, or another more lightweight approach, depending on your rationale.
Hi meta-m-s,
I have noticed that Python programs on GitHub attract a lot more attention than C++ programs. So, it would be nice to propagate my ideas further by publishing a Python version of the processing, which I hoped 'KhoaVo' might do. At present, though, I am more interested in generating new software rather than converting older software into another language. (Though my current C++ project is a private repo shared with Mr Renzullo.)
Regards
Terry Johnson
Ah, I was talking to @KhoaVo, but, in terms of interest, that's probably true.
I will leave this issue open for a short while in case @KhoaVo wishes to respond, and will close it in due course.
No further dialogue. The issue is now closed.
I stumbled on this repo recently and found it really great and easy to understand. In the meantime, I've converted it to Python using NumPy and OpenCV. main.txt
Hi Minh,
Thank you for your interest and thank you for taking up the challenge.
I have looked through your code and it seems to be accurate and of a good standard. However, I suppose I should try to run your program and provide feedback (if I can get to grips with the Python environment again; thank goodness for ChatGPT!). I will try to do this in the next few days.
I have now compared the Python and the C++ versions and found that they are pretty close.
However, the functionality of the Python ‘FullShading’ routine seems to differ from that of the C++ version. It seems that, to compensate for this, some processing parameters in the Python program have been tweaked.
I tried the following:

1. Revert the Python code so that the processing parameters match those of the C++ code (revert 0.01 back to -1.0 in the call to `core_processing`, and revert 1.5 back to 1.0 in the call to `final_adjustment`).
2. Run both versions with `extra_shading = False` in `full_shading`.
3. Run both versions with `extra_shading = True` in `full_shading`.

The results are identical when `extra_shading` is set to `False` but not when it is set to `True`. I have yet to find the coding difference that causes the results to differ when extra shading is applied.
Thank you for the response. I will revise the code again once I have the chance to access it.
Just to add a few more observations.
For 'shading on', the CPP image above looks less natural than the Python image. This could be just an unlucky choice of images, but it could be that the CPP version is not implementing the intended processing whereas the Python version is. Either way, they seem to differ, and the cause of the divergence needs to be identified.
It should be noted that, in the Python code, `shader_val` is set to 0.5 in more than one place. If a user tried to use a different value but identified and changed only one instance, this would cause inconsistent processing. It is for this reason that, in the C++ version, parameters are set once only, in a block at the start. There may be parameters other than `shader_val` that are similarly affected.
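One way to guard against this in the Python version (a sketch only; the parameter names, values, and the stand-in shading formula here are illustrative, not taken from the actual code) is to define every tunable parameter exactly once in a block at the top and have each routine read from it:

```python
# All tunable processing parameters, defined once in a single block
# (illustrative names and values, not the actual ones from the draft code).
PARAMS = {
    "shader_val": 0.5,  # read wherever it is needed; never re-declared
    "min_val": 1.0,     # 1.0 suits a 0-255 data range (1/255.0 for 0.0-1.0)
}

def full_shading(values, params=PARAMS):
    # Illustrative stand-in routine: blends each value toward the mean
    # by shader_val, reading the shared parameter rather than a local 0.5.
    s = params["shader_val"]
    mean = sum(values) / len(values)
    return [s * v + (1 - s) * mean for v in values]
```

Changing `PARAMS["shader_val"]` then changes the behaviour of every routine consistently, with no second copy to hunt down.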
Also, `min_val` should perhaps be 1.0 rather than 1/255.0, because the data range in the Python version is 0 to 255 rather than 0.0 to 1.0?
With 1.5 reset to 1.0 and 0.01 reset to -1.0, the final two display images will be near identical. It would probably be better to have just the target image, the source image, the interim image and the final image, each labelled as such.
I think the problem is most likely due to numerical issues and accumulated round-off error. As I cannot run the CPP version to compare the two versions step by step and don't have much free time lately, I will leave it for anyone who might be interested in correcting this. I have refactored the testing code a little and will attach a gist here for convenient viewing: https://gist.github.com/minh-nguyenhoang/2e06aef032caa2d525822230cbef66cc
Hi Minh,
You are right on both counts. It did take a long time to determine the difference between the two code versions, and it was to do with round-off errors (or, more accurately, clipping).
In the C++ version I deliberately chose to take in a uint8 image, to perform all processing in floating point, and to convert back to a uint8 image only at the last step. The draft Python code clips the data to 0-255 at various points and converts to uint8 at intermediate stages.
My intention was to demonstrate the method in its purest and most accurate form. There may well in fact be good reasons, in terms of storage and speed, to use uint8, but this is an implementation compromise that would need to be evaluated.
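The float-throughout structure described here can be sketched as follows (an illustration only; `stages` stands in for the actual transfer steps, which are not shown):

```python
import numpy as np

def run_pipeline(img_u8, stages):
    # Convert the uint8 input to floating point once, at the start.
    img = img_u8.astype(np.float32)
    # Every intermediate stage works on float data: no clipping to 0-255
    # and no uint8 round trips between stages.
    for stage in stages:
        img = stage(img)
    # Clip and convert back to uint8 only at the very last step.
    return np.clip(img, 0.0, 255.0).astype(np.uint8)
```

Keeping all clipping and quantisation at the end avoids the intermediate-stage losses that made the two implementations diverge.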
I will post the 'corrected' code shortly. The outputs now align for both implementations.
FOR THE ATTENTION OF Minh Nguyen Hoang
Hi Minh,
Please find below my preferred python implementation.
I would like to invite you to post this as a repository on your GitHub account.
If you do not wish to, or if you do not reply by Sept 20, then I will post the code alongside the Main.cpp in this repository.
I would like to propose your coding for inclusion at the link below (attributable to yourself but I will make the proposal).
https://github.com/ImmersiveMediaLaboratory/ColorTransferLib
I would also like to address the issue here using your coding for L-alpha-beta again attributable to yourself.
https://github.com/ImmersiveMediaLaboratory/ColorTransferLib/issues/1
Revised code ….
Thank you for your efforts here which are greatly appreciated.
Hi Terry,
I have added a redirect in this repository to Enhance2 in
https://github.com/minh-nguyenhoang/Enhanced-Image-Colour-Transfer-2
I have added it as a Python program which includes details as comments but will also open the relevant web pages if run.
Thanks again to Minh
PS If you ever reconfigure your set up so that my redirect is wrong then please reopen this issue with the relevant details.
This would be amazing if it could have a python implementation. It would make it a lot more accessible to people and further the integration opportunities.