gaugo87 / PIXEstL

A program for creating color lithophanes and pixel images
MIT License

Using A Colorimeter For Palette Generation #13

Open MadMax389 opened 2 months ago

MadMax389 commented 2 months ago

So I've been obsessing with the ability to print color-accurate lithophanes with only CMYK filaments over the last few months. I have created over 30 palette files using different methods to generate the HSL/HEX codes. After hundreds of prints and a lot of trial and error, here is a summary of what I've learned so far:

  1. I'm using only CMYK colors from Bambu (CMYK bundle) and similar colors from IIIDMax (Fuchsia, Light Blue, Yellow and White). I am really focusing on only using 4 colors so that my lithophanes are printable by anyone with 4 color capability like a Bambu with AMS or with Mosaic Palette.
  2. 7 layers at .07mm layer height seem to work best for me. I've tested 5, 6, 7, 8, 9 and 10 color layers and don't see much change in quality above 7 layers. .07mm layers are easily printable with Bambu printers and yield a color section close to .5mm thick.
  3. I've started printing the color layers separately from the texture layer. Sometimes better quality can be achieved by printing the texture layer vertically (flat vs. vertical comparison images attached).
  4. I have found that color accuracy is very dependent on color-code selection and that no single palette is suitable for all types of lithophanes. Natural landscapes and photographs of people are the most challenging because you can instantly tell that a color is not correct.
  5. I have tried various swatch types and various methods to measure the HSL values. Thanks to @gaugo87 for adding the ability to use HEX values in addition to HSL values. That has reduced the time required to generate a color palette. Initially, I explored various methods to analyze a picture of a swatch, but ran into many inconsistencies related to light source, white balance and pixel sampling. I'm convinced the camera in my phone does not always capture the true color.

Point 5 led me to invest in a (somewhat) inexpensive standalone colorimeter, thinking that I could measure actual colors with the actual light source. I've tested various swatch designs and settled on a swatch with a .15mm white backing layer and 10 successive color layers at .07mm. I was also able to test different light sources, temperatures and intensities, as well as just a white paper background (swatch and test-setup images attached).

These lithos were generated using the swatch on a simple white background with no backlight. Bambu CMY and IIIDMax White.

image image

I have been really pleased with the results and have been focusing on generating palettes optimized for portraits of people and landscapes.

Tagging @dts350z and @ibgregr because I thought you might be interested.

gaugo87 commented 2 months ago

Well done on your results. It is important to remember that the program also has limitations. The color calculation is not optimal. It does not take color transmission into account. For example, Yellow + Blue will not give exactly the same color as Blue + Yellow, but for my program, it does. I tried to calculate it using the Lambert-Beer law, but it’s not perfect either.
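
For anyone curious what that kind of approximation looks like, here is a minimal sketch of Beer–Lambert-style mixing (illustrative only, with made-up absorption coefficients; this is not PIXEstL's actual code). Because it just multiplies per-channel transmittances, the result is commutative by construction, which is exactly why layer order can't be captured this way:

```python
import math

# Illustrative only -- not PIXEstL's implementation. Each filament gets a
# per-channel absorption coefficient (made-up values); a layer stack is
# modeled as backlight passing through all of the layers in turn.
FILAMENTS = {
    "yellow": (0.05, 0.10, 2.50),   # absorbs blue strongly
    "cyan":   (2.00, 0.15, 0.10),   # absorbs red strongly
    "white":  (0.30, 0.30, 0.30),   # attenuates all channels evenly
}

LAYER_THICKNESS_MM = 0.07

def stack_color(layers, light=(255, 255, 255)):
    """Approximate the transmitted RGB of a stack of 0.07 mm layers
    using Beer-Lambert attenuation: T = exp(-k * d) per channel."""
    rgb = list(light)
    for name in layers:
        k = FILAMENTS[name]
        for c in range(3):
            rgb[c] *= math.exp(-k[c] * LAYER_THICKNESS_MM)
    return tuple(round(v) for v in rgb)

# Multiplication is commutative, so this model cannot distinguish layer order:
print(stack_color(["yellow", "cyan"]))  # same result...
print(stack_color(["cyan", "yellow"]))  # ...as this
```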

At the end of last year, I changed my method: I modified the program to generate a matrix containing all possible combinations. The matrix is to be printed and then photographed. The program then analyzes the photo and retrieves all the true colors. No more need for approximate calculations: we get the true colors directly!

matrix6

matrix5

Unfortunately, in practice, I have not been able to fully implement this method. I cannot extract the true colors from the matrix through photography. In use, I end up with many false colors.

Unable to finalize the method, I lost motivation and put the program aside. I might commit the changes to a development branch. Maybe one of you can manage to print a good matrix and extract the correct colors...
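
In case anyone wants to pick the matrix idea up, the extraction step itself is conceptually simple; the hard part is exactly what gaugo87 describes. A rough sketch, assuming the photo has already been cropped and deskewed to the matrix area and you know the grid dimensions (the function and file names here are just placeholders), which averages a small patch at the centre of each cell:

```python
import cv2

def sample_matrix(photo_path, rows, cols, patch=5):
    """Average a small patch at the centre of each matrix cell and return the
    measured colors as hex codes, row by row. Assumes the photo is already
    cropped and deskewed to the printed matrix; in practice white balance,
    backlight falloff and perspective are exactly where this falls apart."""
    img = cv2.imread(photo_path)                     # BGR
    h, w = img.shape[:2]
    cell_h, cell_w = h / rows, w / cols
    colors = []
    for r in range(rows):
        for c in range(cols):
            cy, cx = int((r + 0.5) * cell_h), int((c + 0.5) * cell_w)
            cell = img[cy - patch:cy + patch, cx - patch:cx + patch]
            b, g, rd = cell.reshape(-1, 3).mean(axis=0)
            colors.append("#{:02X}{:02X}{:02X}".format(int(rd), int(g), int(b)))
    return colors

# e.g. sample_matrix("matrix_photo.jpg", rows=16, cols=16)
```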

fischma3SICKAG commented 2 months ago

@MadMax389 your images look amazing!!! I am also struggling to get a "real color" image with my smartphone camera and am thinking about a colorimeter, but for me it's still too expensive. Would you share your 7-layer, .07mm layer height CMYK color palette?
And yes, I know my results would not be as good as yours if I do not measure the colors in my own environment, but I think it would instantly get me close to your results.

MadMax389 commented 2 months ago

@gaugo87 Very ambitious. Your algorithm has produced the best results of any lithophane generator I have tried. I have found that using the colorimeter reduces variability in color measurement; the values are very repeatable. I had tried measuring the colors using the backlight frame at different light intensities, with varying results. I finally settled on measuring the swatch against a plain white poster board in ambient lighting. I tested various ambient light conditions with no variability in the readings, so the ambient light source can be eliminated as a variable.

I have noticed that I get repeated values on successive layers depending on the color; e.g., White yielded only two unique colors out of 10 layers, while Magenta yielded 6. But data is data and it works fine (measurement images attached). Thanks again for your efforts.

MadMax389 commented 2 months ago

@fischma3SICKAG I have attached my CMYK .json file. Be aware that I used IIIDMax White PLA+ instead of Bambu (I ran out of Bambu). There are 7 color layer codes for each color at .07mm. Image selection is important for CMYK lithos: bright, contrasting, multi-colored images work best. Photographs are difficult because the skin tones aren't quite right. Post your results!

cb-Paper-Lh.07-Bam-CMY-3D-K.json

ibgregr commented 2 months ago

@fischma3SICKAG I have attached my CMYK .json file. Be aware that I used IIIDMax White PLA+ instead of Bambu (I ran out of Bambu). There are 10 color layer codes for each color at .07mm, so you can set the number of layers in the execution argument. Image selection is important for CMYK lithos: bright, contrasting, multi-colored images work best. Photographs are difficult because the skin tones aren't quite right. Post your results!

cb-Paper-Lh.07-Bam-CMY-3D-K.json

@MadMax389 - Thanks for tagging me in your initial post. Much appreciated. I've had to put playing with all of this kind of on the back burner due to some health issues with my wife, but I still follow the posts and tinker with it every now and then. I'm hoping to put more time into this in the next couple of months.

I worked for a couple of weeks with dts350z trying to see how we could get more accurate results. I have come to somewhat the same conclusion as you regarding having multiple palette files, as I could not find the "holy grail". I purchased a colorimeter as well to help with creating palettes. Here is my main issue: I can get a very nice palette config and the generated color preview can look almost EXACTLY like the input image, but when you print it the colors are off. So there is a disconnect between the preview and the actual results, and I still have not wrapped my little mind around it. I do not know if I can ever expect the generated preview to match the printed results, and if that's the case then the preview is not of much help. I also do not understand why we can't have that single palette that "just works". Not being any kind of photography or color space expert, this all just throws me into a tizzy and then I get frustrated. Sounds like that's what happened to gaugo87 as well and why he put it on the back burner.

I also do not quite understand this new method that gaugo87 described above or how that would/could work with all of this. I would love for there to be a way to not have to spend all this time on palette files, as it's very time consuming. And then there's the time it takes to test a print, just to find things are not quite right.

Lastly, I will try to give your CMYK palette file a try in the near future. I typically have been using 7 or 8 colors which becomes even more of a task to create a good palette file. But I have gotten a few great results when I hit that palette "jackpot" :) One thing I noticed in your palette file, which is not an issue for me, is that you stated there are 10 color layer codes (layers) for each filament but there are only 7. I have never tried to use more than 7 anyway.

MadMax389 commented 2 months ago

My condolences for your wife's health issues; I know what that is like. My apologies, you are correct, there are only 7 layers. I guess my brain blanked from all the testing. I created a spreadsheet where I can enter 10 hex code values from the colorimeter, with a simple concatenation that builds a line of code for each color for the .json file, for anywhere from 4 to 10 layers. I simply copy the correct lines and paste them into a blank .json file. I've gotten the time it takes to generate a configuration file with any combination of layers and colors down to a few minutes. Let me know if you would like a different layer count (at .07mm LH). (Spreadsheet screenshot attached.)
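
For anyone who prefers a script over a spreadsheet, here is a rough Python equivalent of that concatenation step. The layer-entry layout below is only a placeholder guess, not the exact PIXEstL palette schema (copy the structure from one of the working .json files attached above before using it), and the hex readings are made-up example values:

```python
import json

def layer_entries(hex_codes, max_layers=7):
    """Turn colorimeter readings (layer 1..n) into a layer-count -> hex-color
    mapping, ready to paste into a palette file.
    NOTE: placeholder structure -- match it to a real PIXEstL palette .json."""
    return {str(i + 1): {"color": hex_codes[i]}
            for i in range(min(max_layers, len(hex_codes)))}

# Ten readings for one filament, one per added .07mm layer (made-up values):
magenta_readings = ["#F3C9DD", "#EFA8CB", "#EA8CBC", "#E676B0", "#E265A6",
                    "#DF589E", "#DD4F98", "#DB4893", "#DA438F", "#D9408D"]

print(json.dumps({"Magenta": layer_entries(magenta_readings, max_layers=7)}, indent=2))
```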

ibgregr commented 2 months ago

My condolences for your wife's health issues, I know what that is like. My apologies, you are correct, there are only 7 layers. I guess my brain blanked from all the testing. I had created a spreadsheet where I can enter 10 hex code values from the colorimeter. I made a simple concatenation to create a line of code for each color for the .json file between 4 and 10 layers. I simply copy the correct lines and paste it into a blank .json file. I've gotten the time it takes to generate a configuration file with combinations of layers and colors down to a few minutes. Let me know if you would like a different layer count (at .07mm LH). image

Thanks for the well wishes...fingers are crossed that we are on the full mend at this point!!

I do not need anything more than 7 layers. But do you happen to have palette info for anything beyond the CMYK? I have most of the PLA colors from Bambu for the basic and matte filaments and have found that having some other colors definitely makes a huge impact. Especially having Beige.

Here is my best print to date. Taking a pic with an iPhone does not come out looking like it does in person; not sure what the phone is doing to it, but it looks 10 times better in person. This was using 8 colors: CMYK + Scarlett Red + Mistletoe Green + Ice Blue + Beige:

Original Photo:

eastpoint4

Generated Preview:

image-color-preview

Printed Result:

IMG_0656 copy

I was happy with the result but it's still not exactly where I want it to be. For example, the sky is a darker blue than the original or preview. The potted plants are not as red as the photo or preview, etc. This goes back to what I was saying about the generated preview. I would like to get to the point that I can trust that the printed result will look very close to the preview.

MadMax389 commented 2 months ago

I don't have any other Bambu colors. I mostly use IIIDMax. I've measured light blue, fuchsia and yellow so far. Your lithophane looks pretty dang good. You might want to increase your flow ratio on the top layer by 10% or so to fill in the line gaps on the white.

I have been looking at the preview vs actual prints as well, just for CMYK filaments though. I created a standard color bar image using "pure" colors shown in the pic.

image

In this pic, I'm just comparing the preview of profiles from 7 to 10 layers against the source image. The violet, indigo and magenta seem the most problematic.

image

Then I compared the preview and the source image to the actual print. I printed just the color layers without the top texture layer. Ignoring any color inaccuracies from my camera, the actual printed image doesn't really match the preview or the source, lol. My takeaway is that the preview is good for a macro analysis, like whether my sky is purple instead of blue; I don't use the preview to try to discern tones of the same color. The only way to evaluate it is to print it, in my experience.

image

P.S. Here's a preview of your image with my CMYK profile (attached). I'll print it on the A1 Mini and see what it looks like.

ibgregr commented 2 months ago

I got that same preview, of course, when I used your profile. And that will likely print fairly well but just does not have the shading I want. And the printed result will likely not resemble the preview very closely, as you know. Again, that's what is frustrating to me: I can tweak profiles all day long, but I cannot count on the preview to show me what I will actually get. If it did, then tweaking these profiles would be a lot easier.

Another thing I noticed a while back is that Lithophanemaker.com prints the color layers in a somewhat different order than PIXEstL does. And like gaugo87 stated, blue on top of yellow yields slightly different results than yellow on top of blue, etc. So some of the colors seem to come out a bit better with that tool (using CMYK of course). But it appears that the processing on LM super-saturates some of the colors when it massages the input photo.

My goal, which I don't think I will ever achieve, would be to take a photo and do ZERO touch up on it and feed that into PIXEstL with a profile that "just works". The ultimate thing would be for me to be able to provide a profile with ALL of my colors/layers and have it decide which 4 or 8 of those filaments to use to get the best results (I know..I'm dreaming).

dts350z commented 2 months ago

One issue with this approach is that the luminosity information is being printed in BOTH the color layers and the texture layer. This is going to result in some washed-out color when combining the color and texture. I have a Python GUI app to address that, and it also has some controls to boost/adjust color and attempt to deal differently with black-background (with hidden color) astro photos vs. daytime photos. It is not 100% but I will share it as is for now (perhaps later today). I also have some command line Python, and a process for determining which filaments (out of all the ones you have on hand) to use for a given image. More on that (and sharing) later as well.

MadMax389 commented 2 months ago

My goal, which I don't think I will ever achieve, would be to take a photo and do ZERO touch up on it and feed that into PIXEstL with a profile that "just works". The ultimate thing would be for me to be able to provide a profile with ALL of my colors/layers and have it decide which 4 or 8 of those filaments to use to get the best results (I know..I'm dreaming).

Certainly a worthy goal. It sounds like @dts350z's work might help with that. My goal is a bit different. I like to share my lithos on MW and want the best looking litho possible with just CMYK colors, yielding reasonable slice and print times.

dts350z commented 2 months ago

For instance, here's our test image with all the luminance information removed:

test-color-layer

I would propose that that version be used as the input for the color layers, and the original be used for texture. Replacing the Gray pixels, in the color only version, is one thing that needs some work in my python app.

MadMax389 commented 2 months ago

Another thing I noticed a while back is the Lithophanemaker.com prints the color layers in a somewhat different order than PIXEstL does. And like gaugo87 stated, blue on top of yellow yields slightly different results that yellow on top of blue, etc. So some of the colors seem to come out a bit better with that tool (using CMYK of course). But it appears that the processing on LM super saturates some of the colors when it massages the input photo.

I never had very good results with LM.com's profiles (example image attached).

ibgregr commented 2 months ago

Another thing I noticed a while back is the Lithophanemaker.com prints the color layers in a somewhat different order than PIXEstL does. And like gaugo87 stated, blue on top of yellow yields slightly different results that yellow on top of blue, etc. So some of the colors seem to come out a bit better with that tool (using CMYK of course). But it appears that the processing on LM super saturates some of the colors when it massages the input photo.

I never had very good results with LM.com's profiles. image

I totally agree. I wasn't trying to imply that LM did a better job by any means. Just that it has a different approach in laying down the colors on each layer. I'm not sure what impact this ordering may have (or be having) on the preview vs printed colors that we see.

dts350z commented 2 months ago

Sharing the python app to generate a color layer only version of images. FYI this was written with lots of AI help.

colorlayer 3.0 - release.zip

MadMax389 commented 2 months ago

Sharing the python app to generate a color layer only version of images. FYI this was written with lots of AI help.

colorlayer 3.0 - release.zip

Lol, I'm sure the code, math and color theory knowledge are well beyond my abilities.

dts350z commented 2 months ago

Sharing the python app to generate a color layer only version of images. FYI this was written with lots of AI help. colorlayer 3.0 - release.zip

Lol, I'm sure the code, math and color theory knowledge are well beyond my abilities.

Well it can also be done in Photoshop or Affinity, but yeah I learned a ton working on this.

ibgregr commented 2 months ago

For instance, here's our test image with all the luminance information removed:

test-color-layer

I would propose that that version be used as the input for the color layers, and the original be used for texture. Replacing the Gray pixels, in the color only version, is one thing that needs some work in my python app.

I printed the test picture above again after removing the luminance. It is the best print yet. Can you explain the different settings and the sliders and how/when to use them?

Also, can you post the tool that tries to determine which filaments to use as well?

Great work!!

dts350z commented 2 months ago

OK, sure. In the Colorlayer Python app, by default it just replaces all the luminance values with 50% in LAB Color space.

The Lightness Threshold and Chroma Threshold sliders can be used to replace colors with either low lightness or low chroma values with "white" (or black, if the black option is checked). Low lightness means "these colors are really dark, so just print them with black/max layers of gray or whatever"; low chroma means "these colors are really white" (or black, for night scenes).

Once you have those where you want them, the rest of the sliders are to boost the color and/or change the Hue, to better match the original, or make up for the colors getting washed out in printing below the texture layer.

The Preview checkbox gives you a view of what the final print would look like, and "Display Original" lets you compare to the original image.
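
For reference, the core of that default step plus the two thresholds fits in a few lines. This is a simplified sketch of the idea (flatten L in LAB, optionally push low-lightness/low-chroma pixels to white or black), not the actual colorlayer app code, and the function name is just a placeholder:

```python
import cv2
import numpy as np

def flatten_luminance(path, out_path, l_thresh=0, c_thresh=0, use_black=False):
    """Replace lightness with a constant 50% in LAB so the color layers only
    carry chroma, and optionally push very dark / very desaturated pixels to
    white (or black). Simplified sketch, not the colorlayer app itself."""
    img = cv2.imread(path)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, A, B = cv2.split(lab)
    chroma = np.sqrt((A - 128) ** 2 + (B - 128) ** 2)   # distance from neutral

    out = lab.copy()
    out[..., 0] = 128                                   # L fixed at 50%; the texture layer keeps the luminance
    mask = (L < l_thresh) | (chroma < c_thresh)         # "don't bother coloring these" pixels
    out[mask] = (0, 128, 128) if use_black else (255, 128, 128)

    cv2.imwrite(out_path, cv2.cvtColor(out.astype(np.uint8), cv2.COLOR_LAB2BGR))

# flatten_luminance("original.jpg", "color-only.png", l_thresh=30, c_thresh=10)
```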

Which filaments to use (multi AMS) to follow later today.

MadMax389 commented 2 months ago

Excuse my glaring ignorance, but how do I run the script? I've installed python but can't seem to get it to run properly from a command line.

Thanks in advance.

dts350z commented 2 months ago

python name-of-script.py

Beyond that, please post error message. Probably additional python packages need to be installed (via pip or conda, depending on your python install).

MadMax389 commented 2 months ago

Installed v3.12.4 from python.org for Win 64. I seem to be missing the cv2 module (error screenshot attached).

dts350z commented 2 months ago

pip install opencv-python

There will likely be more packages needed.

numpy (pip install numpy)
tkinter (should be included with Python)
PIL (pip install pillow)

dts350z commented 2 months ago

Here is the python and workflow for filament selection. It may not be very applicable for only 1 AMS(?).

Filament Selection.pdf

ibgregr commented 2 months ago

Here is the python and workflow for filament selection. It may not be very applicable for only 1 AMS(?).

Filament Selection.pdf

I'm working through this process now to see what I get. I think this is just as applicable for 1 AMS, as using CMYK may not always be optimal, so this would get you the best 4 filaments to use.

dts350z commented 2 months ago

FYI I can run the "all bambu PLA Basic and Matte" filaments palette file, as long as I keep the layer count to 5 (can later be increased to 7, if you like, when filaments are reduced to 8). Layer count refers to -l PIXEstL command flag, not the number of layers defined in the palette file.

ibgregr commented 2 months ago

FYI I can run the "all bambu PLA Basic and Matte" filaments palette file, as long as I keep the layer count to 5 (can later be increased to 7, if you like, when filaments are reduced to 8). Layer count refers to -l PIXEstL command flag, not the number of layers defined in the palette file.

My current file has 24 filaments (Basic and Matte). I am using -l 5 as well and it detected 76380 colors. I have 20 threads running with 64GB RAM, so we will see how long this takes. I may see if it can handle -l 7 to see how much the results actually change (if it will run to completion, that is).

dts350z commented 2 months ago

FYI I can run the "all bambu PLA Basic and Matte" filaments palette file, as long as I keep the layer count to 5 (can later be increased to 7, if you like, when filaments are reduced to 8). Layer count refers to -l PIXEstL command flag, not the number of layers defined in the palette file.

My current file has 24 filaments (Basic and Matte). I am using -l 5 as well and it detected 76380 colors. I has 20 threads running with 64GB ram so we will see how long this takes. I may see if it can handle -l 7 to see how much the results actually change (if it will run to completion that is).

You may not need -Y. I don't know if that will speed things up if you don't use it. I started using it because I kept getting Java heap errors when I tried -l 7. I couldn't get a clear answer from various AIs as to how the max heap size is calculated, and it may depend on your flavor/version of Java, so I'm not sure if having more RAM will mean the heap size is larger. Anyway, if you get the heap size error you can try adding -Xmx24432m or pick your own value. I ran a small Java program to spit out the heap size, and it was "Maximum Heap Memory: 8144.0 MB" on my 32GB RAM machine.

ibgregr commented 2 months ago

FYI I can run the "all bambu PLA Basic and Matte" filaments palette file, as long as I keep the layer count to 5 (can later be increased to 7, if you like, when filaments are reduced to 8). Layer count refers to -l PIXEstL command flag, not the number of layers defined in the palette file.

My current file has 24 filaments (Basic and Matte). I am using -l 5 as well and it detected 76380 colors. I has 20 threads running with 64GB ram so we will see how long this takes. I may see if it can handle -l 7 to see how much the results actually change (if it will run to completion that is).

You may not need -Y. Don't know if that will speed things up if you don't use it. I started using it because I kept getting java heap errors, when I tried -l 7. I couldn't get a clear answer, from various AIs, as to how the max heap size is calculated, and it may depend on your flavor/version of java, so I'm not sure if having more RAM will mean the heap size is larger. Anyway if you get the heap size error you can try adding the -Xmx24432m or pick your own value. I ran a small java program to spit out the heap size, and it was Maximum Heap Memory: 8144.0 MB, on my 32GB RAM machine.

Well crap...forgot to change my timeout values and had them set to 15 minutes. :( I have bumped them up to an hour to be safe, so it's off to the races again. I also verified that my JVM is 64-bit and have passed -Xmx32G for now. I've also left the -Y for now as I think that should really have negligible impact...especially on an NVMe SSD.

ibgregr commented 2 months ago

I actually misstated the number of filaments I'm running with. It's actually 32 and not 24:

  "name": "White-Basic",
  "name": "Black-Basic",
  "name":"Magenta-Basic",
  "name":"Cyan-Basic",
  "name":"Yellow-Basic",
  "name":"Beige-Basic",
  "name":"Red-Basic",
  "name":"Bambu_Green-Basic",
  "name":"Blue_Gray-Basic",
  "name":"Gray-Basic",
  "name":"Blue-Basic",
  "name":"Brown-Basic",
  "name":"Mistletoe_Green-Basic",
  "name":"Orange-Basic",
  "name":"Gold-Basic",
  "name":"Purple-Basic",
  "name":"Pink-Basic",
  "name":"Silver-Basic",
  "name":"Ice_Blue-Matte",
  "name":"Sakura_Pink-Matte",
  "name":"Latte_Brown-Matte",
  "name":"Dark_Brown-Matte",
  "name":"Scarlett_Red-Matte",
  "name":"Grass_Green-Matte",
  "name":"Mandarin_Orange-Matte",
  "name":"Lemon_Yellow-Matte",
  "name":"Desert_Tan-Matte",
  "name":"Marine_Blue-Matte",
  "name":"Dark_Red-Matte",
  "name":"Ash_Gray-Matte",
  "name":"Dark_Green-Matte",
  "name":"Dark_Blue-Matte",

And with -l 5 that generates the 76380 colors. I'll be curious as to how many -l 7 generates.

dts350z commented 2 months ago

This was a 32 filament run (-l 5):

"C:\Program Files\Zulu\zulu-17\bin\java.exe" -Xmx24432m -jar PIXEstL.jar -p "D:\Glenn\Downloads\PIXEstL-0.3.0\new-bambu-all-true.json" -w 100 -cW 0.4 2 -l 5 -f 0.24 -b 0.1 -Z false -Y -i "C:\Users\Glenn\OneDrive\Pictures\test.jpg" Palette generation... (122301 colors found) Calculating color distances with the image... Nb color used=11800 Generating previews... Generating STL files... Layer[0.0] :Black[PLA Basic], Dark Blue[PLA Matte], Dark Green[PLA Matte], Blue Gray[PLA Basic], Dark Brown[PLA Matte], Blue[PLA Basic], Misletoe Green[PLA Basic], Brown[PLA Basic], Gray[PLA Basic], Ash Gray[PLA Matte], Silver[PLA Basic], Bambu Green[PLA Basic], Purple[PLA Basic], Dark Red[PLA Matte], Marine Blue[PLA Matte], Red[PLA Basic], Grass Green[PLA Matte], Latte Brown[PLA Matte], Cyan[PLA Basic], Scarlet Red[PLA Matte], Ice Blue[PLA Matte], Gold[PLA Basic], Desert Tan[PLA Matte], Sakura Pink[PLA Matte], Magenta[PLA Basic], Pink[PLA Basic], Beige[PLA Basic], Lemon Yellow[PLA Matte], Mandarin Orange[PLA Matte], Yellow[PLA Basic], Orange[PLA Basic], White[PLA Basic]

GENERATION COMPLETE ! (554242 ms)

No timeouts.

MadMax389 commented 2 months ago

pip install opencv-python

There will likely be more packages needed.

numpy (pip install numpy) tkinter (should be included with python) PIL (pip install pillow)

Working now, thanks. Will try some of my test images when I get a chance.

ibgregr commented 2 months ago

This was a 32 filament run (-l 5):

"C:\Program Files\Zulu\zulu-17\bin\java.exe" -Xmx24432m -jar PIXEstL.jar -p "D:\Glenn\Downloads\PIXEstL-0.3.0\new-bambu-all-true.json" -w 100 -cW 0.4 2 -l 5 -f 0.24 -b 0.1 -Z false -Y -i "C:\Users\Glenn\OneDrive\Pictures\test.jpg" Palette generation... (122301 colors found) Calculating color distances with the image... Nb color used=11800 Generating previews... Generating STL files... Layer[0.0] :Black[PLA Basic], Dark Blue[PLA Matte], Dark Green[PLA Matte], Blue Gray[PLA Basic], Dark Brown[PLA Matte], Blue[PLA Basic], Misletoe Green[PLA Basic], Brown[PLA Basic], Gray[PLA Basic], Ash Gray[PLA Matte], Silver[PLA Basic], Bambu Green[PLA Basic], Purple[PLA Basic], Dark Red[PLA Matte], Marine Blue[PLA Matte], Red[PLA Basic], Grass Green[PLA Matte], Latte Brown[PLA Matte], Cyan[PLA Basic], Scarlet Red[PLA Matte], Ice Blue[PLA Matte], Gold[PLA Basic], Desert Tan[PLA Matte], Sakura Pink[PLA Matte], Magenta[PLA Basic], Pink[PLA Basic], Beige[PLA Basic], Lemon Yellow[PLA Matte], Mandarin Orange[PLA Matte], Yellow[PLA Basic], Orange[PLA Basic], White[PLA Basic]

GENERATION COMPLETE ! (554242 ms)

No timeouts.

Ok..so here's my brain getting thrown into a tizzy again. I'm sitting here thinking through the logic/workflow for the filament selection. I'm thinking there's a flaw in this method but it could just be my ignorance and/or overthinking. So we run through with ALL of the "in stock" filaments set to true and get a list in the order of use. Then we go back and end up with 8 of them. The issue I see is that even a filament at the low end of the list could be critical in the generation of a color used throughout the image as part of the layering. But this current workflow would not really take that into account...or at least in my mind it would not. Am I not thinking about this right?

dts350z commented 2 months ago

I don't disagree. It's the K-means clustering issue all over again. You get what you need for "Most" of the image, but then there's the little patch of pink or red or whatever, that is also needed.

I think we would need to instrument PIXEstL more to dig deeper, or the resultant stl files to be more closely examined (stats by layers?). But my hope is this is an improvement over just guessing. Please let us know what happens with real prints. I've had good results.

I'm hoping "a filament at the low end of the list could be critical in the generation of a color used throughout" would mean that it also had a lot of points, so wouldn't be at the end of the list...

ibgregr commented 2 months ago

I don't disagree. It's the K-means clustering issue all over again. You get what you need for "Most" of the image, but then there's the little patch of pink or red or whatever, that is also needed.

I think we would need to instrument PIXEstL more to dig deeper, or the resultant stl files to be more closely examined (stats by layers?). But my hope is this is an improvement over just guessing. Please let us know what happens with real prints. I've had good results.

I'm hoping "a filament at the low end of the list could be critical in the generation of a color used throughout" would mean that it also had a lot of points, so wouldn't be at the end of the list...

I will let you know. I hope to be able to do a test print later tonight or at least tomorrow using the colors "recommended" by the workflow.

dts350z commented 2 months ago

I have something working here that measures the "variance" of the points in each model. It should be a measure of how spread out the points are vs. clustered together. This is to address your concern that a low-point-count filament would be used in lots of colors (lots of locations in the model).

Sorting by total variance (most spread out to least) does give a different selection/order of filaments:

layer-Yellow[PLA Basic].stl, Total Variance: 2.227245632781298
layer-Black[PLA Basic].stl, Total Variance: 2.191031058364791
layer-White[PLA Basic].stl, Total Variance: 2.11349654443131
layer-Orange[PLA Basic].stl, Total Variance: 2.0467176006078045
layer-Beige[PLA Basic].stl, Total Variance: 2.032578702840013
layer-Gold[PLA Basic].stl, Total Variance: 1.908477342658345
layer-Sakura Pink[PLA Matte].stl, Total Variance: 1.8661138576968204
layer-Ice Blue[PLA Matte].stl, Total Variance: 1.6781400281821526
layer-Bambu Green[PLA Basic].stl, Total Variance: 1.675766108652483
layer-Purple[PLA Basic].stl, Total Variance: 1.6456236740814292
layer-Magenta[PLA Basic].stl, Total Variance: 1.6348585603812216
layer-Cyan[PLA Basic].stl, Total Variance: 1.6199489923450274
layer-Grass Green[PLA Matte].stl, Total Variance: 1.6104828210143778
layer-Brown[PLA Basic].stl, Total Variance: 1.5465478371629708
layer-Gray[PLA Basic].stl, Total Variance: 1.5444784490605987
layer-Misletoe Green[PLA Basic].stl, Total Variance: 1.4895372284506256
layer-Red[PLA Basic].stl, Total Variance: 1.4641360648269712
layer-Silver[PLA Basic].stl, Total Variance: 1.4532448652222552
layer-Blue[PLA Basic].stl, Total Variance: 1.2672527511629563

But it seems to me maybe we want a "score" which would be the product of variance and point count. If I do that, I get basically the "original" selection/order.

layer-White[PLA Basic].stl, Total Variance: 2.1135, Points: 93371, Product: 197339.2859
layer-Ice Blue[PLA Matte].stl, Total Variance: 1.6781, Points: 51264, Product: 86028.1704
layer-Sakura Pink[PLA Matte].stl, Total Variance: 1.8661, Points: 44784, Product: 83572.0430
layer-Beige[PLA Basic].stl, Total Variance: 2.0326, Points: 35374, Product: 71900.4390
layer-Gold[PLA Basic].stl, Total Variance: 1.9085, Points: 36877, Product: 70378.9190
layer-Yellow[PLA Basic].stl, Total Variance: 2.2272, Points: 29823, Product: 66423.1465
layer-Gray[PLA Basic].stl, Total Variance: 1.5445, Points: 40524, Product: 62588.4447
layer-Bambu Green[PLA Basic].stl, Total Variance: 1.6758, Points: 36288, Product: 60810.2006
layer-Misletoe Green[PLA Basic].stl, Total Variance: 1.4895, Points: 34957, Product: 52069.7529
layer-Orange[PLA Basic].stl, Total Variance: 2.0467, Points: 17762, Product: 36353.7980
layer-Brown[PLA Basic].stl, Total Variance: 1.5465, Points: 16692, Product: 25814.9765
layer-Grass Green[PLA Matte].stl, Total Variance: 1.6105, Points: 14671, Product: 23627.3935
layer-Cyan[PLA Basic].stl, Total Variance: 1.6199, Points: 13894, Product: 22507.5713
layer-Magenta[PLA Basic].stl, Total Variance: 1.6349, Points: 8439, Product: 13796.5714
layer-Silver[PLA Basic].stl, Total Variance: 1.4532, Points: 8420, Product: 12236.3218
layer-Purple[PLA Basic].stl, Total Variance: 1.6456, Points: 6853, Product: 11277.4590
layer-Black[PLA Basic].stl, Total Variance: 2.1910, Points: 4244, Product: 9298.7358
layer-Blue[PLA Basic].stl, Total Variance: 1.2673, Points: 4942, Product: 6262.7631
layer-Red[PLA Basic].stl, Total Variance: 1.4641, Points: 3418, Product: 5004.4171

Thoughts?
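
In case it helps anyone reproduce or tweak the scoring, this is roughly the shape of that calculation: a sketch using the numpy-stl package, where the exact variance normalization used above is probably different, but the score = spread x point count idea is the same.

```python
import glob
import numpy as np
from stl import mesh  # pip install numpy-stl

def score_layers(stl_dir):
    """For each per-filament layer STL, measure how spread out its geometry is
    (variance of vertex positions in the XY plane) and weight that by how much
    of it there is (vertex count). Higher score ~= the filament matters more."""
    scores = []
    for path in sorted(glob.glob(f"{stl_dir}/layer-*.stl")):
        verts = mesh.Mesh.from_file(path).vectors.reshape(-1, 3)  # all triangle vertices (duplicates included)
        xy = verts[:, :2]                        # spread across the plate; ignore height
        variance = xy.var(axis=0).sum()          # spread in X + spread in Y
        scores.append((path, variance, len(xy), variance * len(xy)))
    return sorted(scores, key=lambda s: s[3], reverse=True)

for path, var, pts, prod in score_layers("output_stls"):
    print(f"{path}, Total Variance: {var:.4f}, Points: {pts}, Product: {prod:.4f}")
```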

ibgregr commented 2 months ago

I have something working here that measures the "variance" of the points in each model. It should be a measure of how spread out the points are, vs. clustered together. This is to address you concern that a low point count filament would be used in lots of colors (lots of locations in the model).

Sorting by total variance (most spread out to least) does give a different selection/order of filaments:

layer-Yellow[PLA Basic].stl, Total Variance: 2.227245632781298 layer-Black[PLA Basic].stl, Total Variance: 2.191031058364791 layer-White[PLA Basic].stl, Total Variance: 2.11349654443131 layer-Orange[PLA Basic].stl, Total Variance: 2.0467176006078045 layer-Beige[PLA Basic].stl, Total Variance: 2.032578702840013 ayer-Gold[PLA Basic].stl, Total Variance: 1.908477342658345 layer-Sakura Pink[PLA Matte].stl, Total Variance: 1.8661138576968204 layer-Ice Blue[PLA Matte].stl, Total Variance: 1.6781400281821526 layer-Bambu Green[PLA Basic].stl, Total Variance: 1.675766108652483 layer-Purple[PLA Basic].stl, Total Variance: 1.6456236740814292 layer-Magenta[PLA Basic].stl, Total Variance: 1.6348585603812216 layer-Cyan[PLA Basic].stl, Total Variance: 1.6199489923450274 layer-Grass Green[PLA Matte].stl, Total Variance: 1.6104828210143778 layer-Brown[PLA Basic].stl, Total Variance: 1.5465478371629708 layer-Gray[PLA Basic].stl, Total Variance: 1.5444784490605987 layer-Misletoe Green[PLA Basic].stl, Total Variance: 1.4895372284506256 layer-Red[PLA Basic].stl, Total Variance: 1.4641360648269712 layer-Silver[PLA Basic].stl, Total Variance: 1.4532448652222552 layer-Blue[PLA Basic].stl, Total Variance: 1.2672527511629563

But it seems to me maybe we want a "score" which would be the product of variance and point count. If I do that, I get basically the "original" selection/order.

layer-White[PLA Basic].stl, Total Variance: 2.1135, Points: 93371, Product: 197339.2859 layer-Ice Blue[PLA Matte].stl, Total Variance: 1.6781, Points: 51264, Product: 86028.1704 layer-Sakura Pink[PLA Matte].stl, Total Variance: 1.8661, Points: 44784, Product: 83572.0430 layer-Beige[PLA Basic].stl, Total Variance: 2.0326, Points: 35374, Product: 71900.4390 layer-Gold[PLA Basic].stl, Total Variance: 1.9085, Points: 36877, Product: 70378.9190 layer-Yellow[PLA Basic].stl, Total Variance: 2.2272, Points: 29823, Product: 66423.1465 layer-Gray[PLA Basic].stl, Total Variance: 1.5445, Points: 40524, Product: 62588.4447 layer-Bambu Green[PLA Basic].stl, Total Variance: 1.6758, Points: 36288, Product: 60810.2006 layer-Misletoe Green[PLA Basic].stl, Total Variance: 1.4895, Points: 34957, Product: 52069.7529 layer-Orange[PLA Basic].stl, Total Variance: 2.0467, Points: 17762, Product: 36353.7980 layer-Brown[PLA Basic].stl, Total Variance: 1.5465, Points: 16692, Product: 25814.9765 layer-Grass Green[PLA Matte].stl, Total Variance: 1.6105, Points: 14671, Product: 23627.3935 layer-Cyan[PLA Basic].stl, Total Variance: 1.6199, Points: 13894, Product: 22507.5713 layer-Magenta[PLA Basic].stl, Total Variance: 1.6349, Points: 8439, Product: 13796.5714 layer-Silver[PLA Basic].stl, Total Variance: 1.4532, Points: 8420, Product: 12236.3218 layer-Purple[PLA Basic].stl, Total Variance: 1.6456, Points: 6853, Product: 11277.4590 layer-Black[PLA Basic].stl, Total Variance: 2.1910, Points: 4244, Product: 9298.7358 layer-Blue[PLA Basic].stl, Total Variance: 1.2673, Points: 4942, Product: 6262.7631 layer-Red[PLA Basic].stl, Total Variance: 1.4641, Points: 3418, Product: 5004.4171

Thoughts?

That's interesting. But as I sit here and continue to try to think through this I'm not sure the approach we currently have is the right one. We do not have the compute power to do what I think would yield the best results. That process would be first of all to define how many filament colors you want to use. In our current case 8. Then it would need to go through all the combinations of 8 filament colors and process the layers and determine color matches. It would then come up with a score for each combination and the set of 8 with the highest score would be the 8 filaments that gave the best color matching and in theory would yield the best print results. But, when using a palette of 32 filaments there are 2,629,575 unique combinations and even if it only took 5 minutes per combination to process it would be roughly 25 years. LOL

So, at the moment, the workflow you provided, along with some user perception and desires, looks to be the best solution for now.

ibgregr commented 2 months ago

I don't disagree. It's the K-means clustering issue all over again. You get what you need for "Most" of the image, but then there's the little patch of pink or red or whatever, that is also needed. I think we would need to instrument PIXEstL more to dig deeper, or the resultant stl files to be more closely examined (stats by layers?). But my hope is this is an improvement over just guessing. Please let us know what happens with real prints. I've had good results. I'm hoping "a filament at the low end of the list could be critical in the generation of a color used throughout" would mean that it also had a lot of points, so wouldn't be at the end of the list...

I will let you know. I hope to be able to do a test print later tonight or at least tomorrow using the colors "recommended" by the workflow.

Well...I did a print overnight using the filament selection that the workflow provided. It resulted in a somewhat decent representation but it was a bit washed out. Or maybe it's because all of my other attempts are oversaturated and I've grown accustomed to it. It just continues to confirm that this is not a science and we are nowhere near plug and play! LOL

dts350z commented 2 months ago

Was your overnight print using a color layer with the luminance removed, or with the original image? Removing the luminance info from the color layer should result in fewer colors to match and fewer layers needed (which should lead to more accurate colors). But yeah, still a little more art than science.

ibgregr commented 2 months ago

Was your overnight print using a color layer with the luminance removed, or with the original image? Removing the luminance info from the color layer should result in fewer colors to match and fewer layers needed (which should lead to more accurate colors). But yeah, still a little more art than science.

Yes...it was processed with the colorlayer python script first to remove luminance. The colors looked ok but there was just this washed out look. Not vibrant. If it were my first print I might have said "cool". But since I've seen better results with other settings, etc that was not the case. The print I did before this one last night was with luminance removed but using the same colors I had been using previously and it turned out really nice. So removing the luminance will become part of my normal workflow.

dts350z commented 2 months ago

In the python app, the 3rd and 4th sliders (from the top) allow for "boosting" the color.

Also, on my comments about fewer colors fewer layers; this data is interesting (This is your lighthouse test image):

color layers comparison chart

So I would say use 5 layers (if fewer = more accurate / less washed out) or 6 if you want to maximize colors used, but not 7.

ibgregr commented 2 months ago

In the python app, the 3rd and 4th sliders (from the top) allow for "boosting" the color.

Also, on my comments about fewer colors fewer layers; this data is interesting (This is your lighthouse test image):

color layers comparison chart

So I would say use 5 layers (if fewer = more accurate / less washed out) or 6 if you want to maximize colors used, but not 7.

Yes..I saw the 'boost' sliders and I believe I did boost the saturation. But...this data above IS interesting. I was actually playing around with layers last night looking at the difference in the previews that were generated. I was not doing that testing scientifically like you did but I did notice that fewer layers could still generate a very acceptable preview. Just do not know how that translates into an actual print. I may try a 6 layer print tonight. But, I'm thinking part of the "washed out" part is just from the colors it chose for me to use:

White-Basic, Beige-Basic, Bambu_Green-Basic, Pink-Basic, Ice_Blue-Matte, Sakura_Pink-Matte, Lemon_Yellow-Matte, Scarlett_Red-Matte

With the exception of the Scarlett Red, Pink, and Bambu Green, the colors are fairly "pale", so I'm not sure I will see the "pop" I do when using more vibrant filament colors. The areas in that photo that give a lot of problems are the yellow flower tips and the butterflies; I get everything else looking good, but those two things are not. Using the colors below, after removing the luminance, has given me what I think is the best print result so far:

White-Basic, Cyan-Basic, Yellow-Basic, Beige-Basic, Mistletoe_Green-Basic, Ice_Blue-Matte, Scarlett_Red-Matte

Those are what I've been using for a while now for printing this specific photo while tweaking the palette and it's only 7 colors. I found I didn't really need Magenta for this print. I guess the Cyan must play a large part in making some of the colors pop more as that's really the only color that is "different".

So basically I'm back to shrugging my shoulders and saying "I dunno". LOL

dts350z commented 2 months ago

You must have calculated this to reach your time estimate, but there are "only" 10,518,300 possible combinations of 8 filaments out of the 32 Bambu Basic and Matte filaments you have characterized ;0)
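
For the record, both counts are easy to sanity-check, and at roughly 5 minutes per PIXEstL run C(32,8) works out to about a century of compute, so brute force really is off the table:

```python
import math

combos = math.comb(32, 8)              # choose 8 filaments out of 32
print(combos)                          # 10518300
print(math.comb(31, 7))                # 2629575 -- the earlier figure matches C(31, 7)
years = combos * 5 / 60 / 24 / 365.25  # ~5 minutes per run
print(round(years))                    # ~100 years
```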

I looked at some different Python color libs yesterday, but nothing useful has popped out yet. I may go back to trying to render the STL models for preview before printing, this time in Blender. Blender has a pretty mountainous learning curve but is scriptable with Python (and free). How well it mimics the real-world results is yet to be seen.