gaugo87 / PIXEstL

A program for creating color lithophanes and pixel images
MIT License

Please explain how the layer numbers work #12

Open ibgregr opened 6 months ago

ibgregr commented 6 months ago

@gaugo87 - Can you please go into some detail about how the layers in the JSON file work. My understanding was that they corresponded to the color layer being printed but that does not seem to be the case. For example, I set a Gray filament to only have layers 1 and 2 defined in the JSON file.

"#8E9089": { "name": "Gray[PLA Basic]", "active": true, "layers": { "2": { "hexcode": "#938C79" }, "1": { "hexcode": "#BAB8A3" } } },

But when I generate the STLs, load them into Bambu Studio, and slice, I see that Gray being used on color layers 6 and 7 (actual layers 7 and 8) when I expected to see it on color layers 1 and 2 (actual layers 2 and 3). I'm sure I must be oversimplifying things and would really like to have a better understanding for future tuning of the palette. Also, what causes the preview that is generated to be so far off from the printed results in some cases?

Any detail with some examples would be great!

Thanks! Greg

dts350z commented 6 months ago

This is how I "think" it works, for what it's worth.

The "Layers" in the palette file don't correspond to layers in the STL, They just mean to give values to the program, for how many layers of filament define a given color.

You have defined two colors of gray: one that is one layer thick and one that is two layers thick. The program can then use those two colors, by themselves (with white on all the other layers) or in combination with other defined colors, to make new colors, e.g. add some gray to yellow to make a yellow that is darker in luminosity.

As such, I think there could be one pixel on any given layer in the Gray STL, with zero or one more pixels in the layer below (and possibly other colors in layers above or below, in each color's corresponding STL file).
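
To make that concrete, here's a toy enumeration (palette names and hex values invented, not taken from the real palette file) of the stacks the program could choose from:

```python
from itertools import product

# Hypothetical palette: filament -> {layer count: resulting color}
palette = {
    "Gray":   {1: "#BAB8A3", 2: "#938C79"},
    "Yellow": {1: "#F8F4A0", 2: "#F4EE2A"},
}

MAX_COLOR_LAYERS = 5  # total color-layer budget; unused layers stay white

# Each filament contributes 0 layers (unused) or one of its defined thicknesses
options = [[0, *sorted(layer_map)] for layer_map in palette.values()]
for combo in product(*options):
    if 0 < sum(combo) <= MAX_COLOR_LAYERS:
        stack = {name: n for name, n in zip(palette, combo) if n}
        print(stack)  # e.g. {'Gray': 1, 'Yellow': 2} -> a darker yellow
```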

The original palette file (version 3.0 of the program) has:

"#000000": { "name": "Black[PLA Basic]", "active": true, "layers": { "5": { "H": 0, "S": 0, "L": 10 }, "4": { "H": 0, "S": 0, "L": 20 }} },

My interpretation is that it defines two black colors: one that is 4 layers thick and one that is 5 layers thick.

In the Black STL file, any pixel could have no black, 4 layers of black, or 5 layers of black. Those could start at any layer, but are always either 4 or 5 layers deep.

I hope that is both clear and correct.

I am stuck in the same place as you, in that the preview doesn't tell you much about how the final print will come out, and it is frustrating to spend 4-6 hours (printing only the color layers and base) only to find out it's wrong. I have searched for a "color lithophane viewer" without much luck. It might be doable in Blender, but that is such a beast of a learning curve if you're not a Blender user that someone would need to write a tutorial or something. I plan to try with Python (see the query result below from Microsoft Copilot):

You can use the `vedo` library in Python to load, color, and render different STL files together. Here's a basic example of how you can do this:

```python
from vedo import load, show, Light

# Load the STL files and apply colors
stl1 = load('path_to_your_first_stl_file.stl').color('red')
stl2 = load('path_to_your_second_stl_file.stl').color('blue')
# ... load more STL files as needed

# Create a light source
light = Light('white', position=(0, 0, 0))

# Show the STL files with the light source
show(stl1, stl2, light)
```

In this example, replace 'path_to_your_first_stl_file.stl' and 'path_to_your_second_stl_file.stl' with the actual paths to your STL files. The color() function is used to apply colors to the STL files; you can replace 'red' and 'blue' with any color you like.

The Light class is used to create a light source. You can adjust the position of the light source by changing the values in the position parameter.

Finally, the show() function is used to display the STL files and the light source. You can add more STL files to the show() function as needed.

Please note that the actual appearance of the STL files may vary depending on your computer's graphics settings. For more information and to explore further features of vedo, please refer to the official documentation.

So far I have just got to the point where I've loaded one layer, but I plan to keep trying.

If we had a viewer, I think we could understand PIXEstL better and therefore make palette adjustments faster, vs. actually printing each experiment.

ibgregr commented 6 months ago

That is an EXCELLENT explanation of the layers and makes total sense now that I read it. I was just being too literal in my digestion of the JSON file. :) I knew it was using layers to get the different shades, etc., but for some reason I was being thick-headed about the layer number in the JSON file being the layer number in the STL.

I will take a look at the vedo library and see what it can do. I still do not quite understand why the preview does not yield the results we expect. If it's using the color codes from the JSON file, then it should be able to display a fairly close representation of what the printed results would be. And if it did that, then it would just be a matter of having great calibrations of your filament library. If the Java code were documented with comments, I would probably tinker around with it, but without that I would just be stabbing in the dark. :) I totally agree...doing 6-hour reprints trying to tweak the colors gets old fast!

Thanks again for the layer explanation. That helps A LOT. I'm finding that the darker filament colors are too "strong" and I will end up just using 1 or 2 layers for those. I have a couple I did not even put in the JSON file since anything more than one layer was useless. I will now go back and add those as well with just 1 or 2 layers.

Greg

ibgregr commented 6 months ago

OK, so I just played around with the vedo library. I can get it to load the files, but it ends up looking just like it does in Bambu Studio.

```python
#!/usr/bin/env python3

from vedo import show, load, colors

# Paths to your STL files
file_paths = ["beige.stl","cyan.stl","gray.stl","iceblue.stl","magenta.stl","red.stl","white.stl","yellow.stl","plate.stl"]

# Define colors for each stl file
#I used the default codes for the Bambu Filaments...but not sure if that's what should be used since the calibrated
#values can be totally different.  
colors_list = ["#F7E6DE", "#008606", "#8E9089", "#A3D8E1", "#EC008C", "#C12E1F", "#FFFFFF", "#F4EE2A", "#FFFFFF"] 

# Load each STL file and assign a color
objects = []
for file_path, color in zip(file_paths, colors_list):
    obj = load(file_path)
    obj.color(color)
    objects.append(obj)

# Display all the objects together
show(objects)
```

I noticed your script is using some totally different calls, which might display the results differently, but it did not work for me as is. I came across this other example while trying to troubleshoot what you provided. I just wanted to start by getting the library to display something recognizable...which this did. I will continue to look into this a bit more and see what I can find.

dts350z commented 6 months ago

Re: vedo, and any other lithophane preview software, we want to look at the model with a light source shining through it, not reflecting off it.

It doesn't surprise me that the AI-provided syntax doesn't work. That has been my experience with both Microsoft's and Google's LLM AIs that are supposed to be able to write code. They often mix syntax from different languages, and otherwise give code suggestions that don't compile or run. Microsoft seems to be getting closer, but still...

But please post any progress!

dts350z commented 6 months ago

This is where I am with vedo:

```python
from vedo import *

# Paths to your STL files
file_paths = ["./layer-Beige.stl","./layer-Cyan.stl"]

# Define colors for each stl file
colors_list = ["#F7E6DE", "#008606"]

# Create a point, at the light source position
p1 = Point([75,75,-50], c='b')

# Create a light source, set the color and position
light = Light(p1, c='w')

# Load each STL file and assign a color
objects = []
for file_path, color in zip(file_paths, colors_list):
    obj = load(file_path)
    obj.color(color)
    objects.append(obj)

# Show the STL files with the light source
show(objects, light, p1)
```

If you rotate around that you will see the light source lighting up the back side of the model, and it seems like some color is showing through on the front side. I'm seeing a green color, rather than cyan, however.

Some progress!

Maybe we should set the STL color to the color of one layer, vs. the color reported by the manufacturer? Will try that next.

ibgregr commented 6 months ago

Yeah, I've tried about 17 different gyrations of different approaches with the help of ChatGPT. Nothing really useful yet. I've been loading everything except the texture STL file. In some cases I definitely saw color. And in one case it had to be doing some blending, as I saw green when that was not one of the colors in the list. So I still have hopes for this, but I'm not sure it's blending the layers like we expect. But then again, I'm not 100% sure how PIXEstL does this either, since in the slicer we assign a filament color...but PIXEstL uses calibration values. I'm sure I'm being thick-headed again trying to wrap my mind around all of this LOL.

I will pick up on this again tomorrow as time permits (between work stuff).

dts350z commented 6 months ago

There is a transparency setting in vedo. I'm just not sure how to apply it yet...
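
From a quick look at the docs it seems to be per-mesh, something like this (untested sketch; file name assumed):

```python
from vedo import load, show

# alpha() sets per-mesh opacity: 0 = fully transparent, 1 = opaque
mesh = load('layer-Cyan.stl').color('#0086D6').alpha(0.3)
show(mesh)
```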

dts350z commented 6 months ago

Another Python Viewer to try: Napari. Maybe with additive blending (compare all blending modes).

Oh, you'll need to install the stl exporter plugin.

ibgregr commented 6 months ago

Earlier, when you said you were seeing green instead of cyan, that was due to a typo in my colors_list. I had Cyan as "#008606" and it should have been "#0086D6". That's what I get for typing instead of using cut and paste.

I'm taking a look at napari now. I have it loading the files but when it opens the viewer I'm not seeing what I expect. But that is just my first stab. I did not require an "stl exporter". Or maybe I do and that's why I'm not getting expected results. Here is my current script:

```python
#!/usr/bin/env python3

import trimesh
import numpy as np
import napari

# Define file paths
file_paths = ["beige.stl", "cyan.stl", "gray.stl", "iceblue.stl", "magenta.stl", "red.stl", "white.stl", "yellow.stl", "plate.stl"]

# Define colors for layers 2 through 8 for each STL file
layer_colors = ["#F7E6DE", "#0086D6", "#8E9089", "#A3D8E1", "#EC008C", "#C12E1F", "#FFFFFF", "#F4EE2A", "#FFFFFF"]

# Create a viewer
viewer = napari.Viewer()

# Load each STL file and add to the viewer with specified color
for file_path, color in zip(file_paths, layer_colors):
    print("Loading " + file_path + " with color " + color)
    # Load STL file
    mesh = trimesh.load_mesh(file_path)
    # Convert mesh data to voxel data
    print("Converting " + file_path + " to voxel data")
    voxel_data = mesh.voxelized(2)  # Adjust voxel size as needed
    # Add voxel data to viewer as an image
    print("Adding " + file_path + " voxel data to viewer")
    viewer.add_image(voxel_data.matrix, name=file_path, colormap=color, blending='translucent')

# Show the viewer
print("showing viewer")
napari.run()
```

This was generated by AI so not sure if it is even trying to do what I want. LOL. I haven't read any of the napari info yet. But I will dig in again tomorrow.

I'm still not sure if this is going to give us what we are looking for. We are providing the default Bambu color values, just like we basically do in Bambu Studio. Using those values cannot show us what the printed result will look like, as the calibrated values are not being used at all. We really need a way to do the actual layering like the printer does, and I'm not sure any of the STL viewers will be able to do that.

ibgregr commented 6 months ago

Also, if you would like to take this conversation to email so that we are not flooding this project (at least until we have something that works), feel free to send me a message at: i b g r e g r at g m a i l dot c o m

If not..we will carry on here.

ibgregr commented 6 months ago

The napari code I listed above was creating the voxelized data at a pitch of 2, which leaves hardly any detail in the results. The lower you make that number, the longer it takes to load/convert the data. I tried 0.3, but with the size of my STL files I gave up after about 10 minutes. I'm actually using the files from a print I did today so I can compare those results. I renamed the layer files to just the color for easier handling. So I'm currently looking at the vedo code again.
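
For what it's worth, here is a rough way to see the pitch vs. cost tradeoff (hypothetical timing sketch; any of the layer STLs will do):

```python
import time
import trimesh

mesh = trimesh.load_mesh("gray.stl")  # assumed file name

# Smaller pitch = finer voxels = (much) slower and bigger
for pitch in (2.0, 1.0, 0.5):
    t0 = time.perf_counter()
    vox = mesh.voxelized(pitch)
    print(f"pitch={pitch}: grid shape {vox.matrix.shape}, "
          f"{time.perf_counter() - t0:.1f}s")
```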

Using this code:

```python
#!/usr/bin/env python3

from vedo import *

# Define file paths
file_paths = ["plate.stl", "beige.stl", "cyan.stl", "gray.stl", "iceblue.stl", "magenta.stl", "red.stl", "white.stl", "yellow.stl"]
#Define colors
layer_colors = ["#FFFFFF", "#F7E6DE", "#0086D6", "#8E9089", "#A3D8E1", "#EC008C", "#C12E1F", "#FFFFFF", "#F4EE2A"]

# Create a point for the light source position behind the objects
p1 = Point([75, 75, -50], c='white')
# Create a light source behind the objects
light_behind = Light(p1, c='white')

# Create a point for the light source position in front of the objects
p2 = Point([75, 75, 150], c='white')
# Create a light source in front of the objects
light_front = Light(p2, c='white')

# Load each STL file and assign a color
objects = []
for file_path, color in zip(file_paths, layer_colors):
    obj = load(file_path)
    obj.color(color, alpha=0.99)
    objects.append(obj)

# Show the STL files with both light sources
show(objects, light_behind, p1, light_front, p2)
```

I get a window with a rendering. If you click on that window and then press H, you will get a list of keys that can manipulate things. It's still not giving me what I think we need...but at least it's doing something. I wonder if the order of loading impacts anything. I say that because when I was tinkering with napari it definitely made a difference.

Ok...that's it for tonight. Sorry for being so long winded and all over the place.

ibgregr commented 6 months ago

Good morning. I've been thinking about this all morning, and I have come to the conclusion that rendering the STL files is not going to give us any better indication of the printed results. It will be no different than what the slicer shows us, since we just give each STL a color code and the rendering is based on that code. The preview that the app creates is going to be the closest to reality. If the calibrations are accurate, then the preview should be fairly close to the printed results. I say that because the preview is generated based on the layer color values in the JSON file, so when it's computing the "distance" between colors, that can be off if the calibrations are not accurate. So in my opinion, taking the printed calibrations and sampling those is what's important here. In my case, and I believe you stated the same, my monitor is definitely not calibrated, so what I end up with could be way off. I'm going to put some more effort into my calibration values at this point, as I think that is what will give the best results.

dts350z commented 6 months ago

I think there is something to be learned from a preview, even if, yeah, it comes down to the calibration.

By the way, I'm OK with switching to email, I just don't know how to connect short of posting an email address here (why doesn't GitHub have private messages?).

This is where I am with napari (it cries out for some sort of object-based loop; see the sketch after the listing):


```python
import napari
from napari.utils.colormaps import Colormap
from skimage.io import imread
from stl import mesh
import numpy as np

# Define your colormaps (White [transparent] to Manufacture's reported color) 
#beige_colormap = Colormap(['#FFFFFF', '#E7CEB5'], name='beige')
#cyan_colormap = Colormap(['#FFFFFF', '#0086D6'], name='cyan') 
#grey_colormap = Colormap(['#FFFFFF', '#3E3A36'], name='grey') 
#magenta_colormap = Colormap(['#FFFFFF', '#EC008C'], name='magenta') 
#ice_blue_colormap = Colormap(['#FFFFFF', '#8BD5EE'], name='ice-blue') 
#pink_colormap = Colormap(['#FFFFFF', '#E4BDD0'], name='pink') 
#white_colormap = Colormap(['#FFFFFF', '#FFFFFF'], name='white') 
#yellow_colormap = Colormap(['#FFFFFF', '#FCE300'], name='yellow') 

# Define your colormaps (White [transparent] to single layer color from calibration tiles) 
beige_colormap = Colormap(['#FFFFFF', '#8e959c'], name='beige') #207, 9, 61
cyan_colormap = Colormap(['#FFFFFF', '#136ec2'], name='cyan') # 209, 90, 76
grey_colormap = Colormap(['#FFFFFF', '#a8a39e'], name='grey') # 30, 6, 66
magenta_colormap = Colormap(['#FFFFFF', '#c22772'], name='magenta') # 331, 80, 76
ice_blue_colormap = Colormap(['#FFFFFF', '#8db1c7'], name='ice-blue') # 203, 29, 78
pink_colormap = Colormap(['#FFFFFF', '#beb6cc'], name='pink') # 202, 11, 80
white_colormap = Colormap(['#FFFFFF', '#eba9a9'], name='white') # 0, 28, 92
yellow_colormap = Colormap(['#FFFFFF', '#b5cc83'], name='yellow') # 79, 36, 80

# Create a viewer
v = napari.Viewer()

# Load the STL files and add the vectors to the plot
beige_mesh = mesh.Mesh.from_file('layer-Beige.stl')
cyan_mesh = mesh.Mesh.from_file('layer-Cyan.stl')
grey_mesh = mesh.Mesh.from_file('layer-Grey.stl')
magenta_mesh = mesh.Mesh.from_file('layer-Magenta.stl')
ice_blue_mesh = mesh.Mesh.from_file('layer-Matte-Ice-Blue.stl')
pink_mesh = mesh.Mesh.from_file('layer-Matte-Sakura-Pink.stl')
white_mesh = mesh.Mesh.from_file('layer-White.stl')
yellow_mesh = mesh.Mesh.from_file('layer-Yellow.stl')

# Convert the mesh data to the format expected by napari
vertices = beige_mesh.vectors.reshape(-1, 3)
faces = np.arange(len(vertices)).reshape(-1, 3)
values = np.ones(len(vertices))

# Add the image data as a layer to the viewer
beige_layer = v.add_surface((vertices, faces, values), name='beige_mesh', colormap=beige_colormap, blending='translucent',shading='none')

# Convert the mesh data to the format expected by napari
vertices = cyan_mesh.vectors.reshape(-1, 3)
faces = np.arange(len(vertices)).reshape(-1, 3)
values = np.ones(len(vertices))

# Add the image data as a layer to the viewer
cyan_layer = v.add_surface((vertices, faces, values), name='cyan_mesh', colormap=cyan_colormap, blending='translucent',shading='none')

# Convert the mesh data to the format expected by napari
vertices = grey_mesh.vectors.reshape(-1, 3)
faces = np.arange(len(vertices)).reshape(-1, 3)
values = np.ones(len(vertices))

# Add the image data as a layer to the viewer
grey_layer = v.add_surface((vertices, faces, values), name='grey_mesh', colormap=grey_colormap, blending='translucent',shading='none')

# Convert the mesh data to the format expected by napari
vertices = magenta_mesh.vectors.reshape(-1, 3)
faces = np.arange(len(vertices)).reshape(-1, 3)
values = np.ones(len(vertices))

# Add the image data as a layer to the viewer
magenta_layer = v.add_surface((vertices, faces, values), name='magenta_mesh', colormap=magenta_colormap, blending='translucent',shading='none')

# Convert the mesh data to the format expected by napari
vertices = ice_blue_mesh.vectors.reshape(-1, 3)
faces = np.arange(len(vertices)).reshape(-1, 3)
values = np.ones(len(vertices))

# Add the image data as a layer to the viewer
ice_blue_layer = v.add_surface((vertices, faces, values), name='ice_blue_mesh', colormap=ice_blue_colormap, blending='translucent',shading='none')

# Convert the mesh data to the format expected by napari
vertices = pink_mesh.vectors.reshape(-1, 3)
faces = np.arange(len(vertices)).reshape(-1, 3)
values = np.ones(len(vertices))

# Add the image data as a layer to the viewer
pink_layer = v.add_surface((vertices, faces, values), name='pink_mesh', colormap=pink_colormap, blending='translucent',shading='none')

# Convert the mesh data to the format expected by napari
vertices = white_mesh.vectors.reshape(-1, 3)
faces = np.arange(len(vertices)).reshape(-1, 3)
values = np.ones(len(vertices))

# Add the image data as a layer to the viewer
white_layer = v.add_surface((vertices, faces, values), name='white_mesh', colormap=white_colormap, blending='translucent',shading='none')

# Convert the mesh data to the format expected by napari
vertices = yellow_mesh.vectors.reshape(-1, 3)
faces = np.arange(len(vertices)).reshape(-1, 3)
values = np.ones(len(vertices))

# Add the image data as a layer to the viewer
yellow_layer = v.add_surface((vertices, faces, values), name='yellow_mesh', colormap=yellow_colormap, blending='translucent',shading='none')

# Run the viewer
napari.run()
```
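
Here's roughly what that object-based loop could look like (same behavior and values as above, just table-driven; untested):

```python
import napari
import numpy as np
from napari.utils.colormaps import Colormap
from stl import mesh

# (file, layer name, single-layer calibration color) -- same values as above
LAYERS = [
    ("layer-Beige.stl",             "beige",    "#8e959c"),
    ("layer-Cyan.stl",              "cyan",     "#136ec2"),
    ("layer-Grey.stl",              "grey",     "#a8a39e"),
    ("layer-Magenta.stl",           "magenta",  "#c22772"),
    ("layer-Matte-Ice-Blue.stl",    "ice-blue", "#8db1c7"),
    ("layer-Matte-Sakura-Pink.stl", "pink",     "#beb6cc"),
    ("layer-White.stl",             "white",    "#eba9a9"),
    ("layer-Yellow.stl",            "yellow",   "#b5cc83"),
]

v = napari.Viewer()
for path, name, color in LAYERS:
    m = mesh.Mesh.from_file(path)
    vertices = m.vectors.reshape(-1, 3)
    faces = np.arange(len(vertices)).reshape(-1, 3)
    values = np.ones(len(vertices))
    v.add_surface((vertices, faces, values), name=name,
                  colormap=Colormap(['#FFFFFF', color], name=name),
                  blending='translucent', shading='none')

napari.run()
```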

a few points:

I think the colormaps need to be white-to-color, vs. black-to-color, for transparency.

I think the "color" needs to be the color of the 1 layer calibration tile (and this is probably where everything goes wrong in our color lithophanes. We may need to "lock" the hue values and just use the calibrations for S and L.

given this image:

[image: color calibration plate, 144 mm x 108 mm, colors based on filament hex codes]

The above code gives:

[image: rendered front side]

which is the "front" side and:

[image: rendered back side]

for the back side. I had it in my head that one side would be the "preview" of light shining through the model and the other side would be similar to what we see in the slicer (light reflecting off the model); however, I don't think that's what the code is giving us, based on the results.
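
And here is a rough sketch of the hue-locking idea (pure Python with colorsys; treat it as untested):

```python
import colorsys

def lock_hue(manufacturer_hex, calibrated_hex):
    """Keep the manufacturer's hue; take lightness/saturation from calibration."""
    def hex_to_rgb(h):
        return tuple(int(h[i:i + 2], 16) / 255 for i in (1, 3, 5))
    man_h, _, _ = colorsys.rgb_to_hls(*hex_to_rgb(manufacturer_hex))
    _, cal_l, cal_s = colorsys.rgb_to_hls(*hex_to_rgb(calibrated_hex))
    r, g, b = colorsys.hls_to_rgb(man_h, cal_l, cal_s)
    return '#%02x%02x%02x' % (round(r * 255), round(g * 255), round(b * 255))

# e.g. Bambu cyan hue combined with our 1-layer calibration tile reading
print(lock_hue('#0086D6', '#136ec2'))
```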

ibgregr commented 6 months ago

First...my email is: i b g r e g r at g m a i l dot c o m :)

I just ordered a Nix Mini 3 off of Amazon, and now I'm reprinting my color swatches so that they are larger. I will then hopefully be able to use the Nix to scan the colors. I'm printing my swatches on a 144x108 base so it will fit in a standard Bambu frame and use the Bambu LED. I should have the Nix this evening and will give it a try as soon as it's charged.

I think you are correct on locking the hue...at least if there is any chance for these STL viewers to show us how the layered colors will look. My only concern is whether the printed result follows that same pattern of one hue with different S and L values. If it doesn't, then the preview will still not be right. And that's why I keep coming back to thinking the calibration HAS to be spot on. Once it is, in theory the preview would match what's printed...assuming your monitor is properly calibrated as well. I know mine is not...and that might be something else I look into.

For napari...and vedo as well...I still wonder how the mesh order affects things. It seemed like it did last night when I would rearrange those, but then again I was trying so many things that I could be mistaken. I also agree that we are not seeing the blended result in these viewers. A viewer would have to do pretty much the same thing PIXEstL does to generate the preview...which I think would bring us back to where we started.

gaugo87 commented 6 months ago

> @gaugo87 - Can you please go into some detail about how the layers in the JSON file work? My understanding was that they corresponded to the color layer being printed, but that does not seem to be the case. For example, I set a Gray filament to only have layers 1 and 2 defined in the JSON file.

That's exactly how @dts350z explained it.

> My interpretation is that it defines two black colors: one that is 4 layers thick and one that is 5 layers thick.

In reality, there's not much difference in the rendering of the two blacks. It's more of a trick to tell the program at which brightness it should switch to black (rather than approximating it through a combination of CMY colors).

In the CIELab model, the hue "H" and saturation "S" don't really matter much when the brightness "L" is low. So, in the color distance calculation (to estimate which color is closest among those that can be recreated), it's mostly the L parameter that matters.

In other words, this trick is to tell the program to use black for all colors with L values close to 10 or 20.
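
For illustration, here's a quick CIE76 sketch (not the actual PIXEstL code, just a back-of-the-envelope check) showing why dark colors collapse together regardless of hue:

```python
import math

def srgb_to_lab(hexcode):
    """Minimal sRGB (D65) -> CIELab conversion, for illustration only."""
    r, g, b = (int(hexcode[i:i + 2], 16) / 255 for i in (1, 3, 5))
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in (r, g, b)]
    x = 0.4124 * lin[0] + 0.3576 * lin[1] + 0.1805 * lin[2]
    y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
    z = 0.0193 * lin[0] + 0.1192 * lin[1] + 0.9505 * lin[2]
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(c1, c2):
    """Plain CIE76 color distance."""
    return math.dist(srgb_to_lab(c1), srgb_to_lab(c2))

# Dark red vs. dark blue: opposite hues, yet fairly close in Lab (~40)...
print(delta_e('#330000', '#000033'))
# ...while the fully saturated versions are far apart (~175).
print(delta_e('#FF0000', '#0000FF'))
```

(PIXEstL has its own ColorUtil for this; the numbers above are just to show the effect.)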

@ibgregr and @dts350z , if you have any ideas/algorithms to improve color calculation, please feel free to share them. Unfortunately, I don't have the time right now to work on this project. I should have more time in the coming weeks.

gaugo87 commented 6 months ago

Just for your information, the color estimation is done by the `getColor` method in the class `ggo.pixestl.palette.ColorCombi`.

```java
public Color getColor(GenInstruction genInstruction)
{
  double c=0,m=0,y=0,k=0;
  for (ColorLayer lithoColorLayer : layers)
  {
      if (genInstruction.isDebug()) {
          System.out.print(lithoColorLayer.getHexCode() + "[" + lithoColorLayer.getLayer() + "]");
      }
      if (lithoColorLayer.getC()+lithoColorLayer.getM()+lithoColorLayer.getY() == 0
          && c+m+y==0) continue;
      c+=lithoColorLayer.getC();    
      m+=lithoColorLayer.getM();
      y+=lithoColorLayer.getY();
      k+=lithoColorLayer.getK();
  }
  Color color = ColorUtil.cmykToColor(c<1?c:1, m<1?m:1, y<1?y:1, k<1?k:1);
  if (genInstruction.isDebug()) {
      System.out.println("=" + ColorUtil.colorToHexCode(color));
  }
  return color;
}
```

As you can see, the calculation method is simple and effective. It mainly involves adding the CMYK components.
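
In Python terms, the same arithmetic looks roughly like this (hypothetical layer tuples, not the real data model):

```python
def stack_color(layers):
    """Mimic getColor: sum the CMYK components per layer, clamping each to 1."""
    c = m = y = k = 0.0
    for lc, lm, ly, lk in layers:
        # skip leading layers that carry no C/M/Y (same guard as the Java code)
        if lc + lm + ly == 0 and c + m + y == 0:
            continue
        c, m, y, k = c + lc, m + lm, y + ly, k + lk
    c, m, y, k = min(c, 1.0), min(m, 1.0), min(y, 1.0), min(k, 1.0)
    # naive CMYK -> RGB just for display
    return tuple(round(255 * (1 - v) * (1 - k)) for v in (c, m, y))

# one cyan-ish layer + one yellow-ish layer -> greenish, e.g. (102, 255, 102)
print(stack_color([(0.6, 0.0, 0.0, 0.0), (0.0, 0.0, 0.6, 0.0)]))
```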

Even though it's effective, it can easily be improved (by taking the TD component into account, for example).

If you have any ideas for improvement (for example, how to incorporate the TD component), I'm all ears.