Open kdschlosser opened 1 year ago
Hi @kdschlosser, I think most of the problems you are pointing at are no longer present in the latest version of the simulator.
lv.color_t methods not returning the types they are supposed to be.
Cannot reproduce on latest version.
Your (fixed) script prints:
<class 'lv_color32_t'> <--- correct, lv_color_t maps to lv_color32_t in the simulator
<class 'lv_color32_t'> <--- wrong, should be lv.color32_t
<class 'lv_color16_t'> <--- wrong, should be lv.color16_t
<class 'lv_color8_t'> <--- wrong, should be lv.color8_t
<class 'lv_color_hsv_t'> <--- correct, returns lv_color_hsv_t
There are also missing functions
I think these are not missing in the latest version. Note that the "from_buf" functions are not member functions, since their first argument is not the color class.
There is also a naming convention oddity going on.
You are right about that.
color_t is aliased by color32, and that confuses the script because the C function names start with "color", not "color32". In that case the script does not strip the prefix from the member names, which results in odd names such as color_to_hsv, etc.
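The prefix-stripping behavior can be illustrated with a small sketch (plain Python, not the actual generator code; bound_method_name is a hypothetical stand-in for what the script does):

```python
def bound_method_name(c_name, type_prefix):
    # The generator strips the owning type's prefix from the C function
    # name to form the method name; when the prefix does not match
    # (color32 vs. color), only the "lv_" prefix gets removed, leaving
    # the odd names described above.
    if c_name.startswith(type_prefix + "_"):
        return c_name[len(type_prefix) + 1:]
    return c_name[len("lv_"):]

print(bound_method_name("lv_obj_set_style_bg_color", "lv_obj"))  # set_style_bg_color
print(bound_method_name("lv_color_to32", "lv_color32"))          # color_to32
```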
We can probably fix that, I'm not sure when I'll have time to dig into this, feel free to open a PR if you want.
Note that color1_t, color8_t, and color16_t are all missing.

color8_t and color16_t are present in the latest version.
lv_color1_t is indeed missing, but this is expected, as it is not used anywhere (as a function argument or return value).
The reason why I bring this to your attention is that this issue occurs when using the binding on an ESP32. The result of color_to32() is an lv_color32_t. Since this is a struct, comparing two of them compares their pointer values, not their contents. This, however, will return True: c1.to_int() == c2.to_int(). But to_int() is not returning the correct value: the integer being output is 0xFF7B63FB, which is not the same as the input color of 0x7B63FB. I am not able to supply 0xFF7B63FB to the color_hex function, so I am not sure why the FF is being added.
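The struct-vs-value comparison can be mimicked in plain Python (no lvgl involved; Color32 here is just a stand-in for the bound struct type):

```python
class Color32:
    """Stand-in for the bound lv_color32_t struct."""

    def __init__(self, red, green, blue, alpha=0xFF):
        self.red, self.green, self.blue, self.alpha = red, green, blue, alpha

    def to_int(self):
        # Pack the channels into a single ARGB8888 integer
        return (self.alpha << 24) | (self.red << 16) | (self.green << 8) | self.blue


c1 = Color32(0x7B, 0x63, 0xFB)
c2 = Color32(0x7B, 0x63, 0xFB)
print(c1 == c2)                    # False: default equality compares object identity
print(c1.to_int() == c2.to_int())  # True: the packed values match
```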
Using a freshly compiled master branch on my ESP32 and running the same code, it doesn't match either: the value returned by to_int is 0x7B1F, which is not the same as 0x7B63FB.
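For what it's worth, 0x7B1F is exactly what you get if 0x7B63FB is packed as RGB565, so a plausible explanation (an assumption, not confirmed against your build) is that the ESP32 firmware was built with LV_COLOR_DEPTH 16:

```python
def rgb888_to_rgb565(c):
    # Keep the top 5/6/5 bits of each 8-bit channel
    r = (c >> 16) & 0xFF
    g = (c >> 8) & 0xFF
    b = c & 0xFF
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)


print(hex(rgb888_to_rgb565(0x7B63FB)))  # 0x7b1f
```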
the integer being output is 0xFF7B63FB which is not the same as the input color of 0x7B63FB. I am not able to supply 0xFF7B63FB to the color_hex function so I am not sure why the FF is being added.
After calling lv_color_hex(0x7b63fb) in C, the returned lv_color_t struct is:
{blue = 0xfb, green = 0x63, red = 0x7b, alpha = 0xff}
Looking at lv_color_hex in lv_color.h:
#elif LV_COLOR_DEPTH == 32
lv_color_t r;
lv_color_set_int(&r, c | 0xFF000000);
return r;
So it looks like it's by design to set alpha to 0xFF.
Perhaps @kisvegabor could explain the motivation.
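The C snippet above just ORs a fixed alpha byte onto the 24-bit value, which is easy to verify in isolation:

```python
def color_hex32(c):
    # Mirror of the LV_COLOR_DEPTH == 32 branch of lv_color_hex:
    # force the alpha byte to 0xFF, leaving the RGB bytes untouched.
    return c | 0xFF000000


print(hex(color_hex32(0x7B63FB)))  # 0xff7b63fb
```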
I could understand it being added if I were able to set the alpha using color_hex, but I am not able to. I would have to set the alpha directly through the structure.
So looks like it's by design to set alpha to 0xFF. Perhaps @kisvegabor could explain the motivation.
As 32-bit colors can be treated as ARGB8888 too, we need to set the alpha to 0xFF; otherwise the color could be rendered as transparent.
But there is no way to set the alpha channel using lv_color_hex, which is the best way to set a color without needing to know the color depth the display has been set to.
There are not all that many displays that support an alpha channel, so what exactly is the alpha channel used for in lv_color32_t? It's not used by the widgets, because they have set_style_bg_opa to set the opacity. (This next question is rhetorical and is only intended to get you thinking.) If a display doesn't support an alpha channel, then how am I able to set the opacity of a widget background? The answer is that it is done using layers, in software.
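That software path boils down to per-channel alpha blending; a minimal sketch of the idea (not LVGL's actual code):

```python
def blend_channel(fg, bg, alpha):
    # Weighted average of foreground over background; alpha in 0..255,
    # where 255 means fully opaque foreground.
    return (fg * alpha + bg * (255 - alpha)) // 255


print(blend_channel(0xFF, 0x00, 128))  # 128: half-opaque white over black
```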
If you think about it, how much additional memory would actually be consumed by using lv_color32_t as the public color entry point? I would imagine that when I set the color of a widget, it is not a buffer of that widget that gets stored but the single color; if something changes on the widget, the color gets written to the frame buffer, and that is when the conversion from the 32-bit color would take place. This also removes the need to set the color depth at compile time; it could be set at runtime instead. All of the "*_opa" functions could be removed because they would no longer be necessary, and lv_color_hex could be removed as well. All colors would be stored as ARGB8888, so to make a color to pass to, say, lv_obj_set_style_bg_color, all that would have to be done is pass a pointer to the created color.
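A back-of-the-envelope calculation in support of that point (the resolution and color count below are made-up assumptions): storing style colors as ARGB8888 costs 2 extra bytes per stored color versus RGB565, which is tiny next to the frame buffer itself.

```python
width, height = 320, 240                 # assumed display resolution
framebuffer_bytes = width * height * 2   # RGB565 frame buffer
colors_stored = 100                      # assumed number of style colors kept around
extra_bytes = colors_stored * (4 - 2)    # ARGB8888 vs RGB565 per stored color

print(framebuffer_bytes, extra_bytes)    # 153600 200
```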
I am not a C guy, so excuse this probably being incorrect; you will get the general idea though.

lv_color32_t color = { .alpha = 0xFF, .red = 0x55, .green = 0x99, .blue = 0x33 };
lv_obj_set_style_bg_color(obj, &color, 0); /* hypothetical pointer-taking API */
/* now if I want to change only the opacity of the background */
color.alpha = 50;
lv_obj_invalidate(obj);
Internally, before the buffer gets written, a color conversion would take place to whatever format has been set for the specific display, and the converted color is what gets written to the buffer. It also fixes problems with color comparisons, because lv_obj_get_style_bg_color would return the same pointer that was passed in, or the pointer to the default color. It's a shame there is no overloading in C99, as it would be nice to be able to pass (r, g, b), (r, g, b, a), RGB, ARGB, or lv_color_t to the color functions.
The exact same topic (use ARGB8888 in API) came up in https://github.com/lvgl/lvgl/issues/4059#issuecomment-1521284812 too. Please read the last few comments where I described why it's not as good idea as it seemed at first look.
https://sim.lvgl.io/v9.0/micropython/ports/javascript/index.html?script_startup=https://raw.githubusercontent.com/lvgl/lvgl/e26a46c43c23e91198318659c8214cc34be5cee2/examples/header.py&script=https://raw.githubusercontent.com/lvgl/lvgl/e26a46c43c23e91198318659c8214cc34be5cee2/examples/get_started/lv_example_get_started_3.py&script_direct=a20481aa27d8b699a38d15fccb3dea978c3d5131
lv.color_t methods not returning the types they are supposed to be.
There are also missing functions
This function should have the first two parameters flip-flopped.
A function that should be added is
There is also a naming convention oddity going on.
Take the function lv_obj_set_style_bg_color as an example. When exposed to MicroPython, this function gets added to lv_obj_t as a method, and the method name becomes set_style_bg_color. The complete qualified name is obj.set_style_bg_color.

Take the function lv_color_to32 as an example. This function gets put into the lv_color_t structure. The strange thing is what the function's name ends up being: color_to32. This does not follow the API seen everywhere else. There are a bunch of functions that get mapped to lv_color_t that are like this. Here are the names of the functions that are in lv_color_t
Here are the names of functions and structures as they are in the MicroPython binding. Note that color1_t, color8_t, and color16_t are all missing.

You had stated that the structures that do not get added to the binding are the ones that are not used in functions, but here are the functions... These are also missing as well.
The reason why I bring this to your attention is that this issue occurs when using the binding on an ESP32. The result is False. Originally I had thought it was because I goofed, because the code stated the return type should be lv_color32_t. But in this case something is not working right, because the return type from that method is an int. So the question now is: why does the equality check fail? It shouldn't. I don't know if this is an issue in the binding or in LVGL itself; I suspect it might be the binding, because the return type is wrong.