Closed androiddrew closed 2 months ago
This appears to happen in my ESP32 build as well:
```
Build:Mar 27 2021
rst:0x1 (POWERON),boot:0x18 (SPI_FAST_FLASH_BOOT)
SPIWP:0xee
Octal Flash Mode Enabled
For OPI Flash, Use Default Flash Boot Mode
mode:SLOW_RD, clock div:1
load:0x3fce3810,len:0xf3c
load:0x403c9700,len:0x4
load:0x403c9704,len:0xbb4
load:0x403cc700,len:0x2c28
entry 0x403c98a0
LVGL MicroPython 1.23.0 on 2024-09-01; Generic ESP32S3 module with Octal-SPIRAM with ESP32S3
Type "help()" for more information.
>>> from micropython import const
>>> _WIDTH = const(480)
>>> _WIDTH
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name '_WIDTH' isn't defined
```
Ah, it appears that if a const has a leading `_` it is hidden: https://docs.micropython.org/en/latest/library/micropython.html#module-micropython
That is correct. There is no point to using `const` if you don't have an `_` in front of the variable name; it ends up getting treated like a normal variable. The purpose of `const` is to reduce memory use by replacing the variable name in the source code with the value of the const. What triggers this to happen is having an `_` at the beginning of the variable name. Variables that have the `_` and use `const` are only made available to the source file in which they are written. It doesn't matter where they are placed in the file either, though to make it clear what is happening it is always best to place them at the module level near the top.
This is wrong and will not work:

```python
from micropython import const


class SomeClass(object):
    _SOME_CONST = const(10)

    def some_method(self):
        some_local_variable = self._SOME_CONST
```
So is this:

```python
from micropython import const


class SomeClass(object):
    _SOME_CONST = const(10)

    def some_method(self):
        some_local_variable = SomeClass._SOME_CONST
```
This, however, will work:

```python
from micropython import const


class SomeClass(object):
    _SOME_CONST = const(10)

    def some_method(self):
        some_local_variable = _SOME_CONST
```
This will also work, and is the reason why I said to just place them at the module level near the top:

```python
from micropython import const


class SomeClass(object):
    _SOME_CONST = const(10)


class AnotherClass(object):

    def some_method(self):
        some_local_variable = _SOME_CONST
```
You would not think the last one should work, but what you have to remember is that the substitution is done when the source file is read, not when it is run. When MicroPython reads the file it removes the variable name from the code and substitutes its value, so the name is never seen by the Python interpreter. Remember, the whole point of this is to save memory.
The best way to declare a constant, so that it is easily understood what is happening, is the following:

```python
from micropython import const

_SOME_CONST = const(10)


class SomeClass(object):

    def some_method(self):
        some_local_variable = _SOME_CONST


class AnotherClass(object):

    def some_method(self):
        some_local_variable = _SOME_CONST
```
Ah
Yeah, so @kdschlosser ...I don't know why I am still hitting a `Segmentation fault (core dumped)`, though, when I build the unix port and try to run your example. It appears to be happening around the `label = lv.label(scrn)` line. I've never used gdb before; is there a simple way to get the debug symbols built for the binary?
Ok, I got a debug build by putting a breakpoint at https://github.com/lvgl-micropython/lvgl_micropython/blob/main/builder/unix.py#L248 and then running `cmd_.insert(0, "DEBUG=1")` before continuing. @kdschlosser, it looks like I am hitting some Nvidia issues on my Ubuntu box. You aren't by chance using an Nvidia GPU for SDL, are you?
```
(gdb) run ./project0/test_unix_sdl.py
Starting program: /home/toor/experiments/lvgl-upy-examples/builds/lvgl_micropy_unix_3 ./project0/test_unix_sdl.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Breakpoint 1, main (argc=2, argv=0x7fffffffd818) at main.c:467
467     int main(int argc, char **argv) {
(gdb) c
Continuing.
[New Thread 0x7ffff0400640 (LWP 313009)]

Thread 1 "lvgl_micropy_un" received signal SIG34, Real-time event 34.
__GI___ioctl (fd=9, request=3222292009) at ../sysdeps/unix/sysv/linux/ioctl.c:36
36      ../sysdeps/unix/sysv/linux/ioctl.c: No such file or directory.
(gdb) backtrace
#0  __GI___ioctl (fd=9, request=3222292009) at ../sysdeps/unix/sysv/linux/ioctl.c:36
#1  0x00007ffff4bc8080 in ?? () from /lib/x86_64-linux-gnu/libnvidia-glcore.so.535.183.06
#2  0x00007ffff4bccb88 in ?? () from /lib/x86_64-linux-gnu/libnvidia-glcore.so.535.183.06
#3  0x00007ffff47223c6 in ?? () from /lib/x86_64-linux-gnu/libnvidia-glcore.so.535.183.06
#4  0x00007ffff4c0932d in ?? () from /lib/x86_64-linux-gnu/libnvidia-glcore.so.535.183.06
#5  0x00007ffff4723e39 in ?? () from /lib/x86_64-linux-gnu/libnvidia-glcore.so.535.183.06
#6  0x00007ffff66c0cff in ?? () from /lib/x86_64-linux-gnu/libGLX_nvidia.so.0
#7  0x00007ffff6672850 in ?? () from /lib/x86_64-linux-gnu/libGLX_nvidia.so.0
#8  0x00007ffff6672f3f in ?? () from /lib/x86_64-linux-gnu/libGLX_nvidia.so.0
#9  0x00007ffff7ffd040 in ?? () from /lib64/ld-linux-x86-64.so.2
#10 0x0000000000000000 in ?? ()
```
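One thing worth noting about that backtrace: `SIG34` is a Linux real-time signal, and the Nvidia userspace driver is known to use real-time signals internally, so the stop shown above may not be the actual crash at all; gdb simply halts on unhandled signals by default. As a guess worth trying (not a confirmed diagnosis), telling gdb to pass the signal through silently should let execution continue to the real segfault:

```
(gdb) handle SIG34 nostop noprint pass
(gdb) run ./project0/test_unix_sdl.py
```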
I am running an Nvidia GPU and I have run the unix port without issue. I am running Ubuntu in a virtual machine, though, so I am not dealing with Linux Nvidia drivers.
If I could see the code in the `./project0/test_unix_sdl.py` file, that would be a big help.
Depending on the size of the "display" you are creating, you may have to increase the heap allocation as well. This can be set in one of two ways: at compile time using the `--heap-size={size in bytes}` parameter, or optionally when you run the binary, using `micropython -X heapsize={size in bytes}`.
I don't recall offhand what I set the default to. Here is basically how to work it out: if you are creating a display that is 1920 x 1080, you are going to need (1920 x 1080 x 3 x 2) + 2048 bytes of heap. That's 12,441,600 for the frame buffers and 2048 for running MicroPython and LVGL. I have personally never tried to run LVGL with a 1920 x 1080 display, and due to the lack of hardware acceleration I am sure its performance would be less than optimal. Even at 800 x 600 resolution you are looking at 2,880,000 bytes for the 2 frame buffers. I want to say that I set the default to 4,194,304 bytes (4 MB).
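The arithmetic above can be wrapped in a tiny helper for quick sizing. The figures (3 bytes per pixel, 2 frame buffers, 2048-byte overhead) are taken from the comment above; the function name and defaults are made up here for illustration:

```python
def lvgl_heap_bytes(width, height, bytes_per_pixel=3, buffer_count=2, overhead=2048):
    """Rough heap needed for the LVGL frame buffers plus a small
    MicroPython/LVGL runtime overhead, per the estimate above."""
    return width * height * bytes_per_pixel * buffer_count + overhead

print(lvgl_heap_bytes(1920, 1080))  # 12443648 (12,441,600 + 2048)
print(lvgl_heap_bytes(800, 600))    # 2882048  (2,880,000 + 2048)
```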
One of the things with the Unix port is that it will only run on the machine it was compiled on. There are simply too many variables for you to be able to move the binary between machines.
This is a strange one. I built the unix port using
When I tried to run the example in the build examples, I got a `Segmentation fault (core dumped)`. When I started the REPL and began running the commands one by one, I noticed that it doesn't appear to allow `_` as the first character in a variable name.