Closed MouriNaruto closed 3 years ago
Unbelievable!
But what is the trick? 🙂 It seems you just copy the image rendered by LVGL into your buffer:
memcpy(g_pixel_buffer, color_p, g_pixel_buffer_size);
It's what most of the drivers do.
First, in this implementation I create the Windows GDI bitmap object only once and keep the memory address for writing the pixels, so the bitmap object can be reused across frames. That removes most of the per-frame object-creation and context-switching overhead.
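A minimal sketch of the reuse idea, with a plain `malloc`'d buffer standing in for the pixel memory that the Win32 `CreateDIBSection` call hands back (the names `display_init`/`display_flush` are hypothetical; the real driver talks to GDI):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical frame-buffer state, created once at startup.
 * In the real Win32 driver, CreateDIBSection returns a bitmap whose
 * pixel memory can be written directly; malloc stands in here. */
static uint32_t *g_pixel_buffer = NULL;
static size_t    g_pixel_buffer_size = 0;

static void display_init(int width, int height)
{
    g_pixel_buffer_size = (size_t)width * height * sizeof(uint32_t);
    g_pixel_buffer = malloc(g_pixel_buffer_size);   /* created once, reused */
}

/* Called for every rendered frame: no allocation, no object creation,
 * just one copy into the long-lived buffer. */
static void display_flush(const uint32_t *color_p)
{
    memcpy(g_pixel_buffer, color_p, g_pixel_buffer_size);
}
```

The point is that the expensive step (creating the bitmap object) happens once; each frame only pays for the copy.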
Second, I use rounder_cb to make copying LVGL's pixel buffer to the bitmap object's pixel address easier.
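The rounding itself is simple: expand every invalidated area to full horizontal lines, so the region to copy is contiguous in both LVGL's buffer and the bitmap memory. A self-contained sketch, using a stand-in for LVGL's `lv_area_t` and a hypothetical horizontal resolution (the real callback also receives the `lv_disp_drv_t *` as its first argument):

```c
#include <stdint.h>

/* Stand-in for LVGL's lv_area_t (coordinates are inclusive). */
typedef struct { int32_t x1, y1, x2, y2; } area_t;

#define HOR_RES 480  /* hypothetical horizontal resolution */

/* Expand the dirty area to full horizontal lines, so each line of the
 * area maps to one contiguous run of pixels in the bitmap memory. */
static void rounder_cb(area_t *area)
{
    area->x1 = 0;
    area->x2 = HOR_RES - 1;
}
```

After rounding, the flush callback can copy whole rows with a single memcpy per line instead of computing per-pixel offsets.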
In general, I have tried my best to reduce memory copy operations. But I can't use my zero-memory-copy implementation in the universal driver, because I can't restrict users to the 32bpp color depth. (In the repository for the Windows desktop application, I can do that.)
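To see why the color depth matters: when both sides are 32bpp, flushing is a single memcpy (or, as in the Windows-desktop repository, no copy at all, since LVGL can render straight into the bitmap's pixel memory). But if the user configures LVGL for 16bpp, a per-pixel RGB565-to-XRGB8888 expansion is unavoidable. A hedged sketch of that conversion (function name and layout are illustrative, not the driver's actual code):

```c
#include <stdint.h>
#include <stddef.h>

/* Expand n RGB565 pixels into XRGB8888, the format a 32bpp GDI
 * bitmap expects.  Each 5/6-bit channel is shifted up to 8 bits. */
static void flush_rgb565_to_xrgb8888(uint32_t *dst, const uint16_t *src,
                                     size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        uint16_t c = src[i];
        uint32_t r = (uint32_t)((c >> 11) & 0x1F) << 3;
        uint32_t g = (uint32_t)((c >> 5)  & 0x3F) << 2;
        uint32_t b = (uint32_t)( c        & 0x1F) << 3;
        dst[i] = (r << 16) | (g << 8) | b;
    }
}
```

This per-pixel loop is exactly the cost the 32bpp-only path avoids, which is why the universal driver can't promise zero copies.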
Thanks! I'm surprised that memory copy is so expensive even on PC.
Differences from the old native Windows driver (win_drv)

Benchmark
- The SDL driver
- The new native Windows driver