ty-porter / RGBMatrixEmulator

A desktop interface to emulate Raspberry Pi-powered LED matrices driven by rpi-rgb-led-matrix

LED pixels could potentially look fancier #45

Open jfietkau opened 2 years ago

jfietkau commented 2 years ago

Hi. Remember the guy who replied to your Reddit comment from last December? I mentioned I'd be back with a contribution eventually.

I have a fork sitting on my profile that adds a 'fancy' pixel style to the pygame adapter. It's a rough-and-tumble prototype that replicates the LED rendering of my MiniHat library, with somewhat mixed success. You can try some of the samples with the new mode. The fork's default options should already select it, but if not, make sure you're using the fancy pixel style and the pygame adapter. image-scroller.py showcases the effect quite nicely, while rotating-block-generator.py highlights the weaknesses.

My JavaScript library relies on the "lighten" canvas blend mode to do a lot of the heavy lifting w.r.t. the glowy effect. I thought I might be able to replicate its look with pygame's BLEND_RGBA_MAX special flag on the blit operation, but it behaves differently than I expected, and I got the most palatable results with no flags at all. If I were to port this to the browser adapter, I'd have to either add the blend_modes package as a dependency or roll my own blend mode at what would probably be a much higher performance cost. I haven't experimented with either possibility yet.
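
For illustration, the two blit variants I compared boil down to something like this (a standalone sketch, not the fork's actual code; surface size and colors are made up):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((128, 128))

# A soft "LED" blob on a transparent surface.
glow = pygame.Surface((48, 48), pygame.SRCALPHA)
pygame.draw.circle(glow, (255, 64, 64, 180), (24, 24), 24)

# Default blit: straight alpha compositing. This gave the most palatable result.
screen.blit(glow, (10, 10))

# Per-channel max, the closest pygame analog to the canvas "lighten" mode,
# but it handles the alpha channel differently than I expected.
screen.blit(glow, (50, 50), special_flags=pygame.BLEND_RGBA_MAX)

pygame.display.flip()
```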

I'm concerned about the hardware requirements. It stands to reason that this would be slower than plain rectangles or circles, but the slowdown really is rather noticeable, especially the first time a color is used. I cache the generated per-color pixel images, which improves performance at the cost of RAM. I haven't measured it, but generating a pygame surface nine times the size of a plain pixel for every single color will probably balloon memory usage pretty quickly. If you want to see this in action, set the pixel size to something that strains your hardware (going from 16 to 24 was enough on my setup) and note where the performance hitches during image-scroller.py.
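
The caching is roughly this shape (simplified, with made-up names; the actual glow drawing in the fork is more involved):

```python
import pygame

PIXEL_SIZE = 16
_glow_cache = {}  # (r, g, b) -> pygame.Surface

def fancy_pixel(color):
    surface = _glow_cache.get(color)
    if surface is None:
        # 3x the pixel size per side, i.e. ~9x the pixels of a plain square.
        size = PIXEL_SIZE * 3
        surface = pygame.Surface((size, size), pygame.SRCALPHA)
        center = size // 2
        # Cheap faux-glow: concentric circles with rising alpha toward the core.
        for i, alpha in enumerate((40, 90, 255)):
            radius = center - i * (PIXEL_SIZE // 2)
            pygame.draw.circle(surface, (*color, alpha), (center, center), radius)
        _glow_cache[color] = surface  # the first use of each color pays the cost
    return surface
```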

In summary, it looks alright, but I hit more stumbling blocks than anticipated, which is why I'm using this current state to check in. Do you think your users would find value in this despite the hardware requirements, and if so, do you want me to continue expanding it? I reckon porting it to the browser adapter is doable, but the math here is heavy enough that I'd most likely want to abstract it into a class of its own first. But before that, I'd love to hear whether the maintainer(s) think the result would be welcome.

Example: [screenshot of the fancy pixel style]

ty-porter commented 2 years ago

Sorry it took a while to get back to you.

First of all, the effect looks great. Thank you for contributing! The level of glow might need to be configurable long-term, but I think it would be a welcome addition.

That being said, I did get a chance to play with it, and lots of colors make it drag while the cache is cold. Pictures in general are a problem, and this could potentially cause issues with some of the scoreboards that make use of this library, most recently NHL:

https://github.com/riffnshred/nhl-led-scoreboard/pull/399


All considered, I really like it and feel like it's worth pursuing if we can get it more performant. One thing off the top of my head:

- Reducing the color depth, so the cache has far fewer per-color surfaces to generate

jfietkau commented 2 years ago

No worries, you're approximately ten months faster at replying than I was. :)

Reducing the color depth sounds like a valid strategy to reduce the stuttering in the pygame adapter. If I remember right, the Sense HAT does 5 bits per color channel; evidently they thought that was the sweet spot. Maybe they're onto something.
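
If we went that route, I'd probably quantize the cache key along these lines (just a sketch of the idea, not code from the fork). Masking each channel down to its top 5 bits caps the cache at 32^3 = 32,768 distinct colors:

```python
def quantize_5bit(color):
    """Drop the low 3 bits of each channel, keeping the top 5."""
    r, g, b = color
    return (r & 0xF8, g & 0xF8, b & 0xF8)

# Nearby colors collapse onto one cached surface:
assert quantize_5bit((200, 123, 7)) == (200, 120, 0)
```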

The browser adapter is the more interesting one, though. It's currently implemented so that the LED display is rendered in Python using PIL and then streamed to the browser as image files, which are displayed directly. Was there any specific reason this approach was chosen over rendering with the browser's canvas API, or was it just the first thing that came to mind? In theory the Python backend could share rendering code with the other adapters, but that doesn't seem to be happening right now.
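
For the record, my (possibly oversimplified) mental model of that flow looks something like this, with made-up names:

```python
import io
from PIL import Image

def render_frame(pixels, pixel_size):
    """pixels: rows of (r, g, b) tuples; returns encoded image bytes."""
    rows, cols = len(pixels), len(pixels[0])
    img = Image.new("RGB", (cols * pixel_size, rows * pixel_size))
    for y, row in enumerate(pixels):
        for x, color in enumerate(row):
            # Paint one LED as a flat square block.
            img.paste(color, (x * pixel_size, y * pixel_size,
                              (x + 1) * pixel_size, (y + 1) * pixel_size))
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return buf.getvalue()  # bytes to push to the browser, which just displays them
```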

If I could push the raw pixel data through the websocket and do the rendering in the browser (where a very capable and performant 2D rendering API is available), I could almost certainly use an unmodified copy of my JS library, which would likely make future updates easier. Moving the rendering of the square and circle pixel styles to the canvas API as well would be, I dare say, pretty easy, and we'd get some freebies along the way, such as compatibility with HiDPI displays. Does that sound like a good idea to you, or are you rather attached to the Python-based image rendering?

ty-porter commented 2 years ago

No, definitely not tied to the current implementation! It's admittedly very quick-and-dirty so feel free to replace it if there's a better solution!

jfietkau commented 2 years ago

I have now added the fancy pixel style to the browser adapter and moved the rendering to the client side in the process. Same fork, same dev branch. I neglected to do a "before" performance test, but with the new rendering code and image-scroller.py at a pixel_size of 16, I get about 190 fps for square, 90 fps for circle and 37 fps for fancy on a current Firefox using a crummy Intel GPU on a HiDPI display (meaning the number of canvas pixels is effectively quadrupled).

I pass the pixel color data through the websocket as JSON and include the options each time for ease of access on the other side. In theory this could allow option changes while the server is running, although in practice the options are currently ignored after the first frame. Let me know if the added options transfer seems redundant or confusing; we can go the template-variable route like you did for the target FPS if that's important.
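
The message is shaped roughly like this (a simplified sketch with approximate field names, not the exact code):

```python
import json

def frame_message(pixels, options):
    """pixels: rows of (r, g, b) tuples; options: the emulator's display options."""
    return json.dumps({
        "pixels": [[list(px) for px in row] for row in pixels],
        # Sent with every frame for ease of access client-side, even though
        # the client currently only reads the options on the first frame.
        "options": options,
    })
```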