PlummersSoftwareLLC / NightDriverStrip

NightDriver client for ESP32
https://plummerssoftwarellc.github.io/NightDriverStrip/
GNU General Public License v3.0
1.32k stars · 213 forks

Web UI: Websocket implementation of ColorDataServer #356

Open davepl opened 1 year ago

davepl commented 1 year ago

We currently have a tested and working socket server in ledviewer.h that can serve up the contents of the matrix to a client so they can render a preview and so on.

The problem is that to connect to it from a web page via js, it needs to be a websocket. So the feature here is to expose the colordata on a websocket, and ideally, produce a small demo page that works with it.
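A sketch of what the demo page's client side could look like, assuming a hypothetical frame format of `{ width, height, colors }` with packed `0xRRGGBB` ints (the endpoint URL and frame shape are illustrative, not an implemented API):

```javascript
// Hypothetical demo-page client for a WebSocket ColorDataServer.
// Endpoint URL and frame shape are assumptions for illustration only.

// Parse one frame message of the assumed shape:
//   { "width": w, "height": h, "colors": [0xRRGGBB, ...] }
function parseFrame(text) {
  const frame = JSON.parse(text);
  if (!Array.isArray(frame.colors) ||
      frame.colors.length !== frame.width * frame.height) {
    throw new Error("malformed frame");
  }
  return frame;
}

// Browser-side wiring (sketch): render every frame, reconnect on close.
function connectPreview(url, render) {
  const ws = new WebSocket(url);
  ws.onmessage = (ev) => render(parseFrame(ev.data));
  ws.onclose = () => setTimeout(() => connectPreview(url, render), 1000);
  return ws;
}
```

The `render` callback would typically draw onto a canvas scaled to the matrix dimensions.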

robertlipe commented 1 year ago

Funny. I ran into this last weekend when I tried to prototype an answer to a draft I'd composed last week, before I chickened out from asking:

Oh. Is there a tool that opens a socket on 42153(?), slurps up the pixel buffer, and blasts them into a page in a browser, suitable for screenshots (e.g.)? If so, does it handle animations and keep them packet-aligned during a read? For review, it would be nice to SEE a proposed animation/effect before fondling the actual bits.

I wanted to be able to create screen captures for review and doc. It'd be nice to have itty-bitty screen caps presented in the web interface that allows you to turn them off and on, too.

I crashed into the message I think you're implicitly describing. I thought web sockets were sockets implemented FOR use in web browsers, not a completely different type of sockets. That was when I realized I was out of my habitat and moved along.

rbergen commented 1 year ago

For whoever chooses to pick this up, there is a pretty comprehensive Random Nerd Tutorial on implementing a WebSocket server here: https://randomnerdtutorials.com/esp32-websocket-server-arduino/. It uses the WebSocket capability of ESPAsyncWebServer, which is the webserver this project already uses to serve the on-device website. As the tutorial shows, ESPAsyncWebServer takes care of (almost) all of the "Web" in WebSocket, and brings the implementation work down to handling the data.

In case this does not end up working (for instance, because performance is too poor) and a raw implementation is deemed necessary, then the "authoritative guide" can be found on MDN: https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers.

KeiranHines commented 11 months ago

@rbergen I have a proof-of-concept implementation of a websocket here. If you pull that branch and upload it, the Web UI now has a console log that mirrors the /effects endpoint, pushed whenever the effect is changed via the IR remote. I have a few questions I am sure you can help with.

For ws design:

  1. Was there a preference for websocket features? I would like to see at least effect update pushes; I am not sure what else people might want.
  2. Was there a preference for payload standardization? I would prefer JSON with {topic: <str>, payload: <Object>}.
  3. Depending on 1 and 2, can we work to define the topics similarly to the REST_API.md doc, just so we all agree?
    • Personally, I would prefer a few smaller topics/payloads rather than one big topic/payload. E.g., the current payload I send is the /effects endpoint, but I really don't need to be sending all the effects every time. We could make a currentEffect topic and an effectUpdated topic, with the former pushing just the current effect and interval info when the current effect changes, and the latter pushing just the effect that changed when an effect's settings change. Any discussion around how granular to be, and how much to just keep it simple, stupid, would be welcome.
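The {topic, payload} envelope proposed in (2) could be routed client-side with a small dispatcher along these lines (a sketch; the topic names are the proposals above, not an implemented API):

```javascript
// Sketch of a client-side dispatcher for the proposed
// { topic: <str>, payload: <Object> } envelope. Topic names such as
// "currentEffect" or "effectUpdated" are the suggestions above, not
// an implemented API.
const handlers = new Map();

// Register a handler for one topic.
function on(topic, handler) {
  handlers.set(topic, handler);
}

// Route one raw WebSocket message to its topic handler.
function dispatch(messageText) {
  const { topic, payload } = JSON.parse(messageText);
  const handler = handlers.get(topic);
  if (handler) handler(payload);
  return Boolean(handler); // false if nobody registered for the topic
}
```

A page would then do something like `on("currentEffect", p => { /* update UI */ })` and point `ws.onmessage = ev => dispatch(ev.data)` at it.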

For Implementation: On the backend, I have just hooked into the IR remote for the moment, because I ran into an issue getting g_ptrSystem in the effect manager.

  1. What are the overall design goals around having parts of the backend be able to fire messages on the socket, versus having another "websocketmanager"-style class that manages when and how to send messages?
  2. If we go with global access to publish a message, where would a better place be to hook in that isn't the remote?

Having said all that, there is no hurry to reply to this any time soon. I will be AFK most of December so other than replying to conversation threads, nothing substantial will be done until the new year. Plenty of time to have a think and decide on a direction.

Cheers, Keiran.

rbergen commented 11 months ago

@KeiranHines Nice to see you want to get going with this!

  1. Was there a preference for websocket features? I would like to see at least effect update pushes; I am not sure what else people might want.

The only preference from a project perspective is the one described in the issue you commented on: a WebSocket implementation of ColorDataServer, so the web UI can show what an effect would look like, without having to hook up actual LEDs/panels to the board.

As I've mentioned before myself, using them to "push" effect switches that take place on the board to the UI (because of IR remote interactions or otherwise) makes sense to me too.

  1. Was there a preference for payload standardization? I would prefer JSON with {topic: <str>, payload: <Object>}.

I think the payload format should be largely decided by the front-end, because that's what the WebSocket(s) is/are for. I can imagine that we actually implement more than one WebSocket (maybe one for the ColorDataServer, and one for everything the UI currently supports) with different formats; with color data the size of the payload becomes a factor as well.

personally I would prefer a few smaller topic/payloads rather than one big topic/payload

That makes sense. It's not uncommon that push and pull scenarios use different chunk sizes.

Any discussion around how granular to be, and how much to just keep it simple stupid would be welcome.

The suggestions you made for the two scenarios you described make sense to me. I think we just need to take this on a case-by-case basis - and certainly on something "new" like this, be willing to review earlier decisions at a later time.

For Implementation: On the backend, I have just hooked into the IR remote for the moment, because I ran into an issue getting g_ptrSystem in the effect manager.

You can use g_ptrSystem in EffectManager, but not in effectmanager.h - that creates an unsolvable circular dependency. If you want to define member functions that use g_ptrSystem you have to put them in effectmanager.cpp, and only declare them in effectmanager.h. There already are examples of such member functions in effectmanager.cpp at the moment.

  1. What are the overall design goals around having parts of the backend be able to fire messages on the socket, versus having another "websocketmanager"-style class that manages when and how to send messages?
  2. If we go with global access to publish a message, where would a better place be to hook in that isn't the remote?

I think it makes sense to:

Having said all that, there is no hurry to reply to this any time soon. I will be AFK most of December so other than replying to conversation threads, nothing substantial will be done until the new year. Plenty of time to have a think and decide on a direction.

I hope what I mentioned above is a start. In the interest of transparency: I have thought about picking up the back-end part of this myself, but concluded that's pointless unless we have a front-end to do anything with it - and indeed have an agreement about content and format of content sent (and possibly, received).

KeiranHines commented 11 months ago

That all sounds good to me. I'd be happy to collaborate on this if you'd like. What are your thoughts on having the following sockets? (Naming can be changed.) Each one could easily be its own feature, added over time.

  1. /effects: pushes effect-related updates, similar to the /effects API endpoint, but only pushing the data that has changed, in JSON. The frontend can unpack that data and merge it into the current state.
  2. /colorData: two-way data to send and receive color data, similar to the current TCP/IP socket, but also with the ability to preview the current frame in the frontend.
  3. /stats (optional): pushes stats updates similar to the stats API endpoint. This one probably provides the least value in terms of improving existing functionality.
  4. /debug or /notification (optional): a wrapper around the DebugX macros to send the backend log output to the UI for debugging. Possibly also with the option to send notifications to the frontend, if that was needed for some reason other than debugging.
robertlipe commented 11 months ago

As long as we're using ESPAsyncWebServer as the foundation for anything "web" I'd

Is that immutable? There's a perfectly lovely (actively maintained) alternative: https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/protocols/esp_http_server.html

Then the AsyncSockets layer could become plain ole sockets https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-guides/lwip.html

I know those changes aren't trivial, but if we're going to be an ESP project, I'd rather lean into that and get rid of anything with "Arduino" in the name that's largely just middleware with additional layering.

davepl commented 11 months ago

We’re married to Arduino, and I have several children with it already.

Why would we try to be an ESP-IDF project instead? We’re dependent on a LOT of libs, which I assume are all in turn dependent on Arduino anyway.

Just curious! Sure, it’s lighter weight and closer to the hardware, perhaps, but what tangible benefit would a change deliver?


robertlipe commented 11 months ago


We’re married to Arduino, and I have several children with it already.

I figured that wouldn't go over well.

Why would we try to be an ESP-IDF project instead? We’re dependent on a LOT of libs, which I assume are all in turn dependent on Arduino anyway.

Breaking that dependency tree would indeed be a long pull.

... Long enough that I've given some non-trivial thought to the idea of just taking everything from approximately the effect manager up (down? Toward the bulbs and away from interrupt handlers) and moving that code to something like NuttX or Zephyr. I'd either embrace one SoC family (ESP is pretty nifty) or go portable and run on STM or BL or Pi or whatever. Right now, we don't really get the advantage of vendor-maintained code (esp-idf) or the portability of leaving ATmega behind.

Just curious! Sure, it’s lighter weight and closer to the hardware, perhaps, but what tangible benefit would a change deliver?

Those are two pretty solid reasons!

Many of the libraries we depend upon are abandoned.

The integration with the build system is painful. Being proud that a single-threaded, single-core Python process implements dependency handling without using GCC, yet takes 30 seconds to rebuild the deps graph every time you touch one .cpp file, is not very awesome.

For a stack that caters to 8-bit systems with few resources, it's pretty crazy with resources. Watch how many mallocs happen in even simple String ops. Up and down the stacks (HTTP, String, networking, etc.) the code makes copies willy-nilly. Even amongst the Arduino die-hards, String is pretty widely panned. Think about the recent parallel debugging exercises we had, with the web server silently being unable to serve packets above some size, and some serialization issue because a lower library was tossing errors. Neither reflected well upon those convenient libs we were using.

A full build for integration takes something like 20 minutes and about 35 GB (!) because it checks out, rebuilds, and then hopes to throw away dozens of copies of the same code.

Tons of C89 code that would be lighter in modern C++. As an example, I've experimented with DebugX turning into std::format and it's pretty nifty. I have patches for C++20 pending.

Several of the libraries we rely upon are shackled by compatibility with ancient, tiny hardware. I've tried to help the FastLED group and they're just unable to move forward because ATmega and the 8266 have them deadlocked. Things like support for RGBW strips are jammed up behind losing interrupts, because serial and strips each need 100%.

There's more, but there's no reason to litigate it here. There are solid reasons to keep it and changing is hard without a lot of end-user benefits. (I'm pretty sure it could be made lighter so fewer low memory issues...) I'm just saying it's not a casual thought I've had to saw the code apart.

I'm also quite aware of how many failed/abandoned blinky light projects and products there are around and the "14 competing standards" xkcd meme...



rbergen commented 11 months ago

Disclaimer: I haven't read the whole exchange - I hope to catch up after work today. I'm just quickly responding to one question that I have an opinion about.

Is that immutable?

Not per se, but there has to be a darned good reason to migrate. From a development perspective, I actually find ESPAsyncWebServer very pleasant to work with, in part exactly because it integrates well with other Arduino projects; ArduinoJson very much being one of them.
Looking at the examples in the esp_http_server documentation, the code in our webserver implementation would have to explode in size to provide the same functionality we offer now.

rbergen commented 11 months ago

@KeiranHines

/effects: pushes effect-related updates, similar to the /effects API endpoint, but only pushing the data that has changed, in JSON. The frontend can unpack that data and merge it into the current state.

Makes sense at this level of discussion. We'd have to very clearly define (to the JSON object level) what data we do and don't send at any one point.

/colorData: two-way data to send and receive color data, similar to the current TCP/IP socket, but also with the ability to preview the current frame in the frontend.

I think that last thing is the main reason to make a WS implementation of ColorDataServer in the first place.

/stats (optional) pushes stats updates similar to the stats api endpoint. this one probably provides the least value in terms of improving existing functionality.

You're kind of saying it yourself already, but I don't see what the added value is of pushing this over pulling it. Unless we start raising "alerts" for certain situations, but that's a thing we have nothing in place for yet at any part of our infrastructure/codebase.

/debug or /notification (optional) a wrapper around the DebugX macros to send the backend log output to the UI for debugging. Possibly also with the option to send notifications to the frontend if that was needed for some reason other than debugging.

I'm not sure I really see this one working yet, either. The problem is that for the WebSockets to work, a lot of stuff already has to function well, and I think the types of malfunction we commonly see could get in the way of the debug output that would explain the malfunction actually reaching the web UI user.

About collaborating: I'd love to. I think the first thing to do is agree on the (data) interface between back-end and front-end, so we can then work on the implementations in parallel.

KeiranHines commented 11 months ago

/debug would more be for debugging effect-level issues. For example, if you wanted to fix a logic error in a maths-heavy effect, you could debug out your calculations and use the colour data to reconstruct the effect frame by frame.

For /effect I'd start by using the same JSON keys as the API endpoint. I'd say the current effect and interval keys should update every time the effect changes. The intervals should be sent any time the interval setting is changed. The effects list should only push changes when an effect's setting that is in that object is changed. If effect indexes change, I don't know if it would be best to send all effects again, or to attempt to send just those that changed, with the effectName used as a key to reconcile the list.
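Merging such a partial update into the front-end's current state could be a keyed shallow merge along these lines (a sketch; the key names, including a hypothetical `changedEffects` index map, loosely mirror the discussion and are not the real schema):

```javascript
// Sketch: merge a partial /effects-style update into front-end state.
// Key names (currentEffect, interval, effects, changedEffects) are
// assumptions for illustration, not the project's actual schema.
// `changedEffects` is assumed to be a { index: partialEffect } map.
function mergeEffectsUpdate(state, update) {
  const { changedEffects, ...rest } = update;
  // Overwrite top-level scalars, keep a fresh copy of the effects list.
  const next = { ...state, ...rest, effects: (state.effects ?? []).slice() };
  // Reconcile changed effects by index without touching the originals.
  for (const [i, partial] of Object.entries(changedEffects ?? {})) {
    next.effects[i] = { ...next.effects[i], ...partial };
  }
  return next;
}
```

Because the merge returns a new object and leaves the previous state untouched, it slots naturally into React-style state updates.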

rbergen commented 11 months ago

Hm. Then we would need to distinguish between debug logging we'd like to end up in the web UI, and the rest. I'm still leaning to finding it reasonable to ask of a developer that they hook up their device to their computer via USB, or telnet into the thing - that's also already possible.

Concerning /effect I think I can get going with that based on what you've said so far. With regards to the effect index changes, I think I'll actually only send a message with an indication which indexes changed. I think it's a scenario where it's not excessive for the web app to pull the whole effect list in response - maybe unless it finds the message concerns an index change it just triggered itself, for which the indexes should be enough. Using display names as keys is something I really don't want to do.

KeiranHines commented 11 months ago

Instead of sending which indexes changed, is it better to just send an effectsDirty flag or similar? If I see said flag in the front-end, I will just hit the API endpoint again and refresh everything.

rbergen commented 11 months ago

I was thinking that pulling the effect list from the API by a particular browser (tab) is a bit overkill if the effect order change was initiated by that same browser (tab). That's a scenario the web app could identify by comparing the (moved from and moved to) indexes in a WS message to the most recent effect move performed by the user using the web app, if any.

If this is not something you'd do and you prefer to pull the effect list anyway, then an "effects list dirty" flag would indeed be enough.

robertlipe commented 11 months ago

Re: replacing the Arduino code in general - I didn't honestly expect that discussion to go anywhere. No reason to spend another keypress on it. If I reach a breaking point, I'll do something about it.

I agree that requiring a physical connection for debugging isn't unreasonable. It also helps enforce a little bit of security; we can get away with fewer checks if you have to have physical access to the device anyway. Your IoT network still needs to be "secure enough", of course. Dave's neighbors reprogramming his holiday lamps could be frustrating.

For debugging, though, I'd love to be able to collect and step through frames drawn to a real computer, even if that frame is a strip. They can be a flip-deck of GIFs without compression in the dumbest possible way or xpm files (WEBP? PNG? BMP?) or whatever. It'd be super to be able to view what's being sent to a display (perhaps without even having any LEDs attached) vs. what actually shows up at the display.

This is (yet another) idea I started that I didn't get very far with. I was just going to have the ESP collect "screen dumps" of the g()->LEDS[] on every (?) Draw() and then hoover them to the computer via the web server, or a dedicated scp mutant that handled multiple files, or let the ESP write the shell script with the right calls to curl or something.

"Not all things worth doing are worth doing well."

Re: telnet debug logging - I've considered introducing a slight variation of our debugging that buffers a few (tunable) kilobytes into a circular queue, so that the valuable startup chatter doesn't get lost before you can get a connection going. It could buffer the first N writes and only start throwing them away if a connection hasn't been opened in the first few seconds. That allows a 'pio upload -e mesmerizer && telnet foo' to do something reasonable and not lose that startup info. It can lose the 0.2 Hz "I wrote some frames" messages instead, as those are highly temporal. Oh, and I have some commands in the works that will re-display some of that startup stuff on demand.
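One reading of that buffering policy (retain the first N lines so startup chatter survives, drop later periodic lines until a client drains the buffer) can be modeled in a few lines. The firmware version would of course be C++; this just sketches the behaviour:

```javascript
// Sketch of the buffered startup-log idea: keep the first `capacity`
// lines so startup chatter survives; later (periodic) lines are
// dropped until a client connects and drains the buffer.
class StartupLogBuffer {
  constructor(capacity) {
    this.capacity = capacity;
    this.lines = [];
  }
  push(line) {
    if (this.lines.length < this.capacity) this.lines.push(line);
    // else: dropped; the early startup lines are the valuable ones
  }
  // Replay buffered lines once a telnet/WebSocket client attaches.
  drainTo(sink) {
    for (const line of this.lines) sink(line);
    this.lines = [];
  }
}
```

A time-based variant (switch to drop-oldest a few seconds after boot) would only change the `push` policy.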

I'm a toolmaker, so improving our own quality of life is something I care about. Screen capture/display and better logging are pretty important, IMO.


KeiranHines commented 11 months ago

I was thinking that pulling the effect list from the API by a particular browser (tab) is a bit overkill if the effect order change was initiated by that same browser (tab). That's a scenario the web app could identify by comparing the (moved from and moved to) indexes in a WS message to the most recent effect move performed by the user using the web app, if any.

Currently, from memory, I do pull the full list again. Mostly because in the worst case, moving the last effect to be the first would mean every effect changes index. Granted, browser-side this wouldn't be a hard update, but I thought it was safer just to sync.

If this is not something you'd do and you prefer to pull the effect list anyway, then an "effects list dirty" flag would indeed be enough.

I think I'd prefer the flag, that way at least the code path for updating is the same for all clients and there is less chance of desync.

KeiranHines commented 11 months ago

@rbergen just a thought: for the colorData push from server to browser, I am assuming we will need to supply the color data array and an x,y for the width and height of the display, so the UI can render at the correct scale. Did you have a preference for this? I would just default to JSON, but that is open for discussion.

{
  "data": [],
  "width": int,
  "height": int
}
rbergen commented 11 months ago

I think I'd prefer the flag, that way at least the code path for updating is the same for all clients and there is less chance of desync.

Fair enough. Dirty flag it is.

Did you have a preference for this? I would just default to JSON, but that is open for discussion.

As I said before, we're adding the Web Socket stuff for consumption by JavaScript, for which JSON is the lingua franca.

KeiranHines commented 11 months ago

Ahh, there may have been a bit of poor communication on my behalf. I assumed JSON; I was more meaning the JSON schema. Having more of a think about it, I think the option above is less ideal, as you'd always send back the same width/height, which would be a waste. It may be better to have a way to get the globals (width/height) from an API endpoint, as they are static; then the colorData socket can just be a flat array of width*height integers, being the colors of each pixel. Alternatively, the socket could return a 2D array, being each row of the matrix. Or if there is a third option that is easier/more native for the backend to send, I would be happy with that. I'd prefer to move as much of the computational load for the colorData from the device to the browser, so the impact on device performance would be minimal.

rbergen commented 10 months ago

Ok, I'll add a "dimensions" endpoint in the WS ColorDataServer context. Concerning the actual color data, I was indeed thinking of a one-dimensional JSON array of length width*height with pixel colors in them. I'll probably use the same format we use for CRGB values elsewhere (i.e. the 24-bit ints). If the bit-level operations on the back-end to put the ints together turn out to be too expensive then plan B would be sending triplets (probably in arrays again) of separate R, G and B values for each pixel.
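On the receiving side, unpacking those 24-bit ints into the flat RGBA byte layout a canvas `ImageData` expects could look like this (a sketch, assuming the 0xRRGGBB packing described above):

```javascript
// Convert an array of packed 0xRRGGBB ints (the assumed CRGB wire
// format) into the flat RGBA Uint8ClampedArray layout used by
// canvas ImageData: 4 bytes per pixel, alpha forced to opaque.
function packedToRGBA(colors) {
  const out = new Uint8ClampedArray(colors.length * 4);
  colors.forEach((c, i) => {
    out[i * 4]     = (c >> 16) & 0xff; // R
    out[i * 4 + 1] = (c >> 8) & 0xff;  // G
    out[i * 4 + 2] = c & 0xff;         // B
    out[i * 4 + 3] = 255;              // A (fully opaque)
  });
  return out;
}
```

The result can be handed straight to `new ImageData(rgba, width, height)` and drawn with `putImageData`.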

rbergen commented 10 months ago

@KeiranHines So, I've managed to put an initial version of my web socket implementation together. It's in the colordata-ws branch in my fork; the "web socket" meat of it is in the new websocketserver.h. I've deviated a little from the format you mentioned in one of your earlier comments. Concretely:

The overall thinking behind this JSON format is that I'd like to keep the possibility to combine/bundle messages in the future. Of course, if this flies in the face of what works for you, I'm open to alternative suggestions. In any case, I'm going to pause the implementation here until you've been able to do some initial testing with it - and we either have confirmation that it works, or otherwise have an indication how/why it doesn't.

Until then, happy holidays!

KeiranHines commented 10 months ago

Thank you! Frame looks like it will plug straight into the UI test setup I have. The effect endpoint looks more than adequate; I don't see any issues with it from the details provided. I'll take a look at both in the new year.

Happy holidays to you as well and anyone else following along.

KeiranHines commented 10 months ago

@rbergen minor update.

I have started to integrate the frames socket. It's available on my colordata-ws branch. I have noticed that periodically I seem to drop the connection to the socket; I have not yet implemented reconnect logic. I have also had three times now where I have got invalid JSON, normally missing a ']' at a minimum. Finally, on every effect I have tested so far, I have noticed the Mesmerizer will restart periodically.

I was wondering if you could try to replicate the issues on your side. I am not sure yet if it's because of my development environment or something in the socket implementation.

Any other feedback on the UI side is welcome while you are there. I am noticing some performance and render-quality issues on the browser side; I aim to work on those as I go, but that's next year's problem.

rbergen commented 10 months ago

@KeiranHines Thanks for the update! I'll test with your colordata-ws branch when I have the time - which may also be a "next year problem" for me. Looking at the behaviour you're describing (and particularly the inconsistencies in that behaviour) I think we may be pushing the board beyond breaking point with the JSON serialization of the colour data. Without wanting to abandon that approach already, I'd like to put the question on the table if it would be feasible for you to consume a frame data format that is closer to the "raw bytes" that the regular socket sends out. Maybe an actual raw "binary" packet, or otherwise a simple Base64-encoded string thereof?

KeiranHines commented 10 months ago

I don't see why I wouldn't be able to process a raw "binary" packet. I have dealt with base64 encoded images before so that could also work. Currently I am using a canvas to render the 'image' of the matrix. Its underlying data structure is simply a uint8 array where every 4 indices are a rgba value for the next pixel. So I am sure I can transform anything you send to that.

It may also be worth looking at rate-limiting the frames sent, potentially down to say 10 fps, just to see if that reduces the load at all.
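The rate limit could sit on either side of the socket; client-side, the simplest policy is just to drop frames that arrive too soon after the last accepted one. A sketch (the clock is injectable so the behaviour is testable):

```javascript
// Sketch of a frame rate limiter: drop frames arriving less than
// minIntervalMs after the last accepted one (100 ms ≈ 10 fps).
// `now` is injectable so the policy can be tested deterministically.
function makeFrameLimiter(minIntervalMs, now = Date.now) {
  let last = -Infinity;
  return function shouldRender() {
    const t = now();
    if (t - last < minIntervalMs) return false; // too soon: drop frame
    last = t;
    return true;
  };
}
```

Usage would be a guard in the message handler, e.g. `if (shouldRender()) render(frame);` — though a server-side limit would also save the board the serialization work for dropped frames.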

robertlipe commented 10 months ago

I'm not totally sure of the context here, but a JSON-serialized version of a binary packet is:

A) It's inherently 100% larger than the original data, because it has to be a copy. You can't just point the DMA controller at a JSON string to shove out the 2812s. There's always another copy involved.
B) Base64 is ~33% larger than comparable binary. Gzipping gets some of that back, but that's not free.

So I don't know Rutger's justification, but we're already at a breaking point for lots of RAM-consumption cases, and that's not likely to help. The colorserver data path can be pretty harsh. In fantasy-land, you'd point the I2S/RMT/SPI (what-ev-ah) DMAC directly at the received frame from the color server receiver and avoid picking the data up, groping it, and putting it back down again, without even sullying the SoC's DCACHE. In base64-land, that's just not happening.
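The ~33% figure follows directly from the encoding: Base64 maps every 3 input bytes to 4 output characters (rounded up for padding). Sizing it for a hypothetical 64x32 RGB matrix frame:

```javascript
// Base64 output size: 4 characters per 3 input bytes, rounded up
// (padded variant). The 64x32 matrix size is an illustrative example.
function base64Length(nBytes) {
  return 4 * Math.ceil(nBytes / 3);
}

const rawBytes = 64 * 32 * 3;           // 6144 bytes of raw RGB pixels
const encoded = base64Length(rawBytes); // 8192 chars, 4/3 the size
```

So every frame gains roughly 2 KB of pure encoding overhead before any JSON framing is added on top.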

I don't even KNOW this is the problem under the microscope here; it's just the kind of thing that systems software people fret about.

RJL


rbergen commented 10 months ago

@KeiranHines Yes, the frame rate limit also crossed my mind. I am just exploring options how to decrease the load if that indeed turns out to be the problem. In any case, I need to first see what logging the board spits out when the web socket is connected before we can come to any conclusion on how to move forward.

@robertlipe I'm aware of all that. :) We're trying to cross the low-level system (raw bytes galore)/web browser ("everything" is JSON) boundary here, and we're finding out what works and what doesn't as we go along. Usually this sort of exploratory development is not (or less) visible because it happens on one bench/desk, but because my back-end stuff needs Keiran's front-end stuff to be asked to do anything, it is very visible here.

I actually split off the color data web socket from the effect event web socket from the get-go to be able to change the data format for one independently of the other. As in: I was already thinking we might need to review some interface architecture decisions along the way.