Open JPHutchins opened 6 months ago
@JPHutchins thanks for the kind words re: NinjaTerm!
50 kB/s, a.k.a. 50k chars/second, is quite a lot :-D I've never tested it at such a rate, so it's not surprising that it's not working so well. I'm interested to know how much faster we could make NinjaTerm, and whether we could make it support such rates! What is your use case at this data rate? I presume you're not trying to read the data in the terminal at these speeds, so are you just using NinjaTerm for logging purposes? Or are there bursts of data at this rate, and then you want to inspect it in the Terminal Pane during the quiet periods?
My thoughts re: speeding NinjaTerm up:
The first thing to do is probably to use Chrome dev tools (or similar) and analyze the performance under a heavy load. This will show where the bottlenecks are and what is taking too long (time spent in each function, etc.).
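As a small illustration of how that profiling could be made easier to read, User Timing marks give the hot path a named span in the dev tools Performance tab. This is a hedged sketch with hypothetical names (`parseChunk` stands in for whatever the real RX parsing function is), not NinjaTerm code:

```typescript
// Sketch: bracket a hot code path with User Timing marks so it shows up as a
// named span in the Chrome dev tools Performance tab. parseChunk is a
// hypothetical stand-in for the real RX parsing function.
function parseChunk(data: Uint8Array): number {
  let count = 0;
  for (let i = 0; i < data.length; i++) count++; // pretend parsing work
  return count;
}

function instrumentedParse(data: Uint8Array): number {
  performance.mark("parse-start");
  const n = parseChunk(data);
  performance.mark("parse-end");
  performance.measure("parseRxData", "parse-start", "parse-end");
  return n;
}

const bytes = instrumentedParse(new Uint8Array(512));
const entry = performance.getEntriesByName("parseRxData")[0];
console.log(`parsed ${bytes} bytes in ${entry.duration.toFixed(3)} ms`);
```

The `performance` global and User Timing API are available in both browsers and Node, so the same instrumentation works in a headless test harness.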
Are you willing to help with this feature? I develop NinjaTerm in my spare time and it would be awesome to have other people contribute to it.
I'm hoping to find some time later this week to create a fake virtual port, dump data in at 50kB/s and see what happens :-)
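For the fake-stream side, the pacing math is simple. A minimal sketch (pure TypeScript, no serial port; a real harness would push each chunk into the RX path or a virtual port on every tick):

```typescript
// Sketch: pace a synthetic data source at a target byte rate. This is just
// the chunk-sizing math; a real test would emit one chunk per tick via
// setInterval and feed it into the terminal's RX handler.
const TARGET_BYTES_PER_SEC = 50_000; // 50 kB/s
const TICK_MS = 10;                  // how often a chunk is emitted

function bytesPerTick(bytesPerSec: number, tickMs: number): number {
  return Math.round((bytesPerSec * tickMs) / 1000);
}

function makeChunk(size: number): Uint8Array {
  const chunk = new Uint8Array(size);
  for (let i = 0; i < size; i++) chunk[i] = 0x30 + (i % 10); // ASCII digits
  return chunk;
}

const chunk = makeChunk(bytesPerTick(TARGET_BYTES_PER_SEC, TICK_MS));
console.log(`emitting ${chunk.length} bytes every ${TICK_MS} ms`); // 500 bytes per tick
```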
Yes, consider it my responsibility until some other crazy people want it!
Good idea re: Chrome dev tools and console.log.
Use case is data acquisition + live graphing during product development. For the particular device I tested, the live graphing is presently handled by a Python application + matplotlib.
I've had good experiences with using com0com to test various serial applications in the past.
Worth noting that TeraTerm can in fact render the 50 kB/s text... nothing can stop TeraTerm... I briefly worked on a Rust-based serial terminal and ran into performance issues with how quickly I was updating Windows Terminal. And issues with ANSI escapes 😄. More generally, I've had issues with "real time" data in web frameworks, but I think that's largely due to my poor understanding of React + JS.
A few questions. Is this PWA using web workers? If not, are you open to using them? I've never used them, but there are new "shared memory" features that might be compelling.
The idea would be to leave the main (window) thread mostly idle to handle the event loop and put the `parseRxData` work on its own thread. Typically this sort of approach isn't great with web workers because of the de/serialization overhead, but if shared memory is supported then the signaling between the threads could be simple events (e.g. "next buffer is ready to be printed to the terminal window and/or graphed").
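Sketching that shared-memory signaling idea: a single-producer/single-consumer byte ring buffer over a `SharedArrayBuffer`, with `Atomics` for cross-thread visibility. Everything here is illustrative, not existing NinjaTerm code, and the worker wiring itself is elided:

```typescript
// Sketch: SPSC byte ring buffer backed by a SharedArrayBuffer, so a web
// worker could parse RX data while the main thread drains it without
// structured-clone serialization. Head/tail indices live in an Int32Array
// so Atomics can be used on them.
const CAPACITY = 1024;
const sab = new SharedArrayBuffer(CAPACITY + 8); // data bytes + two indices
const data = new Uint8Array(sab, 0, CAPACITY);
const idx = new Int32Array(sab, CAPACITY, 2); // [0] = head (write), [1] = tail (read)

function push(bytes: Uint8Array): number {
  let head = Atomics.load(idx, 0);
  const tail = Atomics.load(idx, 1);
  let written = 0;
  for (let i = 0; i < bytes.length; i++) {
    const next = (head + 1) % CAPACITY;
    if (next === tail) break; // buffer full; drop (or back-pressure) here
    data[head] = bytes[i];
    head = next;
    written++;
  }
  Atomics.store(idx, 0, head);
  // In a real worker setup, notify the consumer here (Atomics.notify or
  // a lightweight postMessage) that the next buffer is ready.
  return written;
}

function drain(): Uint8Array {
  let tail = Atomics.load(idx, 1);
  const head = Atomics.load(idx, 0);
  const out: number[] = [];
  while (tail !== head) {
    out.push(data[tail]);
    tail = (tail + 1) % CAPACITY;
  }
  Atomics.store(idx, 1, tail);
  return Uint8Array.from(out);
}

push(Uint8Array.from([0x68, 0x69])); // write "hi"
console.log(drain()); // drains the two bytes just pushed
```

Note that `SharedArrayBuffer` in a browser requires cross-origin isolation headers (COOP/COEP) to be enabled for the PWA.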
Good news - with a 1000 Hz stream, all data is received and rendered. Just not live.
Here's a recording, you can see where the stream starts.
I think the goal here should be to throttle changes to React state, because a lot of the stack looks like React-y virtual DOM / GUI state updates that probably don't need to be happening so fast (at ~1 ms intervals 🤣).
I'll probably need to understand whether anything in this method could cause React to think that the DOM needs an update. Perhaps `this.terminalRows` is being listened to and causes a GUI update? If so, it could be throttled so that it sees batches of characters.
I think that the cause of frequent component re-renders may be the `makeAutoObservable(this)` on the `TerminalRow`.
If I can confirm that `TerminalRow` is re-rendering on each character, then I'll look into triggering `TerminalRow` renders on a clock rather than on receipt of characters.
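As a hedged sketch of that clock-driven batching (plain TypeScript with hypothetical names; in the real code `flush()` would be a MobX action invoked from a ~60 Hz timer):

```typescript
// Sketch: accumulate incoming characters and only mutate the React-visible
// state in flush(), which a timer (e.g. setInterval at ~16 ms for 60 Hz)
// would call, instead of notifying observers once per character.
class BatchedTerminal {
  private pending: string[] = [];
  rows: string[] = [""]; // stand-in for the observable terminal rows

  onRxChar(ch: string): void {
    this.pending.push(ch); // cheap: no observers are notified here
  }

  flush(): void {
    if (this.pending.length === 0) return;
    for (const ch of this.pending) {
      if (ch === "\n") this.rows.push("");
      else this.rows[this.rows.length - 1] += ch;
    }
    // In the MobX version this method would run as an action, so observers
    // see one batched update per flush rather than one per character.
    this.pending = [];
  }
}

const term = new BatchedTerminal();
for (const ch of "hello\nworld".split("")) term.onRxChar(ch);
term.flush(); // a single "render" for 11 characters
console.log(term.rows); // rows are now "hello" and "world"
```

At 50 kB/s this turns ~50,000 observer notifications per second into ~60, while the raw characters are still captured losslessly in between flushes.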
Hi @JPHutchins , wow great you've already made a good start on this.
Yes I would be open to using web workers if they would help (although I think the discussion has moved on a bit since then).
Yes, throttling the changes to the UI would be a great first step; we might get huge gains from this (it might be all we need) for relatively low effort.
Yes, `this.terminalRows` is being listened to. As you found out, each row is an "observable" and can make React re-draw the UI. As a basic rundown, MobX is used to create the "model" (the non-UI bit), and the React UI re-renders when bits of the model change. In particular, if you look in `SingleTerminalView.tsx` there is a `FixedSizeList` component, and `terminal.filteredTerminalRows` is passed in. This fixed-size list only renders the rows currently in view, so performance doesn't take a hit when there are 1000s of rows in the terminal but only the ~20 visible ones are rendered into the DOM.
The fixed-size list is a 3rd-party component, so I don't think we want to change how that works. One idea might be to create a new array in the model called `displayedRows`. The fixed-size list could look at that instead of the filtered terminal rows. After some timeout (say, 0.5 s), you could make `displayedRows` equal to `filteredRows`.
Actually, even better: since the filtered rows are only a UI concern, you could just update them on a timer based on the raw terminal rows.
Whoops, I've just noticed that `TerminalRow.tsx` and `SingleTerminalChar` in the views directory should actually be in the model directory. Will update that at some point; sorry if that confused you.
Expected Behavior
The terminal renders arbitrarily "fast" data streams at a ~60 Hz refresh rate without blocking the event loop.
Current Behavior
I LOVE this project and see a bright future ahead! Generally, we write custom parsers per project, you know the deal 🙄.
Presently we have a rather demanding one that streams up to 5 channels at 1000 Hz, around a 50 kB/s overall data rate (400,000 bps). I know that normal baud rates can't support that; it's USB CDC ACM. This sort of stream seems to make this PWA flop: it updates somewhat in the terminal scroll, but it's batched heavily, then just stops. The app becomes unresponsive and the tab has to be closed.
Steps to Reproduce
Run a 1000 Hz+ ASCII stream, with or without graphing.
Possible Solution (Not obligatory)
I'd need to look into how the streams are being buffered by this app.
Your Environment