Closed davidswinegar closed 4 years ago
@davidswinegar I ran your branch for a different scenario that involves lots of big strings. I can confirm that your changes improve GC behavior a lot. In my case, they reduced the average processing time by around 50%.
Here's an example of what I was seeing before your changes:
The orange boxes are all GC calls (mostly minor, but there are some major GC calls too).
And here's after:
Notice there are far fewer calls to GC.
Sorry to ask for more, but please check performance for short strings too -- in my microbenchmarks TextDecoder
added substantial overhead for them. (Might be fine, though -- outside of microbenchmarks it might not add up to much. I guess we can look at profiles for representative numbers?)
@MatthewSteel Updated this to use one TextDecoder per message definition - I saw the same overhead in my benchmarks. Since we have a small number of total topics, I ran it with ~800 and it seemed to be pretty quick - and we'd likely have only a fraction of those topics actually enabled.
This should not change runtime, but it should reduce memory pressure and garbage collection when reading ROS strings by avoiding string concatenation on every character.
Test plan: added a test for long strings.