zepp-health / zml

https://zepp-health.github.io/zml/
Apache License 2.0

Copy incoming chunks to a buffer instead of iterative concats #1

Closed: DotIN13 closed this 10 months ago

DotIN13 commented 11 months ago

The current implementation of chunk reception stores all incoming chunks in a list and concatenates them together one by one. Since Buffer.concat creates a new Buffer on every call, the total memory cost of this approach is exceedingly high.

For a full payload of n chunks, a concat is performed for each incoming chunk, creating intermediate Buffers of length 1, 2, 3, ..., n chunks along the way, for a total allocation complexity of O(n^2).
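As an illustration, here is a minimal sketch of the quadratic pattern described above, using the Node-style Buffer API (the handler name is hypothetical; this is not the actual zml code):

```js
// Anti-pattern: re-concatenate on every chunk.
// Receiving chunk i copies all bytes received so far, so n chunks
// copy 1 + 2 + ... + n = n(n+1)/2 chunk-lengths of data: O(n^2).
let received = Buffer.alloc(0);

function onChunk(chunk) {
  // Buffer.concat allocates a brand-new Buffer and copies both inputs into it.
  received = Buffer.concat([received, chunk]);
}
```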

This pull request instead creates a single Buffer upon reception of the first chunk, and copies each incoming chunk into that main buffer. Ideally, the GC will identify and release the incoming chunks, since they are no longer referenced once copied. The total memory used is thereby reduced to just the n chunks.
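A minimal sketch of this approach, assuming the total payload length is known when the first chunk arrives (e.g. from a header; the handler and parameter names are illustrative, not the actual zml code):

```js
let payload = null; // the single preallocated buffer
let offset = 0;

function onChunk(chunk, totalLength) {
  if (payload === null) {
    // Allocate exactly once, on reception of the first chunk.
    payload = Buffer.alloc(totalLength);
  }
  // Copy the chunk into place; nothing retains a reference to it
  // afterwards, so the GC is free to reclaim it.
  chunk.copy(payload, offset);
  offset += chunk.length;
}
```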

DotIN13 commented 11 months ago

I've also set up a minimal test case at https://github.com/DotIN13/zml-oom-example to track potential memory leaks.

htoooth commented 11 months ago

@DotIN13 thank you, you have saved me some time. I will check it as soon as possible.

DotIN13 commented 11 months ago

It is also worth noting that if I change the callback-based method in the above example to an event-based one, that is, using an EventBus, the maximum number of 10 KB files transferred before OOM becomes 13 instead of 7.

htoooth commented 10 months ago

I have solved the memory leak problem, but the current communication still cannot transmit large payloads; it seems that 10k is the limit at present. @DotIN13

DotIN13 commented 10 months ago

Tested the latest version on my Balance. It just works! I was able to transmit 50 10 KB packages in a single run. But the total transmission took 300 seconds, and I observed a significant slowdown around the 20th package, so I guess the GC is still heavily stressed in the process.

htoooth commented 10 months ago

Yes, the garbage collection (GC) will be triggered when memory is insufficient. @DotIN13

DotIN13 commented 10 months ago

I did a few more tests and found that my previous interpretation of the slowdown around the 20th package was incorrect. The slowdown is actually caused by the Zepp app going into the background, which cuts the transfer rate in half. Tests with the Zepp app in the foreground took around 180 seconds, while it could take up to 370 seconds with the phone app in the background. I guess this might be a known issue, since Young mentioned the other day that Bluetooth transmission might be throttled by phone vendors.

And my other finding does align with your observations: the watch still couldn't handle anything above 10k. In my case, I attempted to receive two 10k files in each Promise, and it would always go OOM after 6-7 files were transmitted. Sadly, this PR probably doesn't help in that situation.

To sum up, now that the leak has been resolved, the only thing that might still be taking up the watch's memory (which is approximately 1.2M, I guess?) is the message handling process itself. But what could be taking up such a huge space? We are talking about mere 10 KB packages here.

Anyways, this is indeed a breakthrough. Thanks for the efforts!

htoooth commented 10 months ago

@DotIN13
Yes, you are right. BLE transfer speed depends on the mobile system status and the Zepp app status.

The memory budget for all JS apps is 3 * 1024 KB (3 MB), which includes the memory used by the watchface.

If the memory usage of a JS program exceeds 3 MB, there will be an out-of-memory error.

Therefore, I suggest using messaging only for control instructions and other small data, and using the file transfer feature for large data.
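For example, a rough sketch of that split in an app-side service. This is hedged: `sendFile`, the import path, and the request shape are assumptions based on zml's messaging and file transfer features, so check the zml docs for the actual API:

```js
import { BaseSideService } from '@zeppos/zml/base-side';

// AppSideService is assumed to be the Zepp OS app-side global.
AppSideService(
  BaseSideService({
    onRequest(req, res) {
      if (req.method === 'map.download') {
        // Small control reply over the message channel.
        res(null, { status: 'ok' });
        // Bulk payload over the file transfer channel instead of messaging,
        // keeping message traffic well inside the 3 MB JS memory budget.
        this.sendFile('data://download/map.bin');
      }
    },
  })
);
```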

DotIN13 commented 10 months ago

Okey dokey @htoooth, thanks for the tips, they are immensely useful. My empirical observation was exactly that the watchface and framework stuff usually take up 1.5M, leaving us around 1.5M to play with.

I'll migrate my map application to file downloads for now. Thanks again. Nice working with ya.