Closed: miaulightouch closed this issue 4 months ago.
What are you looking at when you say "the memory usage is continually increasing"?
I can't reproduce the memory leak with your repository. This is what I get after a few minutes:
$ ps -o pid,%cpu,%mem,vsize,rss,cmd -p 9852
PID %CPU %MEM VSZ RSS CMD
9852 0.0 0.1 11780976 60964 node index.js
And after a couple hours running:
$ ps -o pid,%cpu,%mem,vsize,rss,cmd -p 9852
PID %CPU %MEM VSZ RSS CMD
9852 0.0 0.1 11780976 60896 node index.js
Sorry, please re-pull the repository and try again. I forgot to handle the error and establish a reconnection.
Here's a screenshot from Docker Desktop. I rebuilt libdatachannel in node:lts-alpine and still get the same result.
I still can't reproduce with docker compose, even after pulling the latest code.
However, such a behavior is expected if you create new peer connections without closing existing ones. On offer, you need to close the existing peer connection.
Yes, I know. It's minimal example code; I removed a lot of checks to make it easy to understand.
It also happened in Kubernetes, using the same Dockerfile from the example repository to build the image.
Do you close the existing peer connection when re-creating one?
class Client {
  onOffer(data: OmeOfferResponse) {
    this.id = data.id;
    this.peer_id = data.peer_id;
    // Close the previous peer connection before creating a new one
    if (this.peer) this.peer.close();
    this.peer = new PeerConnection(this.id.toString(), data.iceServers);
    this.peer.setRemoteDescription(data.sdp.sdp, data.sdp.type as DescriptionType);
    //...
  }
}
Yes
OK, I can reproduce it; I had forgotten that docker compose does not rebuild automatically.
So what happens is that you don't set onMessage callbacks to receive the media data, so it queues up, which creates the memory leak. Setting callbacks on videoTrack and audioTrack won't fix anything, as those tracks are useless on the answerer side; you need to receive the offered tracks with the onTrack callback instead. Then set onMessage callbacks on the received tracks and check that you get data.
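For illustration, here is a minimal sketch of that fix on the answerer side, extending the snippet above; the Track import, the exact callback signatures, and the OmeOfferResponse shape are assumptions carried over from this thread, not a verified API reference:

import { PeerConnection, Track, DescriptionType } from 'node-datachannel';

class Client {
  private peer?: PeerConnection;
  // Keep received tracks referenced from a long-lived object
  // (see the garbage collection discussion further down)
  private tracks: Track[] = [];

  onOffer(data: OmeOfferResponse) {
    if (this.peer) this.peer.close();
    this.peer = new PeerConnection(data.id.toString(), data.iceServers);
    // Register onTrack before setting the remote description
    // so that no offered track is missed
    this.peer.onTrack((track: Track) => {
      this.tracks.push(track);
      // Drain the incoming media data so it does not queue up
      track.onMessage((msg) => {
        // process or discard msg here
      });
    });
    this.peer.setRemoteDescription(data.sdp.sdp, data.sdp.type as DescriptionType);
  }
}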
Thank you, I see no more memory leaks in my test environment. I think this issue can be closed.
Before closing this issue: could this example also have the same problem? https://github.com/murat-dogan/node-datachannel/blob/master/examples/media/media.js
No, because in that example the Node side is the offerer: it adds tracks and offers them to the browser, while the browser receives them (the direction of the tracks themselves does not matter).
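For context, a rough sketch of the offering pattern that example uses; the Video class, addH264Codec, addTrack, and sendMessageBinary names are recalled from the node-datachannel media example and should be treated as assumptions here:

import { PeerConnection, Video } from 'node-datachannel';

// Offerer side: declare a local video track and offer it to the browser
const pc = new PeerConnection('pc', { iceServers: ['stun:stun.l.google.com:19302'] });
const video = new Video('video', 'SendOnly');
video.addH264Codec(96);
const track = pc.addTrack(video);

// Media (e.g. RTP packets) is then pushed into the track once it opens:
// if (track.isOpen()) track.sendMessageBinary(rtpPacket);

Since this side adds and sends the tracks, there is no inbound media to drain with onMessage, so the queuing problem does not arise.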
I appreciate your explanation. I now understand how this library works, and it is awesome.
@paullouisageneau
Sorry for bothering you again. Registering onMessage on the track from the onTrack event solved my problem. However, onMessage receives data for a few seconds and then stops. How can I fix this?
The connection is not disconnected.
I may have found the bug, but I'm not sure.
If I keep calling track.isOpen(), the messages won't suddenly stop.
Please check that your track variable is not garbage collected, i.e. that it is not stored only as a local variable.
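In other words, if the Track object is held only in a local variable, the wrapper can become unreachable and be garbage collected, after which its callbacks stop firing; that would also explain why polling track.isOpen() kept the messages flowing, since the polling closure kept the track reachable. A minimal sketch of the difference, where handleMedia is a hypothetical handler:

// Fragile: the track is referenced only inside this callback scope and
// may be garbage collected, after which onMessage stops being invoked
peer.onTrack((track) => {
  track.onMessage(handleMedia);
});

// Robust: keep the track reachable from a long-lived reference
const tracks: Track[] = [];
peer.onTrack((track) => {
  tracks.push(track);
  track.onMessage(handleMedia);
});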
It works!
I am attempting to use node-datachannel to implement WebRTC for benchmarking purposes. I've noticed a gradual memory leak, which becomes more pronounced when increasing the number of clients from one to 100.
Here is my reproduction repository:
Simply run docker compose up -d, and you will observe the memory usage continually increasing.
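To watch the growth, one way is to stream container memory usage with docker stats while the stack is running:

$ docker stats

The MEM USAGE column for the node container should climb steadily while the leak is present.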