libp2p / js-libp2p-websocket-star-rendezvous

The rendezvous service that lets libp2p-websocket-star enabled nodes meet and talk with each other
MIT License

fix: Leaks #16

Closed mkg20001 closed 4 years ago

mkg20001 commented 6 years ago

@VictorBjelkholm could you confirm whether this solves the leak problem?

daviddias commented 6 years ago

@VictorBjelkholm any update?

victorb commented 6 years ago

Does not solve the issue. Forgot to update here but wrote an update on #ipfs-dev. Memory consumption seems a bit better, but this PR does not completely solve the issue.

So the deployed ws-star service still dies every now and then.

mkg20001 commented 6 years ago

How many gigs of RAM does it need currently?

victorb commented 6 years ago

The limit of the process is 8GB currently but I'm sure it'll consume all the memory it can get.

mkg20001 commented 6 years ago

Hmmm... let's try with 40GB. I could host a server on some unused hosting capacity I have available at ZionHost. Would it be possible to do that experiment?

victorb commented 6 years ago

The server has 16GB so we could try that, but it doesn't really solve the problem; it will die at one point or another, as memory is not infinite. Hopefully @pgte can share some memory-leak debugging tips and we can get the leak fixed.

Also, something I've noticed with just 8GB: the more memory is being used, the slower the process runs. So I'm not sure whether even having it use 8GB is feasible.

mkg20001 commented 6 years ago

Server up and running on host4.zion.host:9090

pgte commented 6 years ago

@VictorBjelkholm yes, that tends to happen in Node and JS in general. The more memory is occupied, the slower the process gets, since the amount of memory that the garbage collector needs to scan increases.

You can get proof of this by instrumenting the code (0x can be useful here), generating some load (if the load is HTTP, it is usually generated with a custom script or an HTTP benchmarking tool such as autocannon, artillery or similar) and then generating server-side flame graphs from the sampled stack traces. If memory pressure is significant, you should see V8's garbage collection show up in red in the flame graph.
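
For illustration, here is a minimal sketch of driving load with autocannon's programmatic API. The endpoint, connection count and duration are placeholder assumptions, and the ws-star rendezvous service actually speaks websockets, so a real test would need a websocket client script; this only shows the general workflow of sustaining load while the process is being profiled:

```js
// Hedged sketch: generate sustained HTTP load against a locally running
// service while it is instrumented (e.g. under 0x or --inspect).
const autocannon = require('autocannon')

autocannon({
  url: 'http://localhost:9090', // assumed local endpoint, adjust as needed
  connections: 100,             // concurrent connections
  duration: 60                  // seconds of sustained load
}, (err, results) => {
  if (err) throw err
  console.log('requests/sec (avg):', results.requests.average)
})
```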

Once memory pressure is proven, the next step would be to detect the leak by comparing heap dumps taken before and after the load. I usually take one heap dump on a freshly started process, and then another heap dump after it has been under load.

To generate heap dumps you can use Chrome DevTools with the Node.js `--inspect` flag or, programmatically, something like heapdump.
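
For example, a minimal sketch of taking snapshots programmatically with the heapdump module; the file paths, labels and snapshot points are illustrative assumptions:

```js
// Hedged sketch: write .heapsnapshot files before and after a load run,
// so they can later be compared in Chrome DevTools' Memory tab.
const heapdump = require('heapdump')

function snapshot (label) {
  const file = `/tmp/${label}-${Date.now()}.heapsnapshot`
  heapdump.writeSnapshot(file, (err, filename) => {
    if (err) console.error('snapshot failed:', err)
    else console.log('heap snapshot written to', filename)
  })
}

snapshot('baseline')        // right after the process starts
// ... generate load against the service ...
// snapshot('after-load')   // once the load run has finished
```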

Then you can compare them using the Chrome DevTools comparison view.

Hope this helps!

mkg20001 commented 6 years ago

@pgte My problem is rather that the data I get in the Chrome DevTools does not tell me which exact part of the code is causing the leak (basically which var has a circular reference that prevents it from being freed).

pgte commented 6 years ago

@mkg20001 sorry, missed your comment. Yes, after knowing which objects are leaking, now comes the tricky part of trying to understand why they're not being dereferenced, and for that we don't have any automated tools :)

jacobheun commented 4 years ago

Closing as this repo is deprecated and being archived.