vechain / connex.driver-nodejs

Has been moved to https://github.com/vechain/connex/tree/master/packages/driver
GNU Lesser General Public License v3.0

Socket & Http timeouts #5

Closed · dvneverov closed this issue 4 years ago

dvneverov commented 4 years ago

Hi! I'm using "@vechain/connex.driver-nodejs": "^1.1.1" and "@vechain/connex-framework": "^1.1.2" to connect to a mainnet node. Sometimes I run into numerous timeout errors, for example:

headTracker(http): Error: timeout of 15000ms exceeded
headTracker(http): Error: socket hang up
headTracker(ws): Error: ws read timeout

All of them point to @vechain/connex.driver-nodejs/dist/simple-net.js or @vechain/connex.driver-nodejs/dist/simple-websocket-reader.js. The strangest thing is that they occur after the request has already executed successfully.

Here's the code that creates the Connex instance for requests:

const { SimpleNet, Driver } = require('@vechain/connex.driver-nodejs');
const { Framework } = require('@vechain/connex-framework');

const net = new SimpleNet(netAddr);
const driver = await Driver.connect(net);
const client = new Framework(driver).thor;

What can cause these timeouts?

qianbin commented 4 years ago

@dvneverov There's a background loop that tracks the best block over websocket, falling back to http polling. Usually these errors can be ignored.
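As a rough illustration of the strategy described above (this is not the driver's actual code, and subscribeWs/pollHttp are hypothetical stand-ins for the two transports), the ws-first-with-http-fallback shape looks like this:

```javascript
// Illustrative sketch only -- not the driver's real implementation.
// subscribeWs and pollHttp are hypothetical stand-ins for the transports.
async function trackHead(subscribeWs, pollHttp, onBlock) {
  try {
    // Preferred path: a long-lived websocket subscription to new heads.
    await subscribeWs(onBlock);
  } catch (err) {
    // The ws path failed (e.g. "ws read timeout"): fall back to http
    // polling instead of surfacing the error to every caller.
    return pollHttp(onBlock);
  }
}
```

This is why such errors can show up after a successful request: they come from the background tracker, not from the request itself.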

dvneverov commented 4 years ago

@dvneverov There's a background loop that tracks the best block over websocket, falling back to http polling. Usually these errors can be ignored.

But is there a way to avoid throwing these errors? It's very annoying when there are many connections, because the error log grows too fast.

qianbin commented 4 years ago

@dvneverov A global option, disableErrorLog, has been added; please check out the latest published package.

dvneverov commented 4 years ago

@qianbin Hi! Sorry for the late reply, we were testing this for a while. The main problem was that we were connecting to our vechain node on every request, creating new SimpleNet and Framework instances each time. disableErrorLog helped, but at one point we overloaded the node, because thousands of tcp connections were opened. So here are the questions: if we move the connection to a singleton (connect only once at service start) and the connection to the node is lost, will it try to reconnect? And is it possible to destroy the connection after each request?

qianbin commented 4 years ago

@dvneverov The latest release already enables the http keep-alive option, that is to say, tcp connections will be reused in the normal case. You can check your server side, or the reverse proxy config.

dvneverov commented 4 years ago

@qianbin We finally moved the connection to a singleton, and it works fine even if the node stops responding and then starts responding again. But if you open a new connection every time you need to interact with the blockchain, you can overload your node: previous connections are not closed after requests because of keep-alive. I think you should mention this somewhere, in the README for example.
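The singleton approach described here can be sketched generically. connectOnce below memoizes an async factory (a stand-in for the real SimpleNet/Driver/Framework setup, which is not shown) so the whole service shares one connection, while a failed connect attempt is forgotten so a later call can retry once the node recovers:

```javascript
// Generic sketch of the singleton approach; `connect` is a stand-in for the
// real setup (SimpleNet -> Driver.connect -> Framework). The memoized promise
// is shared by all callers.
function connectOnce(connect) {
  let cached = null;
  return function getClient() {
    if (!cached) {
      cached = connect().catch((err) => {
        cached = null; // drop the failed attempt so a later call reconnects
        throw err;
      });
    }
    return cached;
  };
}
```

A service would create getClient once at startup and have every request handler await it, so only the first call actually opens a connection.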

qianbin commented 4 years ago

@dvneverov Ah, I misunderstood your reply before last. Every Driver instance has a background thread to track the best block, so you need to explicitly invoke the close method to prevent a memory leak if your app creates/abandons Driver instances frequently. However, the singleton mode is recommended.
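For anyone who does create drivers per request, the shape of the fix described above is: always stop the background loop when you are done. FakeDriver below is a runnable stand-in that mimics "background loop plus close"; in real code, the Driver's close method plays the same role:

```javascript
// Runnable stand-in: FakeDriver mimics "background loop + close" so the
// pattern is self-contained. The real Driver's close() serves the same role.
class FakeDriver {
  constructor() {
    // Stands in for the background loop that tracks the best block.
    this.timer = setInterval(() => {}, 1000);
    this.closed = false;
  }
  close() {
    clearInterval(this.timer); // stop the loop so the instance can be freed
    this.closed = true;
  }
}

// If you do create a driver per request, always close it when done:
async function withDriver(makeDriver, fn) {
  const driver = makeDriver();
  try {
    return await fn(driver);
  } finally {
    driver.close(); // runs even when fn throws
  }
}
```

Without the close in finally, every abandoned instance keeps its loop alive, which is exactly the leak described above.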

dvneverov commented 4 years ago

@qianbin Yes, that's exactly what I was looking for, though it was quite hard to find this method in the source code. We also figured out the problem with our singleton implementation. Anyway, thanks. I think we can close the issue.