Just in case, I would note that if you reverse the order of the destroys, ending like this:

```js
await node1.destroy() // server
await node2.destroy() // clients
```
If you wait a few seconds, it gives you this:
```
/home/lucas/Desktop/aa/node_modules/udx-native/lib/stream.js:262
    const err = new Error(msg)
                ^

Error: ETIMEDOUT: stream timed out
    at UDXStream._onclose (/home/lucas/Desktop/aa/node_modules/udx-native/lib/stream.js:262:17)
Emitted 'error' event on NoiseSecretStream instance at:
    at WritableState.afterDestroy (/home/lucas/Desktop/aa/node_modules/streamx/index.js:444:19)
    at NoiseSecretStream._destroy (/home/lucas/Desktop/aa/node_modules/@hyperswarm/secret-stream/index.js:463:5)
    at WritableState.updateNonPrimary (/home/lucas/Desktop/aa/node_modules/streamx/index.js:189:16)
    at WritableState.update (/home/lucas/Desktop/aa/node_modules/streamx/index.js:174:70)
    at WritableState.updateWriteNT (/home/lucas/Desktop/aa/node_modules/streamx/index.js:484:8)
    at processTicksAndRejections (node:internal/process/task_queues:78:11) {
  errno: -110,
  code: 'ETIMEDOUT'
}
```
I think this could actually be correct behaviour: if you destroy `node1` (the server) first, it makes sense for a socket from `node2` (the client) to time out. That's why `node2` is originally destroyed first, but in v6 that hangs, as mentioned above.
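For reference, the overall shape of the repro is roughly this (a minimal sketch assuming the gist's structure; the `DHT.keyPair()` setup and the server handler are illustrative):

```js
const DHT = require('@hyperswarm/dht')

async function main () {
  // server
  const node1 = new DHT()
  const keyPair = DHT.keyPair()
  const server = node1.createServer(socket => socket.end())
  await server.listen(keyPair)

  // clients
  const node2 = new DHT()
  for (let i = 0; i < 3; i++) {
    const time = Date.now()
    const socket = node2.connect(keyPair.publicKey)
    await new Promise(resolve => socket.once('open', resolve))
    console.log('opened in', Date.now() - time, 'ms')
  }

  await node2.destroy() // clients (this is where v6 hangs)
  await node1.destroy() // server
}

main()
```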
If you add `socket.destroy()` at the end of the loop, like this:

```js
// clients
for (let i = 0; i < 3; i++) {
  // ...
  console.log('opened in', Date.now() - time, 'ms')
  socket.destroy()
}
```
In v5 this works, but v6 gives:
```
opened in 1605 ms
/home/lucas/Desktop/aa/node_modules/udx-native/lib/stream.js:262
    const err = new Error(msg)
                ^

Error: ECONNRESET: stream destroyed by remote
    at UDXStream._onclose (/home/lucas/Desktop/aa/node_modules/udx-native/lib/stream.js:262:17)
Emitted 'error' event on NoiseSecretStream instance at:
    at WritableState.afterDestroy (/home/lucas/Desktop/aa/node_modules/streamx/index.js:444:19)
    at NoiseSecretStream._destroy (/home/lucas/Desktop/aa/node_modules/@hyperswarm/secret-stream/index.js:463:5)
    at WritableState.updateNonPrimary (/home/lucas/Desktop/aa/node_modules/streamx/index.js:189:16)
    at WritableState.update (/home/lucas/Desktop/aa/node_modules/streamx/index.js:174:70)
    at WritableState.updateWriteNT (/home/lucas/Desktop/aa/node_modules/streamx/index.js:484:8)
    at processTicksAndRejections (node:internal/process/task_queues:78:11) {
  errno: -104,
  code: 'ECONNRESET'
}
```
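As a side note, both crashes are unhandled `'error'` events on the `NoiseSecretStream`, so one workaround sketch (it only stops the process from crashing, it does not address why v6 behaves differently) is to swallow the error and wait for the socket to fully close before moving on:

```js
socket.on('error', () => {
  // ignore: the stream is being torn down anyway
})
socket.destroy()
// wait until the destroy has fully propagated
await new Promise(resolve => socket.once('close', resolve))
```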
I made this code back when internal connection caching was implemented: https://gist.github.com/LuKks/6acc2b9eb0420ff70a5373fa1289122f
Small conversation here: https://discord.com/channels/709519409932140575/709522119335346196/949735178010501190
Using v6, the `destroy()` part hangs, while using v5 it works. Same gist, with:

```sh
npm i @hyperswarm/dht@5
npm i @hyperswarm/dht@6
```
Besides this behaviour, I was also trying to check whether all connections are now internally cached, including local ones.
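The only probe I know of for that is timing-based (a sketch reusing the names from the repro above, assuming no public API exposes the cache; it just compares a cold connect against an immediate reconnect to the same key, inside the repro's `main()`):

```js
// hypothetical helper: times how long a connect takes to open
async function timedConnect (node, publicKey) {
  const start = Date.now()
  const socket = node.connect(publicKey)
  await new Promise(resolve => socket.once('open', resolve))
  return { socket, ms: Date.now() - start }
}

const first = await timedConnect(node2, keyPair.publicKey)
const second = await timedConnect(node2, keyPair.publicKey)
// If the second open is dramatically faster, the underlying
// connection was presumably reused from the internal cache.
console.log('first:', first.ms, 'ms, second:', second.ms, 'ms')
```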