gsstoykov opened 1 week ago
Could be due to limited resources. I tried with 4 nodes and 8 nodes on my 64 GB MacBook and am not seeing any failures yet.
Here is the Docker configuration:
I just tried with 13 nodes, and now I can reproduce the error:
↓ Enable port forwarding for JVM debugger
❯ Check nodes are ACTIVE
⠼ Check network pod: node1 - status TIMEOUT, attempt 87/120
⠼ Check network pod: node2 - status TIMEOUT, attempt 88/120
✔ Check network pod: node3 - status ACTIVE, attempt: 24/120 [35s]
⠼ Check network pod: node4 - status TIMEOUT, attempt 88/120
✔ Check network pod: node5 - status ACTIVE, attempt: 23/120 [35s]
⠼ Check network pod: node6 - status TIMEOUT, attempt 89/120
✔ Check network pod: node7 - status ACTIVE, attempt: 23/120 [36s]
⠼ Check network pod: node8 - status TIMEOUT, attempt 88/120
⠼ Check network pod: node9 - status TIMEOUT, attempt 89/120
⠼ Check network pod: node10 - status TIMEOUT, attempt 87/120
⠼ Check network pod: node11 - status TIMEOUT, attempt 86/120
⠼ Check network pod: node12 - status TIMEOUT, attempt 86/120
⠼ Check network pod: node13 - status TIMEOUT, attempt 87/120
◼ Check node proxies are ACTIVE
◼ Add node stakes
node:internal/process/promises:289
triggerUncaughtException(err, true /* fromPromise */);
^
[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason "#<ErrorEvent>".] {
code: 'ERR_UNHANDLED_REJECTION'
}
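For context: the ERR_UNHANDLED_REJECTION at the end is Node.js's generic response to a promise that rejects without any handler attached, not something specific to Solo or to the resource shortage. Here is a minimal sketch (hypothetical names, not Solo's actual code) of how a bounded status poll like the one in the log can end in this crash, and how a top-level catch turns it into an orderly failure instead:

```ts
// Hypothetical sketch of a pod status poll; names are illustrative only.

// Stand-in for the real status probe; in Solo this would query Kubernetes.
async function checkPodActive(pod: string): Promise<boolean> {
  return false; // always times out, to demonstrate the failure path
}

async function waitForPodActive(
  pod: string,
  maxAttempts = 120,
  intervalMs = 1000,
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (await checkPodActive(pod)) return;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  // If no caller ever attaches a handler, Node.js aborts here with
  // ERR_UNHANDLED_REJECTION, exactly as in the log above.
  throw new Error(`pod ${pod} did not reach ACTIVE after ${maxAttempts} attempts`);
}

// Guarded entry point: attaching .catch() converts the crash into a clean exit.
waitForPodActive('node1').catch(err => {
  console.error(err.message);
  process.exitCode = 1;
});
```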
So can we say that this error is expected?
Yes, if the host machine has limited resources.
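For anyone else hitting this: one way to confirm it is resource starvation is to check which pods never left Pending and read their scheduling events, where Kubernetes reports things like Insufficient cpu or Insufficient memory. A rough diagnostic sketch (assumes kubectl is on PATH; the namespace name is a placeholder, substitute whatever namespace Solo deployed into):

```ts
// List pods in the target namespace and dump events for any not Running.
import { execSync } from 'node:child_process';

const namespace = 'solo'; // placeholder; use your actual deployment namespace

const run = (cmd: string): string => execSync(cmd, { encoding: 'utf8' });

const lines = run(`kubectl get pods -n ${namespace} --no-headers`)
  .trim()
  .split('\n');

for (const line of lines) {
  // Columns: NAME READY STATUS RESTARTS AGE
  const [name, , status] = line.split(/\s+/);
  if (status !== 'Running') {
    console.log(`--- ${name} (${status}) ---`);
    // The Events section reports scheduling failures such as "Insufficient memory".
    console.log(run(`kubectl describe pod ${name} -n ${namespace}`));
  }
}
```

The same information is available directly from `kubectl get pods` and `kubectl describe pod`; the script just filters it down to the pods that failed to schedule.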
Describe the bug
It looks like the chance of a failed start increases with the node count.
To Reproduce
Describe the expected behavior
I would expect the solo nodes to start and run successfully.
Whole JUnit/CLI Logs
Additional Context
No response