dadepo opened this issue 3 years ago
It looks like this error is somewhat related to workers. I have a script that tweets stuff, which works perfectly fine when I run it on its own. But when I move the logic into a function and have it called within a worker, it fails with the following error:
```
error: Uncaught (in worker "") [object Array]
error: Uncaught (in promise) Error: Unhandled error event reached main worker.
    at Worker.#poll (deno:runtime/js/11_workers.js:288:21)
```
Thought I'd mention this.
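Roughly, the setup looks like this (file names and the thrown value are illustrative, not my actual tweeting code):

```ts
// main.ts — spawns the worker; this is where the generic error shows up.
const worker = new Worker(new URL("./tweet_worker.ts", import.meta.url).href, {
  type: "module",
});
worker.postMessage({ kind: "tweet" });
```

```ts
// tweet_worker.ts — the real exception happens in here.
self.onmessage = (_e) => {
  // Stand-in for the actual logic; throwing an array reproduces the
  // "[object Array]" seen in the error output above.
  throw ["some", "api", "error"];
};
```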
Okay, so it seems this is not specifically about running the code in a worker. Poking around a bit, I realised the code was actually throwing an exception, but that exception never surfaces when it occurs within the context of a worker; only a generic exception about an unhandled error is surfaced.
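For anyone else hitting this, one way to get at the underlying exception is to catch it inside the worker and post it back, and to handle the worker's error event on the main side. A rough sketch, using the worker from the snippet above and a hypothetical `doTweet` standing in for the real logic:

```ts
// In the worker: catch the real exception and send it back as plain data.
self.onmessage = async (e) => {
  try {
    await doTweet(e.data); // hypothetical: whatever the worker actually does
  } catch (err) {
    self.postMessage({ error: String(err) });
  }
};
```

```ts
// In the main script: surface error events instead of letting them bubble up
// as the generic "Unhandled error event reached main worker".
worker.onerror = (e) => {
  console.error("worker error:", e.message);
  e.preventDefault();
};
worker.onmessage = (e) => {
  if (e.data?.error) console.error("worker reported:", e.data.error);
};
```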
~I am seeing something similar: https://github.com/denoland/deno/issues/20169~ False alarm
I ran into this issue, but was not using any workers. I was making a lot of `fetch()` requests to some endpoints which responded with decently sized responses (a couple of MB per response). I was not doing anything with the body though; I was only checking the status code.
In a completely unrelated piece of code I was fetching JSON data with a response of not even 1 kB. That's where I was seeing the `error reading a body from connection` errors. The promise from `response.json()` would hang for a very long time before finally rejecting with that error. In the end, what fixed it for me was to cache the unrelated frequent requests and to properly use an `AbortController` to cancel the request after I had gotten the status code.
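Roughly what that looks like, in case it helps (the URLs and cache shape are just for illustration):

```ts
// Simple in-memory cache for the small, frequent JSON requests.
const jsonCache = new Map<string, unknown>();

async function fetchJsonCached(url: string): Promise<unknown> {
  if (jsonCache.has(url)) return jsonCache.get(url);
  const resp = await fetch(url);
  const data = await resp.json();
  jsonCache.set(url, data);
  return data;
}

// For the big responses where only the status matters: abort the request once
// the headers are in, so the multi-MB body is not left hanging on the connection.
async function checkStatus(url: string): Promise<number> {
  const controller = new AbortController();
  const resp = await fetch(url, { signal: controller.signal });
  const status = resp.status;
  controller.abort(); // cancels the body download; resp.body?.cancel() also works
  return status;
}
```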
Anyway, I hope this helps someone; in my case it just meant I was downloading too much data.
I have a Deno app running within a container that keeps crashing after some time with the following:
As can be seen, there is no mention of application code in the trace, so it is difficult to troubleshoot what might be causing this.
The application makes use of workers and is first bundled up via `deno bundle` before deploying. I suspect the error is related to the workers, as traces of this can be seen in the stack trace. This was the case while running with version `1.8.2` and also after I upgraded to version `1.9.0`.