codesandbox / codesandbox-client

An online IDE for rapid web development
https://codesandbox.io

As I develop an http server, I have to wait until the next day to try again #6100

Open NickCarducci opened 3 years ago

NickCarducci commented 3 years ago

Can you help me call http.createServer more than once a day? While developing, I may have started the server without calling end(). I gather I need to send a new request from the terminal or programmatically, or possibly run kill -9 8080, but that isn't a recognized command here. Sometimes it seems to retry the request regardless of whether I use nodemon or node, when I would rather it didn't, so I can debug!

I have logs in my server that show when the server runs / is created; when it fails, it won't call http.createServer again until the next day.
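For context, here is a minimal sketch (my own illustration, not the sandbox code) of how the server could handle the two failure modes I keep running into: the port already being in use by a stuck process, and the process needing a clean shutdown before a restart:

const http = require("http");

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("ok"); // always end the response, or the client keeps waiting
});

server.on("error", (err) => {
  if (err.code === "EADDRINUSE") {
    // A previous (possibly stuck) process is still bound to the port.
    console.log("port 8080 is already in use; kill the old process first");
  } else {
    console.log(err.message);
  }
});

// Let nodemon / "Restart Server" shut the process down cleanly.
process.on("SIGTERM", () => server.close(() => process.exit(0)));

server.listen(8080);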

I am trying to make a React server with Rollup, without webpack or express. I've made the bundled scripts.js and the respective index.html, and am experimenting with both createReadStream and readFileSync. Whenever it fails to write/end the readFileSync response, or to pipe the Buffer.concat(chunks) from the readable file back to the client, it won't run http.createServer again until the next day. Neither restarting the sandbox nor the server, nor pressing the + for /bin/bash and running rm -rf scripts.js && rm -rf scripts.js.map && rm -rf .cache && rm -rf node_modules && rm -rf yarn-error.log && rm -rf yarn.lock && yarn, allows http.createServer to be called again. Morning seems to be the time it starts working again, though once or twice I was able to retry at night. This happens with nodemon src/index.js as well as with node src/index.js.

https://codesandbox.io/s/dark-firefly-yd9vi?file=/netlify/functions/src/index.js

const http = require("http");
// ... routeOrChunk (the request handler) is defined here; elided for brevity
const server = http.createServer();
server.on("error", (e) => console.log(e.message)); // log server errors, e.g. EADDRINUSE
server.on("request", routeOrChunk); // handle each incoming request
server.listen(8080);
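routeOrChunk itself isn't shown above, so here is a hypothetical sketch (the routing and names are my own stand-ins, not the sandbox's code) of a handler that serves index.html either with readFileSync or with createReadStream, while making sure the response is always ended:

const fs = require("fs");
const path = require("path");

// Hypothetical stand-in for routeOrChunk.
function routeOrChunk(req, res) {
  const file = path.join(__dirname, "index.html");
  try {
    if (req.url === "/sync") {
      // Read the whole file into memory, then send it.
      const html = fs.readFileSync(file);
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(html); // end() with the body so the request actually finishes
    } else {
      // Stream the file; pipe() ends the response when the stream finishes.
      res.writeHead(200, { "Content-Type": "text/html" });
      const stream = fs.createReadStream(file);
      stream.on("error", (err) => {
        console.log(err.message);
        res.end(); // still end the response if the read fails mid-stream
      });
      stream.pipe(res);
    }
  } catch (err) {
    console.log(err.message);
    res.statusCode = 500;
    res.end(); // never leave the client hanging on a handler error
  }
}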

I wrote about the need for this example on Rollup's GitHub here: https://github.com/rollup/rollup/issues/4075#issuecomment-904004146

lbogdan commented 3 years ago

Hey @NickCarducci,

Thanks for your report! Can you please provide a list of exact steps to reproduce, what the observed behavior is, and what the expected one is?

NickCarducci commented 3 years ago

Tries at reproducing: Thanks @lbogdan. I tried, as I have over the past few days to a week, but it was hard to isolate reproduction steps for the state where it won't reach inside the request handler again until the next morning. The only sure thing is the kind of process that causes the bug: calling http.createServer with a coding error inside the request handler; it may even retry on another port. I cannot reproduce the bug just by not calling res.end() (it should console.log("inside request"), which it does in this example), so I guess it is something else. Why, for instance, did this fellow seem to wait the same amount of time with an error I often got during this development process?

Where I am at now: Fortunately, though not for this diagnosis, the linked SSE-without-webpack example is finally set up appropriately with regard to this issue and is successfully able to send "Content-Type": "text/html" to the client on a random port; here are my examples for parcel and add-react-to-a-website, which I am now working on beyond this issue, too, to load the script before sending the HTML.

Next steps for this issue specifically: Is this the local computer's responsibility, i.e. is the process timeout we seem to be encountering, and the wait for it to clean itself up (or a shutdown; I think that is what I did to make it work late one evening), created by our local device? The problem absolutely stopped happening after completing the request, shutting down, or waiting until morning, but that is only a correlation, not a controlled positive: I still don't know what I did inside the createServer request handler that caused a new request to not be listened for, or even not be sent at all.

Feature: Maybe a button to clean up some unwritten something on our local devices, to mimic shutting down, so we can retry our servers. I guess I should look at your open-source client code myself to see what is causing this; maybe a cache!

Expectations: I might be selfish and move on, but I imagine guess-and-checkers like me hit this timeout when they mess up the request handler as well. I would expect it to be reset by restarting the server and/or the sandbox. It certainly isn't checking for redundancy, because the same code is run / listened for again in the morning.
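As an illustration of the kind of cleanup I mean, here is a sketch (assuming only the stock Node http API, nothing CodeSandbox-specific) of the server bounding how long a broken request is allowed to hang instead of leaving it open indefinitely:

const http = require("http");

const server = http.createServer((req, res) => {
  // Simulated handler bug: never calling res.end().
  console.log("inside request");
});

// Bound how long a request/connection can stay open.
server.headersTimeout = 10000;  // max ms to receive the request headers
server.requestTimeout = 30000;  // max ms to receive the whole request (Node >= 14.11)
server.setTimeout(30000, (socket) => {
  console.log("request timed out, destroying socket");
  socket.destroy();
});

server.listen(8080, () => console.log("listening on 8080"));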

FossPrime commented 3 years ago

Have you tried pkill -f "yarn" to end the node process? I've had the Restart Server button fail to restart the server due to a stuck process. Running kill -9 on the culprit PID worked to get it unstuck.

NickCarducci commented 3 years ago

I haven't; by the way, I've formed the habit of running Restart Sandbox and then Restart Server, one after the other.

No, I couldn't reproduce the issue on demand, and the working sandbox I originally found the problem in is now happy; it wasn't just failing to end the request with res.end(), for instance. So I am not sure whether I should close this issue, especially since this is the second time I've experienced it; the other time was in a front-end React sandbox (though I seem to recall it not taking until the next morning there, after putting an IIFE between render and return). I'm moving forward assuming it can be chalked up to a local http.createServer problem that set off the need to wait for either (1) a "timeout clear" on my computer or (2) a "timeout clear" somewhere in the codesandbox-client product, with a preference for (1).

FossPrime commented 3 years ago

Update... the plot thickens... it appears we can somehow get into situations where we have dark bytes (disk usage that doesn't show up in the project files) in our /sandbox...

Here's a normal du -hd1 of the koajs-example-starter

sandbox@sse-sandbox-3dygk:/sandbox$ du -hd 1
5.9M    ./node_modules
6.0M    .

Here's a server SAML authentication project I've been working on for a couple of weeks:

sandbox@sse-sandbox-bnvht:/sandbox$ du -hd1
84K     ./src
4.0K    ./.codesandbox
36K     ./config
176K    ./public
12K     ./test
4.0K    ./trash
24K     ./modules
12K     ./docs
813M    .

The culprit seems to be one huge hidden file called core (presumably a core dump left behind by a crashed process). This file is not in the Koa project... curious. I deleted it and I'm back to normal.

-rw-------  1 sandbox sandbox 812M Sep 29 11:20 core

I had a similar issue yesterday after using a boilerplate generator. I just deleted the sandbox, as I gave up trying to fix it. It seemed like package.json was possessed: I couldn't save it or change it, it would fail to sync and would open on its own... restarting didn't help. I could also have simply mangled the container beyond repair. Copying everything to a new container worked.

Today I noticed I seem to be hitting the 1016MiB limit on the size of /sandbox. You can check whether that's your issue by running df -h in the terminal. Deleting node_modules seems to have helped... just keep an eye on it as you install dependencies; switching dependency versions can leave artifacts behind and inflate its size.
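If you'd rather check from Node than the terminal, here's a small script (my own sketch, assuming the project lives at /sandbox) that mirrors du -hd1 and makes an oversized stray file like core easy to spot:

const fs = require("fs");
const path = require("path");

// Recursively sum file sizes under a directory (symlinks skipped).
function dirSize(dir) {
  let total = 0;
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    try {
      if (entry.isSymbolicLink()) continue;
      if (entry.isDirectory()) total += dirSize(full);
      else if (entry.isFile()) total += fs.statSync(full).size;
    } catch (err) {
      // Skip anything we can't read.
    }
  }
  return total;
}

const root = "/sandbox"; // adjust if your project lives elsewhere
for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
  const full = path.join(root, entry.name);
  const size = entry.isDirectory() ? dirSize(full) : fs.statSync(full).size;
  console.log(`${(size / 1024 / 1024).toFixed(1)}M\t${entry.name}`);
}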