lithiumlab closed this issue 8 years ago
You're right that you need to call close(). Your problem is that you're iterating over the streams dict, which gets updated with each call to close(). So what you need to do is this:
stream_ids = list(connection.streams)
for stream_id in stream_ids:
    connection.streams[stream_id].close()
That way you avoid the issue.
As to EPIPE, you shouldn't ignore it: you should just stop processing once you hit it. EPIPE is raised when the remote peer has closed the connection but we haven't processed it yet, and instead tried to write to the socket. That means there's no point in doing graceful shutdown: the connection is gone anyway. That means that if EPIPE is raised, you should just consider your shutdown process complete (maybe logging that you couldn't finish graceful shutdown because the remote peer got in your face).
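The shutdown logic above can be sketched roughly like this (a minimal illustration, not the actual hyper API: `connection.streams` is assumed to be a dict of closable stream objects, and the function name is made up):

```python
import errno

def shutdown_gracefully(connection):
    """Close every stream; if the peer is already gone (EPIPE),
    consider shutdown complete rather than failing.

    Returns True if every stream was closed, False if the remote
    peer had already torn the connection down.
    """
    # Snapshot the keys first: the dict shrinks as streams close.
    for stream_id in list(connection.streams):
        try:
            connection.streams[stream_id].close()
        except OSError as e:
            if e.errno != errno.EPIPE:
                raise
            # The remote peer closed the connection under us; there is
            # nothing left to shut down gracefully, so stop here.
            return False
    return True
```

The key point is that EPIPE ends the loop but is not re-raised: the connection is gone either way, so shutdown is treated as done.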
The application doesn't require some of those request bodies to be read: it just accepts them and considers a 204 (No Content) response OK. So I'm only calling .read() on less than 50% of them.
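That split could look something like the sketch below. Everything here is hypothetical for illustration (`respond`, `routes_needing_body`, and a request object with `.path` and `.read()`); the real hyper request API differs.

```python
def respond(request, routes_needing_body):
    """Drain the request body only for routes that actually use it,
    and answer 204 No Content for the rest."""
    if request.path in routes_needing_body:
        body = request.read()   # body is consumed only on this path
        return 200, body
    return 204, b""             # body is never read for these routes
```

Note that with HTTP/2, bodies left unread can still occupy flow-control window, which may matter if many such requests pile up.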
[DEBUG] (Thread-10 ) recv for stream 27 with set([1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, <any-number-here>, 255 ]) already present
Something like this?
Getting this:
This is what I'm trying to avoid:
Any alternatives or suggestions?
It looks like https://github.com/Lukasa/hyper/issues/264 is related.
Another related question is:
This is done in the front-end by calling xhr.abort(), and then Python raises the error in the logs. How should I capture it as gracefully as possible and move on, or should I not try to? I'm using Flask on the frontend. The error in the logs doesn't seem to be generated directly by the app code, and I haven't worked enough with sockets and the inner parts of the request process to feel confident. Any suggestions on how to approach this? Nothing comes in the output with this:
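One common way to handle a client abort is to treat the resulting broken pipe as a normal early exit rather than an error. A minimal sketch, assuming a hypothetical `write` callable that pushes bytes to the socket (in Python 3, EPIPE surfaces as BrokenPipeError):

```python
def send_all(write, chunks):
    """Push chunks to the client, treating a client abort as a
    normal early exit.

    Returns True if everything was sent, False if the client
    went away mid-response.
    """
    try:
        for chunk in chunks:
            write(chunk)
        return True
    except BrokenPipeError:
        # The client aborted (e.g. via xhr.abort()); stop quietly
        # instead of letting the traceback reach the logs.
        return False
```

Whether this belongs in your code or in the server layer depends on where the write actually happens; if the traceback originates inside the WSGI server rather than the app, there may be nothing for the app to catch.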