Open bdarnell opened 11 years ago
Hi, this doesn't seem to be resolved in 3.2 ... correct? Still experiencing:
```
...tornado/iostream.py", line 478, in _read_to_buffer
    raise IOError("Reached maximum read buffer size")
IOError: Reached maximum read buffer size
```
@AnthonyNystrom Do you have a way to reproduce the problem? Is the error you pasted the whole stack trace? Can you provide more context?
For the bugs you referenced to be relevant, you would have to be explicitly closing your amqp connections during the lifetime of the process, which is somewhat unidiomatic for an amqp client.
This is nonetheless a bug that needs to be fixed on the stormed or tornado side.
@AnthonyNystrom You are correct that this issue has not been fixed as of Tornado 3.2. However, this issue has nothing to do with the "maximum read buffer size" exception. That exception is more likely to be related to issue #772, which was just fixed in the master branch but was still a problem in version 3.2.
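For context, a minimal sketch of where that limit lives (my illustration, not from this thread): in Tornado 3.x the `IOStream` constructor takes a `max_buffer_size` argument, and the `IOError` above is raised once unconsumed buffered data exceeds it, so the cap can be raised when the stream is constructed.

```python
# Sketch, not from this thread: raising the read-buffer cap in Tornado 3.x.
# IOStream raises IOError("Reached maximum read buffer size") once buffered,
# unconsumed data exceeds max_buffer_size (the 3.x default is ~100 MB).
import socket

from tornado.iostream import IOStream

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
stream = IOStream(sock, max_buffer_size=256 * 1024 * 1024)  # 256 MB cap
```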
Commit c43996288e52a8f68728164f9284cf286d38543a clears the IOStream write buffer when the stream is closed. We should clear the read buffer as well, but there are currently tests that fail when that is done.
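As an illustration of the shape of that change, here is a hypothetical, simplified class (not Tornado's actual implementation; only the attribute names mirror `IOStream`'s internals):

```python
import collections

class BufferedStream(object):
    """Hypothetical sketch; not Tornado's real IOStream."""

    def __init__(self):
        self._read_buffer = collections.deque()
        self._write_buffer = collections.deque()
        self.closed = False

    def close(self):
        if self.closed:
            return
        self.closed = True
        # What the referenced commit does: drop pending writes so large
        # buffers are released as soon as the stream closes.
        self._write_buffer.clear()
        # What we would like to do as well, but some existing tests still
        # read buffered data after close, so clearing here breaks them:
        # self._read_buffer.clear()
```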
The issue is that we allow reads from buffered data after the underlying connection is closed. As long as one read leads directly (and synchronously) to another, the close callback is delayed. When there is a gap between two reads, the close callback runs in that gap, but we have some tests that rely on buffered data still being readable afterward. (We could perhaps declare those tests incorrect, but we would need to offer applications some way to signal an intent to read when they need the buffered data preserved.)
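A runnable sketch of the chained-read case (my illustration, assuming Tornado 3.x callback-style APIs; the socketpair and byte counts are invented for the demo):

```python
import socket

from tornado.ioloop import IOLoop
from tornado.iostream import IOStream

left, right = socket.socketpair()
writer, reader = IOStream(left), IOStream(right)
remaining = [4]  # total bytes we expect to read

def on_close():
    print("close callback ran (after the buffered data was consumed)")
    IOLoop.instance().stop()

def on_chunk(data):
    print("read: %r" % data)
    remaining[0] -= len(data)
    if remaining[0] > 0:
        # Chaining the next read synchronously from this callback delays
        # the close callback, per the behavior described above.
        reader.read_bytes(2, on_chunk)

reader.set_close_callback(on_close)
writer.write(b"wxyz")
writer.close()                  # the connection is now closed...
reader.read_bytes(2, on_chunk)  # ...but buffered bytes are still readable
IOLoop.instance().start()
```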
There is additional discussion in #747, but note that the solution I mention there (acting as if there is a pending read when the buffer is non-empty but no read_callback is set) won't work, because there is no guarantee that a read will come later (consider HTTPServer while a request is in flight).
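To make the HTTPServer example concrete, a hypothetical handler of my own (Tornado 3.x style; not code from the issue):

```python
import tornado.ioloop
import tornado.web

class InFlightHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        # While this request is in flight, HTTPServer schedules no read on
        # the connection's IOStream; it will not read again until finish()
        # is called. So a non-empty read buffer with no read_callback set
        # does not imply a future read, and treating it as a pending read
        # would suppress the close callback for this entire window.
        io_loop = tornado.ioloop.IOLoop.instance()
        io_loop.add_timeout(io_loop.time() + 10, self.finish)
```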