Thanks for the thoughtful report.
It seems that when the first 2 bytes of the header are 0xffff, the length is UNKNOWN and the actual data size has to be determined from the data that follows.
I was able to reproduce it with Derby.
I still don't know how to fix it. I'm trying to find the answer in the Derby source code; if anyone has any information, please share it.
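For reference, here is a minimal sketch of what checking for that marker could look like when parsing the 6-byte header. The field layout (2-byte length, 0xd0 magic byte, format byte, 2-byte correlation id) is only inferred from the hex dumps in this issue, and `recv_exact` / `read_dss_header` are hypothetical helper names, not the actual ddm.py code.

```python
import struct

DSS_CONTINUED_LEN = 0xFFFF  # observed marker: length unknown, reply is continued


def recv_exact(sock, n):
    """Read exactly n bytes from the socket (hypothetical helper)."""
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('connection closed while reading reply')
        buf += chunk
    return buf


def read_dss_header(sock):
    """Parse the 6-byte header: length, 0xd0 magic, format flags, correlation id."""
    length, magic, fmt, corr_id = struct.unpack('>HBBH', recv_exact(sock, 6))
    if magic != 0xD0:
        raise ValueError('unexpected magic byte 0x%02x' % magic)
    continued = (length == DSS_CONTINUED_LEN)
    return length, fmt, corr_id, continued
```

Applied to the headers reported in this issue, 7f7cd0530002 would come back as (32636, 0x53, 2, False) and ffffd0530002 as (65535, 0x53, 2, True), so the caller would at least know not to trust the 65535.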
Here is some additional detail with hex dumps. I think I understand what happens when the data (>32 KB or so) fits within two 32 KB chunks, but I haven't implemented anything yet. For larger replies, I need to understand how additional reads are triggered; a rough sketch of the read loop I have in mind is below. Note that in all the dumps, I emptied the socket -- so in the >64 KB case, the server is waiting for a trigger from us.
I'm not sure how to produce monospaced text here, so I've included the dumps as an attachment: notes.tgz
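For what it's worth, here is the rough shape of the read loop I have in mind. It assumes the continuation convention suggested by the dumps: a length of 0xffff means the current segment is a full 32767 bytes, and each following segment carries its own 2-byte length prefix, which is again 0xffff if yet another full segment follows. The names and the exact byte accounting are guesses, not verified against the DRDA spec or the Derby source.

```python
import struct

MAX_SEGMENT = 32767  # assumed size of one full continued segment, including its prefix


def _recv_exact(sock, n):
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('socket closed mid-reply')
        buf += chunk
    return buf


def read_reply_payload(sock):
    """Read one (possibly continued) reply and return its payload bytes."""
    header = _recv_exact(sock, 6)              # length, 0xd0, format, correlation id
    length = struct.unpack('>H', header[:2])[0]
    if length != 0xFFFF:
        # Simple case, e.g. the 32636 header: 32630 bytes follow.
        return _recv_exact(sock, length - 6)
    # Continued case (assumption): this first segment is a full MAX_SEGMENT bytes.
    payload = _recv_exact(sock, MAX_SEGMENT - 6)
    while True:
        next_len = struct.unpack('>H', _recv_exact(sock, 2))[0]
        if next_len == 0xFFFF:
            payload += _recv_exact(sock, MAX_SEGMENT - 2)   # another full segment
        else:
            payload += _recv_exact(sock, next_len - 2)      # final, shorter segment
            return payload
```

This only covers what is already sitting on the socket. In the >64 KB case, the socket is empty at that point, so presumably the client has to ask the server for the next batch first; that looks related to the CNTQRY handling mentioned later in this thread and is not covered by the sketch.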
Since the issue has not been resolved for Derby, I will leave this issue open as it is.
But since it has been resolved for DB2, I have released version 0.5.0 for now.
It now works with Derby too. My handling of the CNTQRY command is a bit of a mess, though, so I'd like to sort it out sometime.
I am connecting to a DB2 database and selecting N rows from a CUSTOMER table. For large N, this hangs in ddm.py:_recv_from_sock().
In my case the hang occurs when N > 122. With LIMIT 122, the 6-byte header is 7f7cd0530002; the first 2 bytes give the correct length of 32636, and _recv_from_sock() reads the additional 32630 bytes as it should. All rows are approximately the same size.
With LIMIT 123, the DDS header (6 bytes) is ffffd0530002. Treating the first 2 bytes as the length gives 65535, but that is incorrect, and _recv_from_sock() hangs after reading 33025 bytes.
Note that there is nothing in the data for row 123 that is causing the issue, because I can pull it by itself or in a group of rows with an appropriate WHERE clause.
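As a quick sanity check on the numbers above, the two headers decode like this (a throwaway snippet; the big-endian length / magic / format / correlation-id layout is an assumption):

```python
import struct

for hexhdr in ('7f7cd0530002', 'ffffd0530002'):
    length, magic, fmt, corr = struct.unpack('>HBBH', bytes.fromhex(hexhdr))
    print(hexhdr, length, hex(magic), hex(fmt), corr)
# 7f7cd0530002 32636 0xd0 0x53 2   -> plausible length, 32630 bytes of data follow
# ffffd0530002 65535 0xd0 0x53 2   -> 0xffff is not a real length here
```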