Open bjconlan opened 2 years ago
I can also verify db.end() does resolve the issue on the mysql2 end (but ideally I'd like a single connection for this behaviour, to prevent the temp table from being removed). In any case this looks to be upstream-related and not @database/mysql, but perhaps there are workarounds that smarter minds can solve at this library level.
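For reference, a minimal sketch of that single-connection approach using the mysql2 promise API directly (the pool config and the users table here are hypothetical). MySQL temporary tables only live as long as the connection that created them, so pooled queries can land on a connection that never sees the temp table:

```js
import mysql from 'mysql2/promise';

const pool = mysql.createPool({host: 'localhost', user: 'root', database: 'app'});

// Hold a single connection for the whole export so the temporary
// table survives between statements (it is dropped when its
// creating connection closes or goes back to the pool).
const conn = await pool.getConnection();
try {
  await conn.query('CREATE TEMPORARY TABLE export_users (id BIGINT PRIMARY KEY)');
  await conn.query('INSERT INTO export_users SELECT id FROM users WHERE active = 1');
  // ...run every export query on `conn`, never on `pool`...
} finally {
  conn.release(); // the temporary table is discarded here
}
await pool.end();
```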
I did just migrate this code over to use the mysql2-specific pool.query(...).stream() and am not seeing the issue. Perhaps this is something in regards to the pool cleanup code? (After moving to the mysql2 query directly, pool.end() seems to work as expected, replacing the @database/pg pool.dispose().)
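A minimal sketch of that migration, assuming the mysql2 callback-API pool (where query(...).stream() returns a readable object stream of rows); the CSV transform and file names are hypothetical:

```js
import fs from 'fs';
import mysql from 'mysql2';
import {Transform} from 'stream';
import {pipeline} from 'stream/promises';

const pool = mysql.createPool({host: 'localhost', user: 'root', database: 'app'});

// Hypothetical row-object-to-CSV-line transform (no header row).
const toCsv = new Transform({
  objectMode: true,
  transform(row, _enc, cb) {
    cb(null, Object.values(row).join(',') + '\n');
  },
});

await pipeline(
  pool.query('SELECT * FROM users').stream(),
  toCsv,
  fs.createWriteStream('users.csv'),
);

pool.end(); // closes the pool's connections once the dump has finished
```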
If you're able to submit a PR with a failing test (or better yet, a newly passing test along with the bug fix), I'd be happy to have a look at this. I'm not sure I have the bandwidth to address this without that.
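For what it's worth, a sketch of the shape such a failing test might take (Jest-style; the connection string and users table are hypothetical, while queryNodeStream, sql, and dispose are the @databases APIs already used in this issue):

```js
import createConnectionPool, {sql} from '@databases/mysql';
import {Writable} from 'stream';
import {pipeline} from 'stream/promises';

test('sequential queryNodeStream pipelines return every row', async () => {
  const db = createConnectionPool(process.env.DATABASE_URL);
  try {
    const [{n: expected}] = await db.query(sql`SELECT COUNT(*) AS n FROM users`);

    // Run the same streaming dump twice; the report above is that
    // the second sequential pipeline silently comes back short.
    for (let run = 0; run < 2; run++) {
      let seen = 0;
      await pipeline(
        db.queryNodeStream(sql`SELECT * FROM users`),
        new Writable({
          objectMode: true,
          write(_row, _enc, cb) {
            seen++;
            cb();
          },
        }),
      );
      expect(seen).toBe(Number(expected));
    }
  } finally {
    await db.dispose();
  }
});
```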
I'm currently dumping SQL database tables using queryNodeStream and I'm seeing inconsistent results (perhaps the 'end' chunk sent by the mysql2 driver isn't being handled correctly?).
In any case, this is an example of the shape of the query (note: 'export_users' is a temporary table which just holds the ids of the users who need to be dumped, hence the single connection). NOTE: this is the second node pipeline call, and the non-streaming query is just used to validate the numbers, which the results reflect.
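A minimal sketch of the shape being described, assuming @databases/mysql's queryNodeStream and hypothetical users/accounts table and file names (the export_users temp-table join comes from the description above):

```js
import fs from 'fs';
import createConnectionPool, {sql} from '@databases/mysql';
import {Transform} from 'stream';
import {pipeline} from 'stream/promises';

const db = createConnectionPool(process.env.DATABASE_URL);

// Hypothetical row-to-CSV transform shared by both dumps.
const toCsv = () =>
  new Transform({
    objectMode: true,
    transform(row, _enc, cb) {
      cb(null, Object.values(row).join(',') + '\n');
    },
  });

// Non-streaming query used purely to validate the expected row count.
const [{n}] = await db.query(
  sql`SELECT COUNT(*) AS n FROM users u JOIN export_users e ON e.id = u.id`,
);
console.log('expected rows:', n);

// 1st pipeline call: reportedly dumps every row.
await pipeline(
  db.queryNodeStream(
    sql`SELECT a.* FROM accounts a JOIN export_users e ON e.id = a.user_id`,
  ),
  toCsv(),
  fs.createWriteStream('accounts.csv'),
);

// 2nd, sequential pipeline call: the one that comes back short.
await pipeline(
  db.queryNodeStream(sql`SELECT u.* FROM users u JOIN export_users e ON e.id = u.id`),
  toCsv(),
  fs.createWriteStream('users.csv'),
);

await db.dispose();
```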
And the dumped CSV results only show 1792 result rows (1793 including the CSV header), which is exactly the number of results missing when compared against the 'query'-based result.
NOTE: the 1st pipeline dump executes successfully (only sequential calls seem to have this problem, and it looks like there are many open issues in mysql2 regarding this).