Closed: puzrin closed this issue 8 years ago
Have a look at example - https://github.com/sidorares/node-mysql2/blob/master/examples/mysqlproxy.js
The server-side API is not documented very well; I'm happy to answer any questions (and maybe add the answers to the docs)
A server connection exposes a "query" event with the SQL text as a parameter. There is no built-in SQL parser, so you need to bring your own
An example project which does some SQL parsing: https://github.com/eugeneware/sql-engine (LevelDB with a MySQL frontend)
Note that the MySQL protocol is sequential: on a single connection, the next query can be performed only after the result of the previous one has been received. Unless clients keep opening new connections, you won't get many queries queued
Thanks for pointing the directions to dig.
Since sphinxsearch handles only very primitive requests (select/insert without nesting or transactions), a simple RPC with JSON serialization to a remote mysql2 proxy should solve the problem.
One more quick question: how stable is mysql2 1.0.0 RC11 for production? It would be interesting to migrate mysql -> mysql2. I've checked the "known issues" in the docs and the ticket about encoding. Anything else I should know?
Unfortunately I personally don't have experience using this library in a long-running production server environment; I mostly use it in one-off scripts / batch jobs. I know there are users at NetApp, maybe some other big names, not sure. Overall it's less battle-tested compared to mysql,
but 3k installs/day versus the rate of issues reported suggests it's more or less stable. In addition to the documented differences: timeouts are not supported (connection and command execution timeouts). Performance and memory consumption are usually better, especially on big result sets. I hope to declare it "production ready" by releasing 1.0.0 in about a week if nothing major is reported after rc11
Is it technically possible that the lack of timeouts will cause unbounded resource growth in the real world? I mean, if the SQL backend crashes and fails to restart, and so on.
For example, I have a job queue that should push new forum posts to the search engine (via the SQL interface). If an SQL request fails, that's OK, because a retry will happen after a pause. But if an SQL request hangs for an unpredictable time, that's not nice.
I don't mean cases where a heavy request needs a lot of time to complete - only a real disaster that should not mess up everything else.
@puzrin yeah, if you expect this might happen (the connection does not respond for a very long time but the TCP socket is alive), then 1) the connection command queue starts growing if you use the connection directly, or 2) the pool callback queue starts growing if no limit is set
It's still a useful feature and I'm keen to add it, but meanwhile you can do something like this:
pool.getConnection((err, conn) => {
  if (err) throw err;
  let timedOut = false;
  const timeout = setTimeout(() => {
    timedOut = true;
    conn.close(); // this should trigger removeFromPool - https://github.com/sidorares/node-mysql2/blob/b2010f99180d8b49eb7e9c2b6aeb8cb441362835/lib/pool_connection.js#L17-L21
  }, 30000);
  conn.query(sql, (err, res) => {
    clearTimeout(timeout);
    if (timedOut) return; // connection was already closed by the timeout
    conn.release();
    // use result
  });
});
@puzrin connectTimeout support landed in master and is going to be in 1.1.x soon
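For reference, the option goes into the connection config; a sketch (the host/user/database fields and the 10000 ms value are placeholders):

```javascript
// Connection config sketch: connectTimeout is given in milliseconds
// and bounds only the initial connect/handshake, not individual queries.
const config = {
  host: 'localhost',
  user: 'forum',
  database: 'search',
  connectTimeout: 10000, // give up if connecting takes longer than 10 s
};
```

The config would then be passed to `mysql.createConnection()` as usual.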
Thanks! We have already migrated to mysql2 in the forum converter. General impressions are:
This is not a feature request, just a question. I have sphinxsearch with an SQL interface, and I need to do a transparent search-index rebuild on config changes. The idea is to create a second daemon for pushing new data, but use the old one for search requests until reindexing completes, then reroute all "read" requests to the new daemon.
As far as I understand, it would be convenient to have a proxy that can properly route SQL requests and hide the details from the application.
Is it possible to do something like this with mysql2? I mean, it would be nice to serialize SQL requests to some intermediate format (like an AST in the DOM) instead of the binary protocol. Or can such a task (a proxy with intelligent routing) be implemented with a different approach?
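One possible shape for the routing decision, assuming plain keyword sniffing is enough for sphinxsearch's simple queries (the function name and the verb list are hypothetical, not part of any library):

```javascript
// Hypothetical routing helper for such a proxy: inspect the first SQL
// keyword and decide whether the query is a read (route to the old
// search daemon) or a write (route to the new one being reindexed).
function routeQuery(sql) {
  const verb = sql.trim().split(/\s+/)[0].toUpperCase();
  const reads = new Set(['SELECT', 'SHOW', 'DESCRIBE', 'EXPLAIN']);
  return reads.has(verb) ? 'read' : 'write';
}

console.log(routeQuery('SELECT * FROM posts WHERE id = 1')); // "read"
console.log(routeQuery('INSERT INTO posts VALUES (1)'));     // "write"
```

Keyword sniffing like this sidesteps full SQL parsing, which is usually acceptable only when the query shapes are as constrained as they are here.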