NodeRedis / node-redis-parser

A high performance Redis protocol (RESP) parser for JavaScript. Used by Node Redis & ioredis.

v.2.0.1 proposal #5

Closed — BridgeAR closed this 8 years ago

BridgeAR commented 8 years ago

The bufferPool is not working correctly at the moment. The issue is that slice does not copy the buffer but only returns a reference into it.
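For context, a minimal sketch of the Buffer slice behaviour in Node.js (not the parser's actual code): a slice is a view over the same memory, so reusing the pool overwrites data that was previously "saved" via slice, while an explicit copy stays intact.

```js
const pool = Buffer.alloc(16);

pool.write('PING');
const view = pool.slice(0, 4);              // no copy, just a view into pool
const copy = Buffer.from(pool.slice(0, 4)); // an actual copy of those bytes

pool.write('QUIT');                         // the shared pool gets reused
console.log(view.toString());               // 'QUIT' – the view changed too
console.log(copy.toString());               // 'PING' – the copy is unaffected
```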

Using an individual bufferPool per parser is not an option, as it would increase memory usage significantly in case someone uses many clients (there are people who reach the maximum client limit, and in such a case this would mean huge memory usage).

Instead, I'm thinking of copying the part that we need for the next chunk (see the sketch below). @Salakar can you think of any other solution?
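A rough sketch of that idea, with hypothetical names rather than the parser's actual code: when a reply is split across chunks, copy the unconsumed tail out of the shared pool before the pool gets reused.

```js
// Hypothetical helper: keep a private copy of the unconsumed tail instead of
// a slice that points into the shared buffer pool.
function rememberRemainder (parser, chunk, offset) {
  const remainder = Buffer.allocUnsafe(chunk.length - offset);
  chunk.copy(remainder, 0, offset); // real copy, so later pool reuse can't clobber it
  parser.buffer = remainder;        // safe to concat with the next chunk later
}
```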

BridgeAR commented 8 years ago

@Salakar would you mind looking at the code again? I don't like my fix (and the code is ugly) but I have the feeling it's pretty much the best solution in this case. Maybe you could come up with a nicer way to write this though?

jbt commented 8 years ago

This definitely seems to fix up the issues I was seeing over on gosquared/redis-clustr#10, nice one :+1:

Salakar commented 8 years ago

@BridgeAR I can't think of a better way at the moment. A per-parser pool would probably have been the best solution performance-wise, but as you say the memory usage would be high for people using a large number of clients.

In fairness, no one should really need to max out the number of clients; at most it should be 3-4: one client for commands, one for pub, one for sub, and maybe a couple of blocking clients. I've seen a lot of repos open high numbers of clients to the same Redis server in the hope of "increasing performance", but considering most Redis ops are atomic and the parser is synchronous and blocking, this makes no difference in performance, if it doesn't make it worse. But that's not our issue... rant over haha 😅

Does this hit the perf hard then? I imagine so.

BridgeAR commented 8 years ago

@Salakar in real-world code this does not have a huge impact, since incomplete data is not the common case and we only copy minimal data. There are ways to optimize this penalty away in almost all cases, but it's more complex and currently I do not have the time to improve this further.