Closed: ouvaa closed this issue 3 months ago
feel free to open issues. what do you mean exactly by "pipelining"? or just tell me what you want to implement; I need the details.
https://redis.io/docs/manual/pipelining/ https://github.com/redis/rueidis?tab=readme-ov-file#pipelining
the Go Redis TCP client is very inefficient. I would like to write an extremely simple TCP TLS client-to-server transfer as a proxy / reverse proxy, implementing both the proxy client and the backend server in nbio.
do you have anything already done? pipelined requests to the server over a TLS "rpc" would be best. basically I just want to use the client-server connection optimally.
can i sponsor / donate something for this?
if you have it done then I won't need to do this, I guess, and a lot of other people won't need to either. nbio can save on memory and transmit securely.
currently netpoll, gnet, and evio do not support TLS, and the rueidis client is not performing well.
Thank you for your kindness in offering to be a sponsor, but I have not enabled sponsorship. Most importantly, as I said: according to my benchmarks, when there is not a huge number of connections, nbio's non-blocking-mode connections don't perform better than std; they are usually a little slower. A Redis client is exactly that scenario of not many connections, so I don't think we would gain better performance by using nbio, unless the official Redis client's implementation is really bad. Another point: your solution inserts a reverse proxy between the server and the clients, which makes the transport path longer and costs more time.
btw, maybe you are interested in Go RPC, because I see you opened an issue in the hertz repo and mentioned RPC performance... maybe you should try this: https://github.com/lesismal/kitex-benchmark
most importantly, run the test yourself; don't just believe any repo author's performance claims. repo authors should not be both the athlete and the referee; that's not fair...
another point: kitex-benchmark is not enough, because it only tests a few connections (2, 10), and different frameworks may use different numbers of connections. if there are only that few connections, I would suggest implementing the rpc-func logic inside the service that holds the rpc client; then there is no need to make the rpc call and no need to implement such rpc servers.
@lesismal it does not support tls. do you have any workarounds for the kitex one? i was looking at it too, but only yours supports tls.
because i need tls, i was wanting to go back to std go net, but then i benchmarked and realised why all of you want to develop your own net libs: gnet, evio, pain, netpoll, etc.
I don't have solutions for kitex. I write and use my own repo, arpc; you can run the benchmark code and see for yourself whether kitex is faster than arpc or arpc is faster than kitex. In my own tests, using the kitex-benchmark code, arpc is faster. But different environments may give different results, which is why I always tell people to run the benchmarks themselves rather than hand them a report of mine claiming my lib is the fastest; I have often gotten results much different from what the authors claimed. BTW, arpc is not only an RPC framework; it supports more features, and you can use it to implement different kinds of business: IM, gaming, push services, etc.
It depends. If each of your service nodes serves a lot of connections, nbio may work better than std. So, how many connections does your service serve on each node?
@lesismal i understand serving many concurrent connections etc. it's not a server-side solution i'm implementing but a client-side one.
server side i'm using hertz / evio (depending on the service range); they perform great in their own areas: http2, custom redis server.
however, i'm now having an issue with client-side connectivity within a reverse-proxy system, something along the lines of this: https://blog.cloudflare.com/how-to-stop-running-out-of-ephemeral-ports-and-start-to-love-long-lived-connections/
it's a bit of a premature optimization, but i need it for future-proofing; i don't want to come back to this issue again.
so the requirement is:
i would like to write my own client-side proxy to be faster, using 1 connection if possible. otherwise it's fine, because tls is more important. i've tested most of them, and all of them (other than gnet, which has no tls) can't fully utilize multiple cores.
imagine this
on a 128-core processor: tls user (unlimited reqs) -> proxy server listening (80k req/s per core [up to 10,240k req/s]) -> proxy server processing -> proxy server's outgoing client (260k req/s on a 128-core processor [for example]) -> backend
in practice, 10,240k should come out to an estimated 7,680k req/s (25% loss due to Go's multicore inefficiency).
so please help me with this final client-proxy piece; anything that handles tls will do.
1 connection would be best, like the cloudflare implementation.
would you please provide a full example repo that can test your scenario? then I can understand your need better, and I'd like to try to make some PRs to your repo.
@lesismal my repo is not ready to be published; i need to finish this client proxy before anything else. there is too much code that needs refactoring before it can be a public repo.
i realised i may be asking for too much with 1-connection client connectivity, because cloudflare patched theirs with connectx. not sure if you have alternatives etc., BUT the current issue is still the client-side tls processing.
but since this issue is about pipelining, if you have suggestions on pipelining, do mention them. thanks.
if it's for 1 connection, I would suggest using a std connection and implementing send like this: https://github.com/lesismal/arpc/blob/master/client.go#L929
@lesismal does it support tls? i will use it if it does.
I mean you should use std tls; std tls.Conn supports it, of course. The code at the URL below is my arpc async-write implementation: it combines multiple buffers and does the same work as writev to reduce syscalls. https://github.com/lesismal/arpc/blob/master/client.go#L929
i'm a total newbie at setting up tls for arpc; can you provide a code example? i'm waiting on this issue because i think arpc with tls will solve most of the problems i have. will report the findings to you.
when you use a goroutine+chan to implement async write, the chan acts as a queue.
when you handle the buffers/messages in the queue, you can copy multiple buffers into one bigger buffer and write that single big buffer to the conn. if the conn is a tls.Conn, that reduces the number of encrypt/decrypt passes and syscall.Write calls, which gains better throughput.
sorry to say, but I cannot spend that much time providing a full implementation, because the communication and implementation details would cost a lot of time. I can provide consulting services, but that requires a contract and formal payment.
@lesismal i understand. how much would the pricing be for the implementation of this arpc with tls support? how long would it take?
sorry, I am a little confused: what do you mean by "implementation of this arpc"? if you just want arpc, it already supports tls:
https://github.com/lesismal/arpc/tree/master/examples/protocols/tls
first of all, please make clear what you want.
it's better if you read the code and implement it yourself; then we don't need to make a contract, because I am not short of money at present and don't want to be too tired. Most importantly, I am not sure whether that optimization can satisfy your need. The price of a contract with a company would be 2k-10k USD per day; the per-day price depends on the type of service and the length of the engagement. It's too much if you want to pay for it yourself.
Since I won't be providing this much per-user customization help, I'll close this and some other issues.
can you do an example of a tcp server and client with pipelining?
i have no idea how pipelining can be implemented. can you do one? i can buy you coffees for this.