Open funny-falcon opened 12 years ago
I have not looked at the PULL/PUSH mechanism in this code yet, but I don't see a way for it to work other than maintaining connection identity in the request and response headers to and from the upstream worker.
browser ------> nginx zmq --- [connection id: 1] --- [PULL] ---> worker
                   ^                                               |
                   |                                               |
                   +--------- [connection id: 1] --- [PUSH] -------+
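The correlation idea above can be sketched in a few lines. This is a pure-Python stand-in with hypothetical names (`pending`, `X-Connection-Id`, etc.), not the module's actual implementation, which is C code inside nginx's event loop:

```python
# Sketch: remember each browser connection under a connection id, tag the
# PUSHed request with that id, and when a reply arrives on the PULL socket,
# look the connection up again. All names here are hypothetical.

pending = {}  # connection id -> in-flight connection (a stand-in object here)

def send_request(conn_id, body):
    """Remember the connection, then frame the request with its id."""
    pending[conn_id] = body                      # stand-in for the connection
    return {"X-Connection-Id": str(conn_id), "body": body}

def handle_reply(reply):
    """A reply PULLed from a worker: route it back by its connection id."""
    conn_id = int(reply["X-Connection-Id"])
    return pending.pop(conn_id)

req = send_request(1, "GET /")
reply = {"X-Connection-Id": req["X-Connection-Id"], "body": "200 OK"}
conn = handle_reply(reply)                       # the original connection
```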
@deepfryed, I agree with you. Moreover, we ought to pass nginx's PULL socket address if we want one worker to serve several nginx instances.
sounds like over-engineering - i'd rather put a load balancer in the middle instead
Hi guys, sorry for the late reply.
Are zmq sockets integrated into nginx's event loop? I believe so, but I still want a definite answer.
Yes.
Could you provide an example config where both PUSH and PULL are configured for an upstream?
upstream {
    zeromq_remote PUSH 127.0.0.1:10000;
    zeromq_local  PULL 127.0.0.1:*;
}
Will nginx detect a message arriving on the PULL socket as the answer to a message sent on the PUSH socket? If so, how does it do that?
For now, all sockets are transient and per-request, so there is no problem with that.
This will obviously change (it's still very much a work in progress), and then you will need to pass the identification header that came with the request.
How should the client determine the address of the PULL socket configured in nginx?
There is an X-ZeroMQ-RespondTo header in the request, which contains that information.
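For illustration, the worker side of that round trip might look like this. This is a sketch assuming a plain `Name: value` header block; the header name `X-ZeroMQ-RespondTo` comes from the comment above, everything else is hypothetical:

```python
def parse_headers(raw):
    """Parse a simple 'Name: value' header block into a dict."""
    headers = {}
    for line in raw.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip()] = value.strip()
    return headers

# A request as a worker might PULL it off the wire (hypothetical framing).
request = "GET / HTTP/1.0\r\nX-ZeroMQ-RespondTo: tcp://127.0.0.1:54321\r\n"
respond_to = parse_headers(request)["X-ZeroMQ-RespondTo"]
# The worker would now connect a PUSH socket to `respond_to`
# and send the reply there, carrying the request's id back with it.
```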
sounds like over-engineering - i'd rather put a load balancer in the middle instead
For simple request/response - agreed, but that's not the point of this module.
@PiotrSikora, it sounds very promising :) I'll keep an eye on this project.
This will obviously change (it's still very much work in progress) and then you will need to pass identification header that came with the request.
Then there could be a bound PULL socket and a PUSH socket connected to many peers: a simple load balancer with fault tolerance enabled :) Did I get the idea right?
@PiotrSikora re. over-engineering, i meant the single worker serving multiple nginx instances workflow.
@PiotrSikora re. over-engineering, i meant the single worker serving multiple nginx instances workflow.
@deepfryed, I'm thinking about multiple workers serving multiple nginx instances, so that the failure of any worker or any nginx doesn't break the whole thing. Any nginx request will be PULLed by any alive worker thanks to the load-balancing property of the PUSH-PULL chain, and the worker will PUSH the reply, tagged with the request handle, to nginx's PULL socket, whose address will also be encoded in the request.
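The load-balancing property referred to above is that a PUSH socket round-robins messages over its currently alive connected peers, so a dead worker simply stops receiving requests. A pure-Python stand-in for that behavior (real ZeroMQ does this inside the socket; the names here are hypothetical):

```python
def push_round_robin(messages, workers):
    """Distribute messages over alive workers, skipping dead ones,
    the way a PUSH socket round-robins over connected peers."""
    alive = [w for w in workers if w["alive"]]
    for i, msg in enumerate(messages):
        alive[i % len(alive)]["inbox"].append(msg)

workers = [
    {"name": "w1", "alive": True,  "inbox": []},
    {"name": "w2", "alive": False, "inbox": []},  # failed worker
    {"name": "w3", "alive": True,  "inbox": []},
]
push_round_robin(["r1", "r2", "r3", "r4"], workers)
# w2 receives nothing; r1..r4 are split between w1 and w3
```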
Good day.
First of all, excuse me for asking questions instead of reading the sources.
With respect, Yura