chuang39 opened this issue 8 years ago
@chuang39 The "excessively sparse array" error comes from cjson [1]. Most likely your broker IDs are not contiguous, so cjson refuses to encode the brokers table as a JSON array. I think you need `pairs` instead of `cjson.encode` to dump the brokers, like:

```lua
for brokerid, host in pairs(brokers) do
    ngx.say(brokerid, ": ", cjson.encode(host))
end
```

I'm not sure what your dump code in `send_receive` looks like, but it seems unreasonable to me.
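For reference, stock lua-cjson rejects an array-like table when its largest integer key is both greater than 10 and more than twice the element count (the defaults of `encode_sparse_array`). A minimal sketch outside nginx, assuming lua-cjson is installed and using broker IDs 1 and 20 as stand-ins:

```lua
local cjson = require "cjson"

-- contiguous keys encode fine as a JSON array
print(cjson.encode({ "broker-a", "broker-b" }))

-- non-contiguous broker IDs trip the sparse-array check
local brokers = { [1] = "broker-a", [20] = "broker-b" }
local ok, err = pcall(cjson.encode, brokers)
print(ok, err)  -- ok == false; err mentions "excessively sparse array"

-- iterating with pairs sidesteps the encoder entirely
for brokerid, host in pairs(brokers) do
    print(brokerid, host)
end
```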
Aha, you're right, it works now. Btw, fetch_metadata communicates with Kafka in a pre-defined stream format. I was wondering where that format is defined? Could you kindly give a pointer to it?
Also, one issue I'm having is that the metadata returned from the broker contains a hostname like node1.domain.com in the host field. Nginx cannot resolve it and returns the following error: `send err: no resolver defined to resolve "nhhad12.fractionalmedia.com"`. One solution I can think of is to read /etc/hosts in advance and cache it in a table somewhere. Wondering if you have any suggestion on this? Thanks!
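The /etc/hosts workaround mentioned above could be sketched roughly like this (a hypothetical helper, not part of lua-resty-kafka; it takes the first name on each line and ignores additional aliases):

```lua
-- Hypothetical helper: read /etc/hosts once and cache hostname -> IP.
local hosts_cache = {}

local function load_hosts(path)
    local f = io.open(path or "/etc/hosts", "r")
    if not f then return end
    for line in f:lines() do
        line = line:gsub("#.*$", "")                    -- strip comments
        local ip, name = line:match("^%s*(%S+)%s+(%S+)")
        if ip and name then
            hosts_cache[name] = ip
        end
    end
    f:close()
end

load_hosts()
-- before connecting, substitute the IP if the hostname is known:
-- local addr = hosts_cache[host] or host
```

Alternatively, the "no resolver defined" error itself goes away if you declare a `resolver` (e.g. `resolver 8.8.8.8;`) in the nginx configuration, so cosocket connects can resolve hostnames directly.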
Found the protocol doc that defines the message format: https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
I'll check whether I can change the Kafka broker metadata to make it return an IP rather than a hostname. Thanks!
@chuang39 You need to set host.name=IP in your Kafka server config. As far as I know, this is the only way to make Kafka return an IP.
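For older Kafka releases that still use `host.name` (as suggested above), the relevant `server.properties` lines would look like the sketch below; newer brokers express the same idea through `advertised.listeners`. The IP is a placeholder:

```
# server.properties (placeholder IP; adjust for your broker)
host.name=10.0.0.12
advertised.host.name=10.0.0.12
```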
I added the request handler following the example in the README. However, fetch_metadata() seems to return an empty result, which causes the error.
```lua
local cli = client:new(broker_list)
local brokers, partitions = cli:fetch_metadata("exp_khuang")
ngx.say("brokers: ", cjson.encode(brokers),
        "; partitions: ", cjson.encode(partitions))  -- Error thrown here.
```
I noticed the traceback goes client:fetch_metadata() -> broker:send_receive(). I added some dump statements in send_receive(): the request:packet() sent to the Kafka broker node is "true", and the received data is nil. I'm not sure where it goes wrong. Could anyone kindly suggest? Thanks a lot!
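One possible cause of seeing "true" is dumping the return value of `sock:send()` instead of the packet itself; a dump along these lines inside send_receive would show the raw bytes instead (hypothetical placement; lua-resty-kafka's actual send_receive internals may differ):

```lua
-- Hypothetical debug lines inside broker:send_receive();
-- log the packet bytes, not the boolean returned by sock:send().
local data = request:packet()
ngx.log(ngx.DEBUG, "packet bytes: ", #data,
        " hex head: ", (string.sub(data, 1, 16):gsub(".", function(c)
            return string.format("%02x ", string.byte(c))
        end)))
```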