mailgun / kafka-pixy

gRPC/REST proxy for Kafka
Apache License 2.0

Update Version on DockerHub #173

Closed Antykor closed 5 years ago

Antykor commented 5 years ago

Hello,

It's all in the title ^^ I would like to know when the image on Docker Hub will be updated?

horkhe commented 5 years ago

I wonder what particular feature you need?

Antykor commented 5 years ago

I need the one related to #160 (HTTPS/SSL cert support, added to the repo 3 months ago).

horkhe commented 5 years ago

I have rolled out a new release https://github.com/mailgun/kafka-pixy/releases/tag/v0.17.0. The latest version has also been uploaded to docker hub. Please feel free to reopen this issue if something is not working for you.

Antykor commented 5 years ago

Thx a lot ;)

Antykor commented 5 years ago

Hello @horkhe,

Thanks a lot for the new feature, but I didn't understand the change to the clientID, and I assume my new error is coming from there:

/ # kafka-pixy --config usr/kafka-pixy-conf/config/default.yaml 
2019-07-22 13:39:12.230391 Z info "Starting with config: &{GRPCAddr:0.0.0.0:19091 TCPAddr:0.0.0.0:19092 UnixAddr: Proxies:map[default:0xc0000eec60] DefaultCluster:default TLS:{CertPath:/usr/kafka-pixy-conf/crt/server.crt KeyPath:/usr/kafka-pixy-conf/key/server.key}}" 
2019-07-22 13:39:12.230452 Z info "Initializing new client" category=sarama 
2019-07-22 13:39:12.230594 Z error "Failed to start service: err=(failed to spawn proxy, name=default: failed to create Kafka client: kafka: invalid configuration (ClientID is invalid))" 

Do you have any ideas?
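For context, that error text comes from sarama's client configuration validation, which, as far as I can tell, only accepts client IDs made of letters, digits, dots, dashes and underscores. A minimal Go sketch, assuming sarama's public Config.Validate(), to check whether a candidate client_id would be accepted:

package main

import (
	"fmt"

	"github.com/Shopify/sarama"
)

func main() {
	// Hypothetical candidate ID; substitute whatever value kafka-pixy would
	// end up using (e.g. the pod's hostname in a Kubernetes deployment).
	candidate := "kafka-pixy-0"

	cfg := sarama.NewConfig()
	cfg.ClientID = candidate

	// Validate() returns a ConfigurationError such as "ClientID is invalid"
	// when the ID contains characters sarama does not allow.
	if err := cfg.Validate(); err != nil {
		fmt.Printf("client ID %q rejected: %v\n", candidate, err)
		return
	}
	fmt.Printf("client ID %q accepted\n", candidate)
}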

horkhe commented 5 years ago

Hmm, it works for me. What is your Kafka version? Did you make any changes in the default.yaml?

Antykor commented 5 years ago

I'm using this to spawn a Kafka instance easily. The problem may be that I'm trying to run kafka-pixy in a Kubernetes pod? This is my default.yaml:

# TCP address that gRPC API server should listen on.
grpc_addr: 0.0.0.0:19091

# TCP address that RESTful API server should listen on.
tcp_addr: 0.0.0.0:19092

# Unix domain socket address that RESTful API server should listen on.
# Listening on a unix domain socket is disabled by default.
# unix_addr: "/var/run/kafka-pixy.sock"

# A map of cluster names to respective proxy configurations. The first proxy
# in the map is considered to be `default`. It is used in API calls that do not
# specify cluster name explicitly.
proxies:

  # Name of a Kafka+ZooKeeper cluster. The only requirement to the name is that
  # it should be unique in this config file. The value of this parameter is
  # a configuration of a proxy to access the cluster.
  default:

    # Unique ID that identifies a Kafka-Pixy instance in both ZooKeeper and
    # Kafka. It is automatically generated by default and it is recommended to
    # leave it like that.
    # client_id: AUTOGENERATED

    # Kafka parameters section.
    kafka:

      # List of seed Kafka peers that Kafka-Pixy should access to resolve the
      # Kafka cluster topology.
      seed_peers:
        - 192.168.100.129:9092

      # Version of the Kafka cluster. Supported versions are 0.10.2.1 - 2.0.0
      version: 0.10.2.1

    # Networking parameters section. These all pass through to sarama's
    # `config.Net` field.
    net:

      # How long to wait for the initial connection.
      dial_timeout: 30s

      # How long to wait for a response.
      read_timeout: 30s

      # How long to wait for a transmit.
      write_timeout: 30s

    # ZooKeeper parameters section.
    zoo_keeper:

      # List of seed ZooKeeper peers that Kafka-Pixy should access to resolve the
      # ZooKeeper cluster topology.
      seed_peers:
        - 192.168.100.129:2181

      # A root directory in ZooKeeper to store consumers data.
      # chroot: ""

      # ZooKeeper session timeout has to be a minimum of 2 times the tickTime
      # (as set in the server configuration) and a maximum of 20 times the
      # tickTime. The default ZooKeeper tickTime is 2 seconds.
      #
      # See http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkSessions
      session_timeout: 15s

    # Producer parameters section.
    producer:

      # Size of all buffered channels created by the producer module.
      channel_buffer_size: 4096

      # The maximum permitted size of a message (defaults to 1000000). Should be
      # set equal to or smaller than the broker's `message.max.bytes`.
      max_message_bytes: 1000000

      # The type of compression to use on messages. Allowed values are:
      # none, gzip, snappy, and lz4.
      compression: snappy

      # The best-effort number of bytes needed to trigger a flush.
      flush_bytes: 1048576

      # The best-effort frequency of flushes.
      flush_frequency: 500ms

      # How long to wait for the cluster to settle between retries.
      retry_backoff: 10s

      # The total number of times to retry sending a message before giving up.
      retry_max: 6

      # The level of acknowledgement reliability needed from the broker.
      # Allowed values are:
      #  * no_response:    the broker doesn't send any response, the TCP ACK
      #                    is all you get.
      #  * wait_for_local: the broker responds as soon as it commits to the
      #                    local disk.
      #  * wait_for_all:   the broker waits for all in-sync replicas to commit
      #                    before responding.
      required_acks: wait_for_all

      # Period of time that Kafka-Pixy should keep trying to submit buffered
      # messages to Kafka. It is recommended to make it large enough to survive
      # a ZooKeeper leader election in your setup.
      shutdown_timeout: 30s

      # How to assign incoming messages to a Kafka partition. Defaults to using
      # a hash of the specified message key, or random if the key is
      # unspecified. Allowed values are:
      #  * hash:       for messages with a key, take the FNV-1a hash of the
      #                bytes, modulus the number of partitions; otherwise use a
      #                random partition.
      #  * random:     all messages are published to a random partition.
      #  * roundrobin: iterate over partitions sequentially.
      partitioner: hash

      # The timeout to specify on individual produce requests to the broker. The
      # broker will wait for replication to complete up to this duration before
      # returning an error.
      timeout: 10s

    # Consumer parameters section.
    consumer:

      # If set, Kafka-Pixy will not configure a consumer, and any attempts to
      # call the consumer APIs will return an error.
      disabled: false

      # Period of time that Kafka-Pixy should wait for an acknowledgement
      # before retrying.
      ack_timeout: 5m

      # Size of all buffered channels created by the consumer module.
      channel_buffer_size: 64

      # The number of bytes of messages to attempt to fetch for each
      # topic-partition in each fetch request. These bytes will be read into
      # memory for each partition, so this helps control the memory used by
      # the consumer. The fetch request size must be at least as large as
      # the maximum message size the server allows or else it is possible
      # for the producer to send messages larger than the consumer can fetch.
      fetch_max_bytes: 1048576

      # The maximum amount of time the server will block before answering
      # the fetch request if there isn't data immediately available.
      fetch_max_wait: 250ms

      # A consume request will wait at most this long for a message from a
      # topic to become available before expiring.
      long_polling_timeout: 3s

      # The maximum number of unacknowledged messages allowed for a particular
      # group-topic-partition at a time. When this number is reached subsequent
      # consume requests will return long polling timeout errors, until some of
      # the pending messages are acknowledged.
      max_pending_messages: 300

      # The maximum number of retries Kafka-Pixy will make to offer an
      # unacknowledged message. Messages that exceed the number of retries are
      # discarded by Kafka-Pixy and acknowledged in Kafka. Zero retries
      # means that messages will be offered just once.
      #
      # If you want Kafka-Pixy to retry indefinitely, then set this
      # parameter to -1.
      max_retries: -1

      # How frequently to commit offsets to Kafka.
      offsets_commit_interval: 500ms

      # If a request to Kafka fails for any reason, then Kafka-Pixy should wait
      # this long before retrying.
      retry_backoff: 500ms

      # Period of time that Kafka-Pixy should keep a subscription for a
      # topic by a group in the absence of requests from the consumer group.
      subscription_timeout: 15s

# Configuration for securely accessing the gRPC and web servers
tls:

  # Path to the server certificate file.
  # Required if using gRPC SSL/TLS or HTTPS.
  certificate_path: /usr/kafka-pixy-conf/crt/server.crt

  # Path to the server certificate key file.
  # Required if using gRPC SSL/TLS or HTTPS.
  key_path: /usr/kafka-pixy-conf/key/server.key
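The tls section above is what enables the gRPC TLS / HTTPS support added in #160. For reference, a minimal, hypothetical sketch of a Go gRPC client configured to trust that server certificate (the address and file path are assumptions taken from the config above; the actual kafka-pixy RPCs would be issued through the generated client on top of this connection):

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// Trust the certificate kafka-pixy serves; the empty string keeps the
	// server name that is embedded in the certificate.
	creds, err := credentials.NewClientTLSFromFile("/usr/kafka-pixy-conf/crt/server.crt", "")
	if err != nil {
		log.Fatalf("failed to load server certificate: %v", err)
	}

	// Dial the gRPC endpoint from grpc_addr over TLS. grpc.Dial is lazy, so
	// the TLS handshake only happens when the first RPC is made.
	conn, err := grpc.Dial("127.0.0.1:19091", grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatalf("failed to set up connection: %v", err)
	}
	defer conn.Close()

	log.Println("gRPC client configured for TLS")
}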

Antykor commented 5 years ago

I got it: I specified the client_id in default.yaml and it works. Thanks again for your work and your help! Teamwork makes the dream work!
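For anyone who hits the same error, the fix boils down to uncommenting client_id in the proxy section of default.yaml and setting it to an explicit value that only uses characters sarama accepts (letters, digits, dots, dashes and underscores). The value below is just an example:

proxies:
  default:
    # Explicitly set instead of relying on autogeneration.
    client_id: kafka-pixy-k8s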