justonedb / kafka-sink-pg-json

Kafka sink connector for streaming messages to PostgreSQL
MIT License

No current assignment for partition #19

Open abhishekmaheshwari86 opened 6 years ago

abhishekmaheshwari86 commented 6 years ago

Hi All,

I am using the Kafka client libraries below:

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>1.1.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
        <version>1.1.0</version>
    </dependency>

With these client versions I am facing the issue below:

    java.lang.IllegalStateException: No current assignment for partition METRICS-METRICS_STORE_EXPERIMENT-changelog-15
        at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:259) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.SubscriptionState.resetFailed(SubscriptionState.java:413) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.Fetcher$2.onFailure(Fetcher.java:595) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.RequestFuture.fireFailure(RequestFuture.java:177) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:147) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.RequestFutureAdapter.onFailure(RequestFutureAdapter.java:30) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onFailure(RequestFuture.java:209) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.RequestFuture.fireFailure(RequestFuture.java:177) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:147) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:559) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:390) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:293) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:209) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:271) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.internals.Fetcher.getAllTopicMetadata(Fetcher.java:251) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.clients.consumer.KafkaConsumer.listTopics(KafkaConsumer.java:1526) ~[kafka-clients-1.1.0.jar!/:na]
        at org.apache.kafka.streams.processor.internals.StoreChangelogReader.refreshChangelogInfo(StoreChangelogReader.java:216) ~[kafka-streams-1.1.0.jar!/:na]
        at org.apache.kafka.streams.processor.internals.StoreChangelogReader.initialize(StoreChangelogReader.java:113) ~[kafka-streams-1.1.0.jar!/:na]
        at org.apache.kafka.streams.processor.internals.StoreChangelogReader.restore(StoreChangelogReader.java:74) ~[kafka-streams-1.1.0.jar!/:na]
        at org.apache.kafka.streams.processor.internals.TaskManager.updateNewAndRestoringTasks(TaskManager.java:319) ~[kafka-streams-1.1.0.jar!/:na]
        at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:789) ~[kafka-streams-1.1.0.jar!/:na]
        at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750) ~[kafka-streams-1.1.0.jar!/:na]
        at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720) ~[kafka-streams-1.1.0.jar!/:na]

Kafka broker version: 1.0.0
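
For context, here is a short sketch of how the partition named in the trace arises. This is illustrative only: the application.id and store name are inferred from the changelog topic in the stack trace, and the input topic and broker address are placeholders, not taken from the actual application. Kafka Streams backs a state store with an internal changelog topic named <application.id>-<store name>-changelog, so a store "METRICS_STORE_EXPERIMENT" under application.id "METRICS" produces the topic METRICS-METRICS_STORE_EXPERIMENT-changelog that the restore thread fails on:

    import java.util.Properties;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;

    public class MetricsStoreSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // application.id is used as the prefix of every internal topic name
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "METRICS");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> metrics = builder.stream("metrics-input"); // hypothetical input topic

            // A store named "METRICS_STORE_EXPERIMENT" is backed by the changelog topic
            // METRICS-METRICS_STORE_EXPERIMENT-changelog, whose partition 15 appears
            // in the IllegalStateException during state restoration.
            metrics.groupByKey()
                   .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("METRICS_STORE_EXPERIMENT"));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
        }
    }

As the trace shows, the exception is raised inside the kafka-clients 1.1.0 consumer internals while the kafka-streams StoreChangelogReader is restoring that changelog partition.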

hartmut-co-uk commented 6 years ago

Hi, please have a look at https://github.com/justonedb/kafka-sink-pg-json/pull/5, where this has been partly addressed. You may need to fork and fix from the current version.

Sadly, the repo doesn't seem to be actively maintained anymore. We have a fixed and improved fork up and running; I might try to open-source it, or reach out to the original maintainer to get the project back on track...

Would that be an option @duncanpauly?