raintank / raintank-docker

raintank docker images and dev stack (DEPRECATED / UNMAINTAINED)
https://blog.raintank.io/docker-based-development-environment/

kafka failed to read chunk -> Unexpected (unknown?) server error #67

Closed · Dieterbe closed this 8 years ago

Dieterbe commented 8 years ago

fake_metrics:

2016/07/09 21:26:30 [log.go:209 writerMsg()] [E] kafka: Failed to produce message to topic mdm: kafka server: Unexpected (unknown?) server error.
[same line repeated 9 more times]
2016/07/09 21:26:30 [log.go:209 writerMsg()] [E] kafka: Failed to deliver 198 messages.

kafka-stdout:

[2016-07-09 21:19:28,682] ERROR [Replica Manager on Broker 0]: Error processing append operation on partition mdm-0 (kafka.server.ReplicaManager)
kafka.common.KafkaException: 
    at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:159)
    at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:85)
    at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
    at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
    at kafka.message.ByteBufferMessageSet$$anon$2.makeNextOuter(ByteBufferMessageSet.scala:357)
    at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:369)
    at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:324)
    at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
    at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
    at kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:427)
    at kafka.log.Log.liftedTree1$1(Log.scala:339)
    at kafka.log.Log.append(Log.scala:338)
    at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:443)
    at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:429)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
    at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:237)
    at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
    at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:406)
    at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:392)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:392)
    at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:328)
    at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:405)
    at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
    at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: failed to read chunk
    at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:433)
    at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:167)
    at java.io.DataInputStream.readFully(DataInputStream.java:195)
    at java.io.DataInputStream.readLong(DataInputStream.java:416)
    at kafka.message.ByteBufferMessageSet$$anon$1.readMessageFromStream(ByteBufferMessageSet.scala:118)
    at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:153)
    ... 35 more
[the same KafkaException / "failed to read chunk" stack trace repeats at 21:19:35,887, 21:19:36,183, 21:19:56,987 and 21:20:01,597]

Probably due to https://issues.apache.org/jira/browse/KAFKA-3764, which points at the snappy library bundled with the broker:

root@kafka:/# find . -name '*snappy*'
./opt/kafka_2.11-0.10.0.0/libs/snappy-java-1.1.2.4.jar
./tmp/snappy-1.1.2-08aeb95b-078e-431c-b5cf-dcc5c375e5fa-libsnappyjava.so
root@kafka:/# 

Will upgrade the snappy lib in the kafka image.
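
For reference, a minimal sketch of what that upgrade could look like inside the kafka container; the 1.1.2.6 target version and the Maven Central URL are assumptions on my part, not something confirmed anywhere in this thread:

# hypothetical sketch: swap the bundled snappy-java jar for a newer release
cd /opt/kafka_2.11-0.10.0.0/libs
mv snappy-java-1.1.2.4.jar /tmp/    # keep the old jar around just in case
curl -fLO https://repo1.maven.org/maven2/org/xerial/snappy/snappy-java/1.1.2.6/snappy-java-1.1.2.6.jar
# restart the broker so it picks up the new jar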

Dieterbe commented 8 years ago

Kafka upstream now also "officially" uses that snappy version: https://github.com/apache/kafka/pull/1467
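
For anyone landing here later, a quick sanity check against a newer Kafka release tarball (0.10.0.1 is my assumption for the first release that includes that bump; check the PR to be sure) would be something like:

# assumed: the 0.10.0.1 tarball already ships the bumped snappy-java
tar tzf kafka_2.11-0.10.0.1.tgz | grep snappy-java
# anything newer than snappy-java-1.1.2.4.jar means the broker no longer carries the buggy version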