
All readers failed with medium IO workload due to BK ledger out of space & SegmentStore OOM #4008

deenav closed this issue 5 years ago

deenav commented 5 years ago

Problem description: While running a longevity test (4 worker pods, each with 4 writers and 8 readers) with autoscaling enabled based on "EVENTS_PER_SECOND" on Pravega 0.5.0-2291.3ccff63, readers started failing within 30 minutes of the run, and after ~10 hours all readers were stalled.

NOTE: This test ran consistently for more than 5 days on the Pravega 0.5.0-2284.8c6176 build. With the 0.5.0-2291.3ccff63 build, the same test has failed multiple times (more than 3) with a similar error after running for 1 to ~10 hours.

From the logs, I see that SegmentStore-1 restarted due to a java.lang.OutOfMemoryError: Java heap space error, and following that a couple of Bookie pods moved to the not-ready (0/1) state due to ERROR: Ledger directory /bk/ledgers/current is out-of-space.

Tier 2 was accepting data, but Tier 1 data appears to accumulate so fast that the Bookies run out of ledger space.

SegmentStore-1 log snippet:

# cat log_pod_pravega-pravega-segmentstore-1_previous.log| grep ERROR
2019-07-04 10:13:51,518 2601529 [BookKeeperClientWorker-OrderedExecutor-2-0] ERROR o.a.bookkeeper.client.PendingAddOp - Write of ledger entry to quorum failed: L690 E10043
2019-07-04 10:15:41,745 2711756 [BookKeeperClientWorker-OrderedExecutor-1-0] ERROR o.a.bookkeeper.client.PendingAddOp - Write of ledger entry to quorum failed: L73 E10534
2019-07-04 10:19:03,143 2913154 [BookKeeperClientWorker-OrderedExecutor-1-0] ERROR o.a.bookkeeper.client.PendingAddOp - Write of ledger entry to quorum failed: L1149 E4833
2019-07-04 10:20:22,108 2992119 [BookKeeperClientWorker-OrderedExecutor-2-0] ERROR o.a.bookkeeper.client.PendingAddOp - Write of ledger entry to quorum failed: L1158 E3597
2019-07-04 10:23:45,486 3195497 [BookKeeperClientWorker-OrderedExecutor-2-0] ERROR o.a.bookkeeper.client.PendingAddOp - Write of ledger entry to quorum failed: L78 E13356
2019-07-04 10:06:13,109 2143120 [epollEventLoopGroup-11-8] ERROR i.p.s.s.h.h.ServerConnectionInboundHandler - Caught exception on connection:
java.lang.IllegalStateException: Data from unexpected connection: 5c2b61cc-34e2-476d-8eae-7203aa340af9.
        at com.google.common.base.Preconditions.checkState(Preconditions.java:588)
        at io.pravega.segmentstore.server.host.handler.AppendProcessor.append(AppendProcessor.java:409)
        at io.pravega.shared.protocol.netty.Append.process(Append.java:55)
2019-07-04 10:24:52,082 3262093 [core-23] INFO  i.p.s.s.h.ZKSegmentContainerMonitor - Container Changes: Desired = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], Current = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], PendingTasks = [], ToStart = [], ToStop = [].
2019-07-04 10:25:02,083 3272094 [core-1] INFO  i.p.s.s.h.ZKSegmentContainerMonitor - Container Changes: Desired = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], Current = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], PendingTasks = [], ToStart = [], ToStop = [].
2019-07-04 10:25:12,169 3282180 [core-26] INFO  i.p.s.s.h.ZKSegmentContainerMonitor - Container Changes: Desired = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], Current = [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], PendingTasks = [], ToStart = [], ToStop = [].
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid1.hprof ...
Heap dump file created [2671104261 bytes in 6.337 secs]
Aborting due to java.lang.OutOfMemoryError: Java heap space
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (debug.cpp:308), pid=1, tid=0x00007fd63fafcae0
#  fatal error: OutOfMemory encountered: Java heap space
#
# JRE version: OpenJDK Runtime Environment (8.0_181-b13) (build 1.8.0_181-b13)
# Java VM: OpenJDK 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 3.9.0
# Distribution: Custom build (Tue Oct 23 11:27:22 UTC 2018)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/pravega/hs_err_pid1.log
#
# If you would like to submit a bug report, please include
# instructions on how to reproduce the bug and visit:
#   http://icedtea.classpath.org/bugzilla
#
"log_pod_pravega-pravega-segmentstore-1_previous.log" 10811L, 2303693C         

Bookkeeper-0 log snippet:

2019-07-04 20:03:49,208 - INFO  - [SortedLedgerStorage-0:SortedLedgerStorage$1@211] - Started flushing mem table.
2019-07-04 20:03:50,422 - ERROR - [LedgerDirsMonitorThread:DiskChecker@156] - Space left on device /bk/ledgers/current : 5237764096, Used space fraction: 0.9503789 < threshold 0.95.
2019-07-04 20:03:50,424 - ERROR - [LedgerDirsMonitorThread:LedgerDirsMonitor@97] - Ledger directory /bk/ledgers/current is out-of-space : usage 0.9503789
2019-07-04 20:03:50,425 - WARN  - [LedgerDirsMonitorThread:LedgerDirsManager@232] - /bk/ledgers/current is out of space. Adding it to filled dirs list
2019-07-04 20:03:50,425 - INFO  - [LedgerDirsMonitorThread:GarbageCollectorThread@309] - Suspend Minor Compaction triggered by thread: LedgerDirsMonitorThread
2019-07-04 20:03:50,440 - ERROR - [LedgerDirsMonitorThread:LedgerDirsManager@160] - All ledger directories are non writable
org.apache.bookkeeper.bookie.LedgerDirsManager$NoWritableLedgerDirException: All ledger directories are non writable
        at org.apache.bookkeeper.bookie.LedgerDirsManager.getWritableLedgerDirs(LedgerDirsManager.java:158)
        at org.apache.bookkeeper.bookie.LedgerDirsMonitor.check(LedgerDirsMonitor.java:109)
        at org.apache.bookkeeper.bookie.LedgerDirsMonitor.lambda$start$2(LedgerDirsMonitor.java:190)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Bookie-3 log snippet:

2019-07-04 13:43:06,022 - INFO  - [SortedLedgerStorage-0:SortedLedgerStorage$1@211] - Started flushing mem table.
2019-07-04 13:43:06,390 - ERROR - [LedgerDirsMonitorThread:DiskChecker@156] - Space left on device /bk/ledgers/current : 5175246848, Used space fraction: 0.9509712 < threshold 0.95.
2019-07-04 13:43:06,391 - ERROR - [LedgerDirsMonitorThread:LedgerDirsMonitor@97] - Ledger directory /bk/ledgers/current is out-of-space : usage 0.9509712
2019-07-04 13:43:06,391 - WARN  - [LedgerDirsMonitorThread:LedgerDirsManager@232] - /bk/ledgers/current is out of space. Adding it to filled dirs list
2019-07-04 13:43:06,391 - INFO  - [LedgerDirsMonitorThread:GarbageCollectorThread@309] - Suspend Minor Compaction triggered by thread: LedgerDirsMonitorThread
2019-07-04 13:43:06,392 - ERROR - [LedgerDirsMonitorThread:LedgerDirsManager@160] - All ledger directories are non writable
org.apache.bookkeeper.bookie.LedgerDirsManager$NoWritableLedgerDirException: All ledger directories are non writable
        at org.apache.bookkeeper.bookie.LedgerDirsManager.getWritableLedgerDirs(LedgerDirsManager.java:158)
        at org.apache.bookkeeper.bookie.LedgerDirsMonitor.check(LedgerDirsMonitor.java:109)
        at org.apache.bookkeeper.bookie.LedgerDirsMonitor.lambda$start$2(LedgerDirsMonitor.java:190)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Pod status:

$ kubectl get po --all-namespaces
NAMESPACE           NAME                                                 READY   STATUS      RESTARTS   AGE
default             pravega-bookie-0                                     0/1     Running     0          25h
default             pravega-bookie-1                                     1/1     Running     0          25h
default             pravega-bookie-2                                     1/1     Running     0          25h
default             pravega-bookie-3                                     0/1     Running     0          25h
default             pravega-operator-77554dc594-v2n2l                    1/1     Running     0          25h
default             pravega-pravega-controller-849c764b47-h49vr          1/1     Running     0          25h
default             pravega-pravega-controller-849c764b47-jn5z4          1/1     Running     0          25h
default             pravega-pravega-segmentstore-0                       1/1     Running     0          25h
default             pravega-pravega-segmentstore-1                       1/1     Running     1          25h
default             pravega-pravega-segmentstore-2                       1/1     Running     0          25h
default             pravega-zk-0                                         1/1     Running     0          25h
default             pravega-zk-1                                         1/1     Running     0          25h
default             pravega-zk-2                                         1/1     Running     0          25h
default             shaka-zulu-nfs-client-provisioner-5bcd87b9dc-49jg9   1/1     Running     0          25h
default             zookeeper-operator-747c796d9-zlm48                   1/1     Running     0          25h
pravega-longevity   pod/mid-scale-manager-7bf76b5686-m8p54                   1/1     Running     0          23h
pravega-longevity   pod/mid-scale-worker-5db76cbcf7-6gvw2                    1/1     Running     0          23h
pravega-longevity   pod/mid-scale-worker-5db76cbcf7-rns6p                    1/1     Running     0          23h
pravega-longevity   pod/mid-scale-worker-5db76cbcf7-t4kj6                    1/1     Running     0          23h
pravega-longevity   pod/mid-scale-worker-5db76cbcf7-vlnxm                    1/1     Running     0          23h
deenav commented 5 years ago

Sharing the cluster logs on an internal channel due to size constraints.

fpj commented 5 years ago

Please share more details about the deployment, such as:

fpj commented 5 years ago

Is this the workload that you are running?

mid-scale: |
    {
      "type": "Pravega",
      "name": "longevity/midScale",
      "forever": true,
      "scope": "longevity",
      "stream": "midScale",
      "payload": {
        "numberOfKeys": 512,
        "dynamicKeyProbability": true
      },
      "createStream": true,
      "throttle": {
        "maxEventsPerSecond": 4000,
        "maxOutstandingAcks": 4000,
        "dynamicThrottlePeriod": 180
      },
      "streamPolicies": {
        "scalingType": "EVENTS_PER_SECOND",
        "targetRate": 2000,
        "scaleFactor": 2,
        "minNumSegments": 8,
        "retentionType": "LIMITED_TIME_MILLIS",
        "retentionLimit": 3600000
      },
      "tasks": [
        {
          "numReaders": 8,
          "numWriters": 4,
          "duplicates": 4
        }
      ],
      "longevityAssertions": {
        "isRunningState": null,
        "hasAtLeastXActiveReaders": 32,
        "hasAtLeastXActiveWriters": 16,
        "readerBytesAreIncreasing": null,
        "writerBytesAreIncreasing": null,
        "eventsInSequence": null
      }
    }

https://github.com/pravega/pravega-longevity/blob/master/charts/pravega-longevity/templates/pravega-longevity-tests.yaml#L476

deenav commented 5 years ago

@fpj Yes, I was using the same workload you posted above. Please see my inline responses to your other questions below.

What's the size of the BK volumes that you are using?

The BK volumes (ledgers & journals) are 100 GB each; this is the same manifest we have been using for all test runs, including the 0.5.0-2284 build, where there was no problem.

/dev/sdg         99G   12G   87G  13% /bk/journal
/dev/sde         99G   94G  4.9G  96% /bk/ledgers
/dev/sdf        9.8G   40M  9.7G   1% /bk/index
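
For reference, this is roughly where those sizes are declared in our PravegaCluster manifest; a minimal sketch, assuming the pravega-operator CRD layout of that time (spec.bookkeeper.storage with ledger/journal volume claim templates). The storageClassName value is a placeholder, not our exact setting:

spec:
  bookkeeper:
    storage:
      ledgerVolumeClaimTemplate:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard   # placeholder; use the cluster's storage class
        resources:
          requests:
            storage: 100Gi           # ledger volume size used in these runs
      journalVolumeClaimTemplate:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard
        resources:
          requests:
            storage: 100Gi           # journal volume size used in these runs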

Are Tier 2 writes happening at all? If so, at what rate?

Yes, Tier 2 writes were happening, but I am not sure about the rate.

Is Tier 2 full or shared?

Tier 2 is based on an Isilon backend and shared across other environments, but it is not full (see below):

[deena@master pravega-k8s-logs]$ kubectl exec -it pravega-pravega-segmentstore-0 /bin/sh
/opt/pravega #
/opt/pravega # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  49.1G      3.3G     43.2G   7% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                    15.7G         0     15.7G   0% /sys/fs/cgroup
10.249.249.131:/ifs/pravega-test/pks/default-pravega-tier2-pvc-8ffb99f8-9e3b-11e9-b8e4-005056ba9070
                        194.7T     34.2T    156.7T  18% /mnt/tier2
RaulGracia commented 5 years ago

The main problem described in this issue is related to a mismatch among:

I have executed a run of 0.5 with more aggressive BookKeeper data compaction parameters:

minorCompactionThreshold: "0.4"
minorCompactionInterval: "1800"
majorCompactionThreshold: "0.8"
majorCompactionInterval: "43200"
isForceGCAllowWhenNoSpace: "true"
diskUsageThreshold: "0.90"
diskUsageWarnThreshold: "0.85"
diskUsageLwmThreshold: "0.85"

Apart from that, I have also used the correct prefix for the RocksDB params:

rocksdb.writeBufferSizeMB: "512"
rocksdb.readCacheSizeMB: "2048"
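
Putting it together, a minimal sketch of how these settings would look in the PravegaCluster manifest, assuming the pravega-operator convention of that time where BookKeeper server options are passed as a string map under spec.bookkeeper.options (values taken from the parameters above):

spec:
  bookkeeper:
    options:
      # More aggressive ledger compaction and earlier disk-usage back-off
      minorCompactionThreshold: "0.4"
      minorCompactionInterval: "1800"
      majorCompactionThreshold: "0.8"
      majorCompactionInterval: "43200"
      isForceGCAllowWhenNoSpace: "true"
      diskUsageThreshold: "0.90"
      diskUsageWarnThreshold: "0.85"
      diskUsageLwmThreshold: "0.85"
      # RocksDB parameters with the correct "rocksdb." prefix
      rocksdb.writeBufferSizeMB: "512"
      rocksdb.readCacheSizeMB: "2048"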

After more than 11 hours of executing the mid-scale longevity workload, the Bookies have been able to keep the usage of their volumes at reasonable levels:

kubectl exec -it pravega-bookie-0 -- df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        9.8G   60M  9.8G   1% /bk/index
/dev/sda1        95G   21G   74G  23% /etc/hosts
/dev/sdd         98G   65G   34G  67% /bk/ledgers
/dev/sdc         98G   13G   85G  14% /bk/journal
kubectl exec -it pravega-bookie-1 -- df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd        9.8G   65M  9.8G   1% /bk/index
/dev/sda1        95G   25G   70G  26% /etc/hosts
/dev/sdb         98G   26G   73G  26% /bk/ledgers
/dev/sdc         98G   13G   86G  13% /bk/journal
kubectl exec -it pravega-bookie-2 -- df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        95G  8.7G   86G  10% /etc/hosts
/dev/sde         98G   24G   75G  24% /bk/ledgers
/dev/sdc         98G   12G   87G  12% /bk/journal
/dev/sdd        9.8G   65M  9.8G   1% /bk/index
kubectl exec -it pravega-bookie-3 -- df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd         98G   38G   61G  39% /bk/ledgers
/dev/sdc         98G   13G   86G  13% /bk/journal
/dev/sde        9.8G   67M  9.7G   1% /bk/index

Please incorporate these parameter changes into your Pravega manifests to prevent this issue from appearing again.