pravega / pravega-benchmark

Performance benchmark tool for Pravega
Apache License 2.0

Issue 52: Don't create scope or scale stream #93

Closed maddisondavid closed 4 years ago

maddisondavid commented 4 years ago

Change log description
Adds two new flags that allow better control over whether the scope is created and whether the stream is modified (scaled). This allows the benchmark to use existing streams safely without performing any modifications.

Purpose of the change
Fixes #52

What the code does
The benchmark attempts to create the scope and stream before running the test; however, in some instances the stream already exists (and in some environments it is not valid to create one). This PR introduces the -createScope flag to resolve this; it defaults to true for backward compatibility.

It also allows -segments -1 to be specified, indicating that the number of segments of an existing stream should not be modified. The resulting setup flow is sketched below.
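Roughly, the new flags gate scope creation and stream creation/scaling as in the following sketch. This is not the PR's actual code; the class, method, and parameter names are illustrative assumptions, but the StreamManager, StreamConfiguration, and ScalingPolicy calls are the standard Pravega client API.

    import io.pravega.client.admin.StreamManager;
    import io.pravega.client.stream.ScalingPolicy;
    import io.pravega.client.stream.StreamConfiguration;

    import java.net.URI;

    // Illustrative sketch of how -createScope and -segments could gate setup.
    public final class StreamSetupSketch {

        public static void prepare(URI controllerUri, String scope, String stream,
                                   boolean createScope, int segments) {
            try (StreamManager streamManager = StreamManager.create(controllerUri)) {
                if (createScope) {
                    // Skipped entirely with -createScope false.
                    streamManager.createScope(scope);
                }
                if (segments > 0) {
                    // -segments -1 skips both creation and scaling of the stream.
                    StreamConfiguration config = StreamConfiguration.builder()
                            .scalingPolicy(ScalingPolicy.fixed(segments))
                            .build();
                    if (!streamManager.createStream(scope, stream, config)) {
                        // Stream already exists: scale it to the requested segment count.
                        streamManager.updateStream(scope, stream, config);
                    }
                }
            }
        }
    }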

It also switches to lazy initialization of the StreamConfiguration; otherwise, even when consuming an existing stream with no modifications (using the flags above), the benchmark still expects a -segments argument to be passed in. With this change the StreamConfiguration is only calculated when it is needed.
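A minimal sketch of the lazy approach, assuming a build-on-first-use holder (the class and field names are illustrative, not the PR's implementation):

    import io.pravega.client.stream.ScalingPolicy;
    import io.pravega.client.stream.StreamConfiguration;

    // Builds the StreamConfiguration only when it is actually required.
    final class LazyStreamConfig {

        private final int segments;          // value of the -segments argument
        private StreamConfiguration cached;  // built on first use only

        LazyStreamConfig(int segments) {
            this.segments = segments;
        }

        // Only reached when the stream has to be created or scaled, so a
        // plain read of an existing stream never validates -segments.
        StreamConfiguration get() {
            if (cached == null) {
                if (segments <= 0) {
                    throw new IllegalArgumentException(
                            "-segments must be supplied when creating or scaling a stream");
                }
                cached = StreamConfiguration.builder()
                        .scalingPolicy(ScalingPolicy.fixed(segments))
                        .build();
            }
            return cached;
        }
    }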

 -createScope <arg>                  Indicates whether the scope should be
                                     created (true by default).
 -segments <arg>                     Number of segments. If stream
                                     auto-scaling is enabled, this is the
                                     initial number of segments. If the
                                     stream already exists, -1 indicates
                                     that its configuration should not be
                                     modified.

How to verify
Create a benchmark test consuming an existing stream with -createScope false and -scaleStream false; -segments is no longer required.

>pravega-benchmark -controller tcp://localhost:9090 -consumers 3 -scope test -stream test1 -createScope false -time 10
...
...
...
2020-01-15 13:00:58:175 +0000 [ForkJoinPool-1-worker-1] INFO io.pravega.perf.PerfStats - 26844 records Reading, 2454.421 records/sec, 0 bytes record size, 0.94 MiB/sec, 0.0 ms avg latency, 84.0 ms max latency, 0 ms 50th, 0 ms 75th, 0 ms 95th, 1 ms 99th, 1 ms 99.9th, 6 ms 99.99th.

Note that if you specify -segments -1 for a stream that does not yet exist, the benchmark will throw an error when attempting to access the stream, since -segments -1 tells it to skip creating or scaling the stream.

>pravega-benchmark -controller tcp://localhost:9090 -producers 3 -scope dave -stream dave5 -createScope false -segments -1 -time 10 -size 400
2020-01-15 13:53:51:705 +0000 [main] INFO io.pravega.client.stream.impl.ControllerImpl - Controller client connecting to server at localhost:9090
2020-01-15 13:53:51:708 +0000 [main] INFO io.pravega.client.stream.impl.ControllerImpl - Controller client connecting to server at localhost:9090
2020-01-15 13:53:51:715 +0000 [main] WARN io.pravega.client.netty.impl.ConnectionPoolImpl - Epoll not available. Falling back on NIO.
2020-01-15 13:53:51:720 +0000 [main] WARN io.pravega.client.netty.impl.ConnectionPoolImpl - Epoll not available. Falling back on NIO.
2020-01-15 13:53:51:731 +0000 [main] INFO io.pravega.client.stream.impl.ClientFactoryImpl - Creating writer: 3e65fb2c-6c59-42c4-9cdb-18862528fea3 for stream: dave5 with configuration: EventWriterConfig(initalBackoffMillis=1, maxBackoffMillis=20000, retryAttempts=10, backoffMultiple=10, enableConnectionPooling=true, transactionTimeoutTime=89999, automaticallyNoteTime=false)
2020-01-15 13:53:51:737 +0000 [main] INFO io.pravega.client.stream.impl.SegmentSelector - Refreshing segments for stream StreamImpl(scope=dave, streamName=dave5)
2020-01-15 13:53:51:782 +0000 [fetch-controllers-1] INFO io.pravega.client.stream.impl.ControllerResolverFactory - Attempting to refresh the controller server endpoints
2020-01-15 13:53:51:786 +0000 [fetch-controllers-1] INFO io.pravega.client.stream.impl.ControllerResolverFactory - Updating client with controllers: [[[localhost/127.0.0.1:9090]/{}]]
2020-01-15 13:53:51:934 +0000 [fetch-controllers-1] INFO io.pravega.client.stream.impl.ControllerResolverFactory - Rescheduling ControllerNameResolver task for after 120000 ms
2020-01-15 13:53:51:999 +0000 [grpc-default-executor-0] WARN io.pravega.client.stream.impl.ControllerImpl - gRPC call for getCurrentSegments with trace id 0 failed with server error.
io.grpc.StatusRuntimeException: NOT_FOUND: /store/dave/dave5/state
        at io.grpc.Status.asRuntimeException(Status.java:530)
        at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:434)
        at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
        at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
        at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
        at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
        at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
        at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
        at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:694)
        at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
        at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
        at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
        at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:397)
        at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
        at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
        at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
        at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
        at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Signed-off-by: David Maddison <david.maddison@dell.com>

maddisondavid commented 4 years ago

Typos fixed, ready for merging. We can address @claudiofahey's comment about deleteReaderGroup catching the exception in another PR.