aws / amazon-kinesis-video-streams-parser-library

The Amazon Kinesis Video Streams parser library is for developers to include in their applications; it makes it easy to work with the output of video streams, such as retrieving frame-level objects, metadata for fragments, and more.
Apache License 2.0

GC overhead limit exceeded #98

Closed GilShalev2017 closed 4 years ago

GilShalev2017 commented 4 years ago

Hi guys,

Sometimes I'm getting the above exception (I cannot tell the exact scenario) when pulling from a Kinesis video stream and performing some processing on the frames.

Can you point me to possible reasons for it? Could it be because of bad data?

The stack trace is as follows:

```
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at com.intel.cloudfreed.decoderworker.App.main(App.java:148)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.Arrays.copyOf(Arrays.java:3236)
	at java.io.ByteArrayOutputStream.toByteArray(ByteArrayOutputStream.java:191)
	at com.sun.crypto.provider.GaloisCounterMode.decryptFinal(GaloisCounterMode.java:553)
	at com.sun.crypto.provider.CipherCore.finalNoPadding(CipherCore.java:1049)
	at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:985)
	at com.sun.crypto.provider.AESCipher.engineDoFinal(AESCipher.java:491)
	at javax.crypto.Cipher.doFinal(Cipher.java:2376)
	at sun.security.ssl.CipherBox.decrypt(CipherBox.java:461)
	at sun.security.ssl.InputRecord.decrypt(InputRecord.java:172)
	at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1025)
	at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:940)
	at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
	at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
	at org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
	at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:189)
	at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
	at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
	at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
	at com.amazonaws.kinesisvideo.parser.ebml.InputStreamParserByteSource.eof(InputStreamParserByteSource.java:78)
	at com.amazonaws.kinesisvideo.parser.mkv.StreamingMkvReader.mightHaveNext(StreamingMkvReader.java:95)
	at com.amazonaws.kinesisvideo.parser.mkv.StreamingMkvReader.apply(StreamingMkvReader.java:129)
	at com.intel.cloudfreed.decoderworker.DecoderWorker.run(DecoderWorker.java:99)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
```

Your help is much needed.

MushMal commented 4 years ago

This looks more like a real out-of-memory issue. Do you have any details on the GC memory configuration? What's the stream density? Are you parsing multiple streams/fragments in parallel, assuming the issue really is the VM memory?
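As a starting point for answering questions like these, the JVM's effective memory configuration and active garbage collectors can be inspected at runtime with the standard `Runtime` and `ManagementFactory` APIs. A minimal stdlib-only sketch (not part of the parser library):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class JvmMemoryInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Maximum heap the JVM will attempt to use (the -Xmx setting), in MiB.
        System.out.println("max heap (MiB):   " + rt.maxMemory() / (1024 * 1024));
        // Heap currently reserved from the OS.
        System.out.println("total heap (MiB): " + rt.totalMemory() / (1024 * 1024));
        // Unused portion of the reserved heap.
        System.out.println("free heap (MiB):  " + rt.freeMemory() / (1024 * 1024));
        // Which collectors are active, identifying the GC in use.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC: " + gc.getName()
                    + ", collections: " + gc.getCollectionCount());
        }
    }
}
```

Logging these values when the worker starts makes it easy to confirm whether the process actually has the heap you think it has.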

GilShalev2017 commented 4 years ago

Yes, it is an out-of-memory error, but the question is why it is happening. Can it be because of "bad data" / "faulted MKV fragments"? I'm parsing just one KVS stream, one frame at a time. The stream density is 120 Mbps. After consuming, I'm sending the frames to a C++ decoder whose library (libav) runs several threads. Regarding the GC, we are using the default one.

Can it be that at a certain moment of the streaming we get a very fast push of "damaged/faked frames" that causes our data structures to allocate too many buffers at once, producing a sharp memory surge that crashes our application?
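One common way to rule out that kind of surge, regardless of how fast frames arrive, is to hand frames from the parser thread to the decoder thread through a fixed-capacity queue, so the parser blocks (backpressure) instead of allocating without bound. A minimal stdlib-only sketch; the class name, the `byte[]` frame representation, and the decoder hand-off are hypothetical placeholders, not the parser library's API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedFramePipeline {
    // At most 64 frames are buffered; put() blocks once the queue is full,
    // so a burst of incoming frames cannot trigger unbounded allocation.
    private final BlockingQueue<byte[]> frames = new ArrayBlockingQueue<>(64);

    // Called from the parser thread: blocks when the decoder falls behind.
    public void onFrame(byte[] frameData) throws InterruptedException {
        frames.put(frameData);
    }

    // Number of frames currently waiting for the decoder.
    public int pendingFrames() {
        return frames.size();
    }

    // Decoder loop, run on its own thread: drains one frame at a time.
    public void decodeLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            byte[] frame = frames.take();
            decode(frame);
        }
    }

    private void decode(byte[] frame) {
        // Placeholder for the hand-off to the native (libav) decoder.
    }
}
```

With this structure the worst case is a stalled parser, which surfaces as latency rather than as an `OutOfMemoryError`.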

MushMal commented 4 years ago

@GilShalev2017 I see you've closed this issue. Could you please include more information on what the JVM memory configuration is? There are numerous tools you can use to get more information on the available memory. The parser library does allocate memory to store the running elements, but that should be bounded. Failed/dropped frames on the producer side might cause a replay of the previous fragment(s) to ensure delivery, which could cause a temporary spike in the density of the data. Since this failure is in SSL, deep in the networking stack, I doubt that "broken" data is causing a large allocation.

MushMal commented 4 years ago

Resolving this issue as no further action seems to be needed. Please reopen if needed and provide more details.