gerritjvv / bigstreams

bigstreams big data kafka hadoop and file based imports
Eclipse Public License 1.0
3 stars 3 forks

OutOfMemoryError in collector should cause collector to fail #77

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
When an OutOfMemoryError is caught while using the Compressor/Decompressor pool, the
collector should call System.exit and fail the application.

Otherwise the application keeps running but is never able to react to any new
requests.

java.io.IOException: java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Unknown Source)
        at java.nio.DirectByteBuffer.<init>(Unknown Source)
        at java.nio.ByteBuffer.allocateDirect(Unknown Source)
        at com.hadoop.compression.lzo.LzoDecompressor.<init>(LzoDecompressor.java:186)
        at com.hadoop.compression.lzo.LzopDecompressor.<init>(LzopDecompressor.java:36)
        at com.hadoop.compression.lzo.LzopCodec.createDecompressor(LzopCodec.java:130)
        at org.streams.commons.compression.impl.CompressionPoolImpl.create(CompressionPoolImpl.java:156)
        at org.streams.commons.io.impl.ProtocolImpl.read(ProtocolImpl.java:104)
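A minimal sketch of the fail-fast behaviour this issue asks for. The class and method names below are illustrative, not the actual bigstreams API; the idea is to detect an OutOfMemoryError anywhere in a throwable's cause chain (the trace above shows it wrapped in an IOException) and terminate the JVM instead of limping on:

```java
// Hypothetical helper; names are illustrative, not the real bigstreams API.
public final class FatalErrorHandler {

    private FatalErrorHandler() {}

    /** Returns true if t, or any of its causes, is an OutOfMemoryError. */
    public static boolean isOutOfMemory(Throwable t) {
        while (t != null) {
            if (t instanceof OutOfMemoryError) {
                return true;
            }
            t = t.getCause();
        }
        return false;
    }

    /** Fail fast: after an OOM the JVM is in an unusable state, so exit. */
    public static void exitOnOutOfMemory(Throwable t) {
        if (isOutOfMemory(t)) {
            // Log before exiting so operators can see why the collector died.
            System.err.println("FATAL: OutOfMemoryError in compression pool: " + t);
            System.exit(-1);
        }
    }
}
```

A call site such as CompressionPoolImpl.create could then wrap decompressor creation in a try/catch for Throwable and invoke exitOnOutOfMemory before rethrowing, so the collector dies loudly rather than hanging.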

Original issue reported on code.google.com by gerritjvv@gmail.com on 2 Jun 2013 at 11:58

GoogleCodeExporter commented 8 years ago

Original comment by gerritjvv@gmail.com on 29 Oct 2013 at 9:17