saratvemulapalli closed this issue 2 years ago
Looks like we are doing the right thing and not allowing reflection access. See https://github.com/netty/netty/issues/7817
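For context, here is a minimal sketch of how the Netty reflection switches can be kept off for the :run task. This is illustrative only, not the actual OpenSearch build configuration: the property names io.netty.tryReflectionSetAccessible and io.netty.noUnsafe are real Netty flags, but wiring them onto the run task this way is an assumption.

```groovy
// build.gradle -- illustrative sketch only, not the OpenSearch build file.
plugins {
    id 'application'
}

tasks.named('run') {
    // Ask Netty not to call setAccessible() on JDK internals, which is what
    // triggers the "illegal reflective access" style warnings on JDK 9+.
    systemProperty 'io.netty.tryReflectionSetAccessible', 'false'
    // More conservative option: skip sun.misc.Unsafe-based optimizations entirely.
    systemProperty 'io.netty.noUnsafe', 'true'
}
```

Netty reads these properties when its PlatformDependent classes initialize, so they need to be set on the JVM that ./gradlew run forks, not after startup.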
Changes are merged. Here is how it's going to look:
./gradlew run
> Task :run
21:58:16.284 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [force_merge], size [1], queue size [unbounded]
21:58:16.288 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [fetch_shard_started], core [1], max [24], keep alive [5m]
21:58:16.289 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [listener], size [6], queue size [unbounded]
21:58:16.289 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [refresh], core [1], max [6], keep alive [5m]
21:58:16.292 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [system_write], size [5], queue size [1k]
21:58:16.292 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [generic], core [4], max [128], keep alive [30s]
21:58:16.292 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [warmer], core [1], max [5], keep alive [5m]
21:58:16.294 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [search], size [19], queue size [1k]
21:58:16.294 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [flush], core [1], max [5], keep alive [5m]
21:58:16.295 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [fetch_shard_store], core [1], max [24], keep alive [5m]
21:58:16.295 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [management], core [1], max [5], keep alive [5m]
21:58:16.295 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [get], size [12], queue size [1k]
21:58:16.295 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [analyze], size [1], queue size [16]
21:58:16.295 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [system_read], size [5], queue size [2k]
21:58:16.296 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [write], size [12], queue size [10k]
21:58:16.296 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [snapshot], core [1], max [5], keep alive [5m]
21:58:16.296 [main] DEBUG org.opensearch.threadpool.ThreadPool - created thread pool: name [search_throttled], size [1], queue size [100]
21:58:17.167 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 24
21:58:17.167 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 24
21:58:17.167 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
21:58:17.167 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 9
21:58:17.167 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 4194304
21:58:17.168 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
21:58:17.168 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
21:58:17.168 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
21:58:17.168 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
21:58:17.168 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimIntervalMillis: 0
21:58:17.168 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: false
21:58:17.168 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
21:58:17.188 [main] INFO org.opensearch.transport.NettyAllocator - creating NettyAllocator with the following configs: [name=opensearch_configured, chunk_size=256kb, suggested_max_allocation_size=256kb, factors={opensearch.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=1mb}]
21:58:17.223 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 24
21:58:17.245 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
21:58:17.245 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
21:58:17.274 [main] DEBUG org.opensearch.transport.netty4.Netty4Transport - using profile[default], worker_count[12], port[4532], bind_host[[127.0.0.1]], publish_host[[]], receive_predictor[64kb->64kb]
21:58:17.289 [main] DEBUG org.opensearch.transport.TcpTransport - binding server bootstrap to: [127.0.0.1]
21:58:17.303 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 21523 (auto-detected)
21:58:17.314 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 3c:22:fb:ff:fe:88:f1:1c (auto-detected)
21:58:17.336 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
21:58:17.336 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
21:58:17.336 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
21:58:17.357 [main] DEBUG org.opensearch.transport.TcpTransport - Bound profile [default] to address {127.0.0.1:4532}
21:58:17.361 [main] INFO org.opensearch.transport.TransportService - publish_address {127.0.0.1:4532}, bound_addresses {127.0.0.1:4532}
What is the bug?
Running ./gradlew run spits out warnings in the logs.
How can one reproduce the bug?
1. Check out the feature/extensions branch.
2. Publish to Maven Local.
3. ./gradlew run
What is the expected behavior?
Well, it should run without warnings :)
What is your host/environment?
macOS