2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - 2024-08-13 11:52:04 ERROR Worker [dispatcher-event-loop-1]:94 - Failed to start external shuffle service
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - java.net.BindException: Address already in use
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at java.base/sun.nio.ch.Net.bind0(Native Method)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at java.base/sun.nio.ch.Net.bind(Net.java:459)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at java.base/sun.nio.ch.Net.bind(Net.java:448)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:562)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:260)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
2024-08-13 11:52:04 INFO WorkerLauncher$ [Thread-3]:197 - at java.base/java.lang.Thread.run(Thread.java:829)
2024-08-13 11:52:04 WARN WorkerLauncher$ [Spark Thread]:157 - Spark exit value: 1
2024-08-13 11:52:07 INFO DiscoveryService$ [main]:54 - Waiting for spark component address in file worker_address, retry (2)
2024-08-13 11:52:07 INFO DiscoveryService$ [main]:52 - Waiting for spark component address in file worker_address, sleep 5 seconds before next retry
2024-08-13 11:52:12 INFO DiscoveryService$ [main]:54 - Waiting for spark component address in file worker_address, retry (3)
2024-08-13 11:52:12 INFO DiscoveryService$ [main]:52 - Waiting for spark component address in file worker_address, sleep 5 seconds before next retry
2024-08-13 11:52:17 INFO DiscoveryService$ [main]:54 - Waiting for spark component address in file worker_address, retry (4)
2024-08-13 11:52:17 INFO DiscoveryService$ [main]:52 - Waiting for spark component address in file worker_address, sleep 5 seconds before next retry
...
If one of the Spark components has definitely failed, e.g. exited with a non-zero code, it is important to fail the whole YT job as soon as possible for faster recovery. Currently, the discovery service makes 60 retries with a 5-second backoff, which prolongs the lifetime of the job by 5 minutes and makes these crash loops very long in case of transient issues (e.g. a failure to bind to a port).
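One possible direction, shown only as a minimal sketch with hypothetical names (waitForAddress, SparkComponentFailed and the exitCode callback are illustrative, not the real spyt DiscoveryService API): have the wait loop poll the launched Spark process before each retry and bail out as soon as it is known to have exited with a non-zero code, instead of always burning through 60 retries of 5 seconds.

```scala
import java.nio.file.{Files, Paths}
import scala.annotation.tailrec

object DiscoveryWaitSketch {
  // Raised when the Spark component is already dead, so the whole YT job can fail fast.
  final case class SparkComponentFailed(exitCode: Int)
    extends RuntimeException(s"Spark component already exited with code $exitCode")

  /** Waits for the component address file to appear, but aborts immediately
   *  if `exitCode()` reports that the launched process has terminated with a
   *  non-zero code (e.g. backed by Process.exitValue of the worker process). */
  def waitForAddress(addressFile: String,
                     exitCode: () => Option[Int],
                     retries: Int = 60,
                     backoffSeconds: Int = 5): String = {
    @tailrec
    def loop(attempt: Int): String = {
      // Check the process state before every sleep, not only after all retries.
      exitCode() match {
        case Some(code) if code != 0 => throw SparkComponentFailed(code)
        case _                       => ()
      }
      val path = Paths.get(addressFile)
      if (Files.exists(path)) {
        new String(Files.readAllBytes(path)).trim
      } else if (attempt >= retries) {
        throw new RuntimeException(
          s"Address file $addressFile did not appear after $retries retries")
      } else {
        Thread.sleep(backoffSeconds * 1000L)
        loop(attempt + 1)
      }
    }
    loop(1)
  }
}
```

The key point is that the exit-code check runs before every backoff interval, so a component that dies right at startup (as with the BindException above, where "Spark exit value: 1" is already logged) would fail the job within a single 5-second interval rather than after 5 minutes.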