We may send a pull request to fix this. We can initialize the RdmaNode by getting the address from the environment variable RDMA_IP, or something similar, instead of from the hostname.
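A minimal sketch of the idea, assuming every node derives RDMA_IP from its RoCE/IB interface (the interface name and where this gets sourced are our assumptions, not part of SparkRDMA):
# on every node, e.g. in a script sourced by the NodeManager environment
export RDMA_INTERFACE=ib0
# first IPv4 address on the RDMA interface
export RDMA_IP=$(/sbin/ip -4 -o addr show "$RDMA_INTERFACE" | awk '{print $4}' | cut -d/ -f1)
The patched RdmaNode would then bind to RDMA_IP when it is set and fall back to the hostname otherwise.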
LOCAL_SPARK_IP -> export SPARK_LOCAL_IP=$RDMA_IP
Thanks @petro-rudenko . export SPARK_LOCAL_IP=$RDMA_IP is exactly what we added in spark-env.sh, and it did not work in yarn-cluster mode.
According to the Spark source code, SPARK_LOCAL_IP is used by the standalone cluster and by the driver in yarn-client mode, but not in yarn-cluster mode. We need extra configuration to point the workers in the YARN cluster at the RDMA NIC IPs.
We tested this by adding both the actual IP and an arbitrary wrong IP in spark-env.sh and submitting in yarn-cluster mode. Every configuration "works" because the driver in the YARN container never uses SPARK_LOCAL_IP, and it cannot even print its value.
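For reference, a minimal sketch of the check we ran (the address below is deliberately bogus and the application class is only a placeholder):
# append a deliberately wrong address on the submitting host
echo 'export SPARK_LOCAL_IP=10.255.255.1' >> "$SPARK_HOME/conf/spark-env.sh"
# submit in yarn-cluster mode; the job still runs, so the value is clearly never consulted
spark-submit --master yarn --deploy-mode cluster --class com.example.App app.jar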
Can you try setting this in the Spark configuration:
spark.executorEnv.SPARK_LOCAL_IP=$RDMA_IP
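For example, on the spark-submit command line (just a sketch; spark.executorEnv.* entries should be exported into each executor's environment):
spark-submit --master yarn --deploy-mode cluster \
  --conf spark.executorEnv.SPARK_LOCAL_IP=$RDMA_IP \
  ...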
I have tried it and it doesn't work. It seems the Spark executor will not evaluate $RDMA_IP when setting SPARK_LOCAL_IP; the value is substituted from the environment of the submitting host instead of the executor's. The quoting tests below show this (see the sketch after them):
Test with --conf "spark.executorEnv.SPARK_LOCAL_IP=$PATH" uses the local $PATH.
Test with --conf 'spark.executorEnv.SPARK_LOCAL_IP=$PATH' passes the literal string $PATH.
Test with --conf "spark.executorEnv.SPARK_LOCAL_IP=${PATH}" uses the local $PATH.
Anyway, I have edited the source code and can establish the RDMA connection with https://github.com/tobegit3hub/SparkRDMA/commit/aca78f8b6ec73d2b94c9e4f5fe7b95440859602f . But it may use the other IP as described in https://github.com/Mellanox/SparkRDMA/issues/26 , which blocks our test.
Could @petro-rudenko please help to look at the commit and the issue?
Strange. It should source spark-env.sh in all modes:
https://github.com/apache/spark/blob/master/resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala#L240
Can you dump all environment variables on the executors to make sure spark-env.sh is sourced?
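One way to do that from a gateway host (a sketch, assuming yarn client mode works there; it prints, per executor host, whether SPARK_LOCAL_IP is visible in the executor JVM's environment):
spark-shell --master yarn <<'EOF'
sc.parallelize(1 to 100, 10)
  .map(_ => java.net.InetAddress.getLocalHost.getHostName + " -> " + sys.env.getOrElse("SPARK_LOCAL_IP", "<unset>"))
  .distinct()
  .collect()
  .foreach(println)
EOF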
Can you also add an entry to /etc/hosts on each node:
RDMA_IP hostname-rdma
And in spark-env.sh :
export RDMA_INTERFACE="ib0"
# first IPv4 address on the RDMA interface
RDMA_IP=`/sbin/ip addr show $RDMA_INTERFACE | grep "inet\b" | awk '{print $2}' | cut -d/ -f1`
# the hostname-rdma alias: second field of the /etc/hosts entry added above
RDMA_HOST=`grep "$RDMA_IP " /etc/hosts | awk '{print $2}'`
export SPARK_LOCAL_IP=$RDMA_IP
export SPARK_LOCAL_HOSTNAME=$RDMA_HOST
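A quick sanity check on each node after sourcing the file in an interactive shell (illustrative commands only, assuming the ib0 interface and the /etc/hosts entry above):
/sbin/ip addr show "$RDMA_INTERFACE"   # the RoCE/IB address should be listed here
getent hosts "$RDMA_HOST"              # should resolve to $RDMA_IP
echo "$SPARK_LOCAL_IP $SPARK_LOCAL_HOSTNAME"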
Thanks for your great support @petro-rudenko . Now it works when we set the hostname to the RoCE IP, so that Spark and SparkRDMA resolve the hostname to the right IP.
I think export YARN_NODEMANAGER_OPTS="-Dyarn.nodemanager.hostname=$RDMA_IP" would work as well, but SPARK_LOCAL_IP would not, because the Spark executor does not source spark-env.sh and therefore never sees SPARK_LOCAL_IP on the executors.
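An untested sketch of that NodeManager-side alternative (assuming a standard Hadoop layout; the NodeManagers would need a restart to pick it up):
# in $HADOOP_CONF_DIR/yarn-env.sh on every worker, with RDMA_IP derived as in spark-env.sh above
export YARN_NODEMANAGER_OPTS="$YARN_NODEMANAGER_OPTS -Dyarn.nodemanager.hostname=$RDMA_IP"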
Anyway, thanks for the great work on SparkRDMA. It always works with a single RDMA NIC; with multiple NICs you need to resolve the hostname to the RDMA NIC.
We have added the RDMA shuffle manager from SparkRDMA to our Spark applications and submit the jobs in yarn-cluster mode.
Just like https://github.com/Mellanox/SparkRDMA/issues/5 , we got the error "Fail to bind the port". We checked the IP in use, and it is not the expected one of the RoCE NIC. But there is no way to set this IP on the Spark workers.
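Roughly how we submit (a sketch; the jar path and application class are placeholders, and the shuffle-manager class name is the one given in the SparkRDMA README):
spark-submit --master yarn --deploy-mode cluster \
  --conf spark.driver.extraClassPath=/path/to/spark-rdma.jar \
  --conf spark.executor.extraClassPath=/path/to/spark-rdma.jar \
  --conf spark.shuffle.manager=org.apache.spark.shuffle.rdma.RdmaShuffleManager \
  --class com.example.App app.jar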
We have tried adding LOCAL_SPARK_IP in spark-env.sh , but it only works for client mode, while we are using yarn-cluster mode. I'm not sure whether it would work to set the server's hostname or the YARN NodeManager address to this RoCE NIC IP.