The checkstyle failures (in the Full Build test on this PR) seem unrelated to our code - maybe checkstyle is broken in upstream/master?
rerun integration test please
rerun integration test please
@kimoonkim, is there a way to tell if it's running the new or the old setup? Or is it that, if the PR is against master, it will always run the new setup?
We are running the new integration test repo code. I modified the other builds, like unit tests, to exclude the master branch. So going forward, only the integration test will trigger from the master branch.
> if the PR is against master, it will always run the new setup?

Correct.
The failed Jenkins build with the "default" label is actually from the new integration test Jenkins job. I'll find a way to change the label.
rerun integration test please
rerun integration test please
The latest two Jenkins jobs ran the new integration tests. The "Make Distribution" job built a distro tarball off this PR. The "Integration Tests" job ran tests against the tarball. It failed because of a config issue that I just fixed.
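For context, the handoff between the two jobs amounts to pointing the test suite at the tarball that "Make Distribution" produced. A minimal sketch of that wiring, where the property name `spark.kubernetes.test.distroTgz` is an illustrative assumption, not the job's actual configuration:

```scala
import java.nio.file.{Files, Path, Paths}

object DistroResolver {
  // Hypothetical property name; the real Jenkins wiring may pass the
  // tarball location differently.
  private val DistroProp = "spark.kubernetes.test.distroTgz"

  /** Resolve the tarball built by the Make Distribution job, failing fast if absent. */
  def resolveDistroTarball(): Path = {
    val tgz = sys.props.getOrElse(
      DistroProp,
      sys.error(s"$DistroProp must point at the tarball from the Make Distribution job"))
    val path = Paths.get(tgz)
    require(Files.exists(path), s"Distro tarball not found at $path")
    path
  }
}
```

The practical consequence is that the Integration Tests job only ever sees what Make Distribution built from this PR's branch, which is why an outdated branch surfaces later as missing files in the image build.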
@kimoonkim, so we should see the Make Distribution and Integration Tests jobs pass now?
I am hoping the next runs will pass. Getting there.
OK. It seems the latest test failure is genuine. @liyinan926 Can you please take a look? Maybe your branch is outdated and needs to merge apache/spark#20051.
From http://spark-k8s-jenkins.pepperdata.org:8080/job/pr-spark-integration/5/:
```
Discovery starting.
Discovery completed in 145 milliseconds.
Run starting. Expected test count is: 2
KubernetesSuite:
RUN ABORTED
  com.spotify.docker.client.exceptions.DockerException: ProgressMessage{id=null, status=null, stream=null, error=lstat dockerfiles/spark-base/entrypoint.sh: no such file or directory, progress=null, progressDetail=null}
  at com.spotify.docker.client.LoggingBuildHandler.progress(LoggingBuildHandler.java:33)
  at com.spotify.docker.client.DefaultDockerClient.build(DefaultDockerClient.java:1157)
  at org.apache.spark.deploy.k8s.integrationtest.docker.SparkDockerImageBuilder.buildImage(SparkDockerImageBuilder.scala:70)
  at org.apache.spark.deploy.k8s.integrationtest.docker.SparkDockerImageBuilder.buildSparkDockerImages(SparkDockerImageBuilder.scala:64)
  at org.apache.spark.deploy.k8s.integrationtest.backend.minikube.MinikubeTestBackend.initialize(MinikubeTestBackend.scala:31)
  at org.apache.spark.deploy.k8s.integrationtest.KubernetesSuite.beforeAll(KubernetesSuite.scala:42)
  at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
  at org.apache.spark.deploy.k8s.integrationtest.KubernetesSuite.beforeAll(KubernetesSuite.scala:33)
  at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
  at org.apache.spark.deploy.k8s.integrationtest.KubernetesSuite.org$scalatest$BeforeAndAfter$$super$run(KubernetesSuite.scala:33)
  ...
```
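The `lstat ... no such file or directory` error is a Docker build context problem: the Dockerfile references `dockerfiles/spark-base/entrypoint.sh`, but that file is absent from the distro built off this branch. A minimal sketch of the kind of build call the suite makes via the Spotify docker-client (the stack trace above shows `SparkDockerImageBuilder` calling `DefaultDockerClient.build` with a `LoggingBuildHandler`); the context path and image name here are illustrative assumptions:

```scala
import java.nio.file.Paths
import com.spotify.docker.client.{DefaultDockerClient, LoggingBuildHandler}

object BuildSparkBaseImage {
  def main(args: Array[String]): Unit = {
    // Connect using DOCKER_HOST / DOCKER_CERT_PATH from the environment,
    // e.g. the values exported by `minikube docker-env`.
    val docker = DefaultDockerClient.fromEnv().build()

    // Hypothetical context path; in the real suite this is the unpacked
    // distro directory. Every file the Dockerfile COPYs (including
    // dockerfiles/spark-base/entrypoint.sh) must exist under this
    // directory, or the daemon fails with the lstat error above.
    val buildContext = Paths.get("/tmp/spark-distro/kubernetes")

    docker.build(buildContext, "spark-base", new LoggingBuildHandler())
    docker.close()
  }
}
```

Since the build context comes from the distro tarball, rebasing onto upstream/master (where `entrypoint.sh` exists) is what makes the COPY succeed.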
Might be best to rebase onto upstream/master
Rebased onto latest upstream/master.
Integration test has passed now!
rerun integration tests please
rerun integration tests please
Closing as the upstream has been merged.
This is the same PR as https://github.com/apache/spark/pull/19954, but against our fork for triggering integration tests.
@kimoonkim @foxish