qubole / spark-on-lambda

Apache Spark on AWS Lambda
Apache License 2.0

s3a error #8

Open webroboteu opened 5 years ago

webroboteu commented 5 years ago

In the attached example, the following problem appears; it seems to be related to how Spark manages shuffle data in the S3 context. Can you confirm that the problem occurs, or is it a configuration problem on my side?

ShuffleExample.scala.zip

webroboteu commented 5 years ago

This is the main error in CloudWatch:

Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid directory for output-
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext$DirSelector.getPathForWrite(LocalDirAllocator.java:541)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:627)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:640)
    at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:221)
    at org.apache.hadoop.fs.s3a.S3AOutputStream.<init>(S3AOutputStream.java:91)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:736)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:914)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:895)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:792)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:781)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.writePartitionedFileToS3(BypassMergeSortShuffleWriter.java:269)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.writePartitionedFile(BypassMergeSortShuffleWriter.java:223)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:200)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:6

webroboteu commented 5 years ago

If it helps, I can provide the configuration information I entered, but I followed the documentation.

venkata91 commented 5 years ago

Hey @webroboteu, I remember facing this issue during development. I wanted to get back to it and fix it properly, but if you enable s3a fast upload it should work fine. Can you try setting the flag spark.hadoop.fs.s3a.fast.upload to true, if it's not already set? Also, there have been a lot of changes in the Lambda environment between then and now, like the private VPC and things like that, which you might have seen in the other issues. Let me know if this works.
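For reference, a minimal sketch of two standard ways to set that property (assuming a stock Spark launch script; adjust for your deployment):

# pass it at launch time
./bin/spark-shell --conf spark.hadoop.fs.s3a.fast.upload=true

# or persist it in conf/spark-defaults.conf
spark.hadoop.fs.s3a.fast.upload  true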

webroboteu commented 5 years ago

I had already tried these parameters without success. Out of desperation, I was now thinking of bypassing the Hadoop interface and managing the stream directly. Is the email you posted on your LinkedIn profile the right one? I would like to add you to my network to discuss the project.

webroboteu commented 5 years ago

I'll try again and let you know.

webroboteu commented 5 years ago

If I want to recompile it, you suggest using your Hadoop version 2.6.0-qds-0.4.13, but without a reference to your repository. Can you suggest something for version 2.8, for example?

venkata91 commented 5 years ago

Right. But you can just compile with the existing open-source Hadoop 2.6.0 version and copy the hadoop-aws jar into your binary afterwards; that should work as well. This is a comment I added in another issue: Compiling #2.
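A minimal sketch of that jar-copy approach, assuming the standard jars/ directory layout of a built Spark distribution (the hadoop-aws version should match the Hadoop you compiled against):

# download the hadoop-aws jar matching your Hadoop version and drop it into the distribution
wget http://central.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.6.0/hadoop-aws-2.6.0.jar
cp hadoop-aws-2.6.0.jar $SPARK_HOME/jars/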

Another, easier workaround is to remove the pom.xml additions, basically reverting the commit "Fix pom.xml to have the other Qubole repository location having 2.6.0... (2ca6c68)".
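A minimal sketch of that revert, assuming the abbreviated hash above resolves in your clone of the repo:

# undo the pom.xml changes that point at the Qubole repository
git revert --no-edit 2ca6c68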

Build your package using this command:

./dev/make-distribution.sh --name spark-lambda-2.1.0 --tgz -Phive -Phadoop-2.7 -DskipTests

And finally, add the jars below to the classpath before starting spark-shell (one possible launch command is sketched after the reference link):

1. wget http://central.maven.org/maven2/com/amazonaws/aws-java-sdk/1.7.4/aws-java-sdk-1.7.4.jar
2. wget http://central.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.7.3/hadoop-aws-2.7.3.jar
Refer here - https://markobigdata.com/2017/04/23/manipulating-files-from-s3-with-apache-spark/
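For example, one possible way to put those two jars on the classpath when launching spark-shell (a sketch assuming the jars were downloaded into the current directory; paths may vary with your setup):

./bin/spark-shell --jars aws-java-sdk-1.7.4.jar,hadoop-aws-2.7.3.jar
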
webroboteu commented 5 years ago

Recompiling as you suggest, I get the following error:

Exception in thread "dag-scheduler-event-loop" java.lang.NoSuchMethodError: com.amazonaws.http.AmazonHttpClient.disableStrictHostnameVerification()

webroboteu commented 5 years ago

I have a repository with a Docker image: https://github.com/webroboteu/sparklambdadriver. I'm using Hadoop version 2.7 and its dependencies.

webroboteu commented 5 years ago

With Hadoop 2.9 and the AWS SDK bundle 1.11.199, using the Docker lines below, there is progress, but I still have to confirm that it works in the Lambda context.

RUN wget http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.199/aws-java-sdk-bundle-1.11.199.jar
RUN wget http://central.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.9.0/hadoop-aws-2.9.0.jar
RUN rm /$SPARK_HOME/jars/aws*.jar
RUN mv aws-java-sdk-bundle-1.11.199.jar /$SPARK_HOME/jars
RUN mv hadoop-aws-2.9.0.jar /$SPARK_HOME/jars

webroboteu commented 5 years ago

With local execution I now have this problem:

java.lang.NullPointerException
    at org.apache.spark.util.Utils$.localFileToS3(Utils.scala:2517)
    at org.apache.spark.shuffle.S3ShuffleBlockResolver.writeIndexFileAndCommit(S3ShuffleBlockResolver.scala:177)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:158)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)

I'll keep you updated.

webroboteu commented 5 years ago

I'm heading in the right direction, since I can now recompile it correctly. For some strange reason it tries to load the data from the same executorId, 4775351731:

java.io.FileNotFoundException: No such file or directory: s3://webroboteuquboleshuffle/tmp/executor-driver-4775351731/30/shuffle_0_0_0.index