gettyimages / docker-spark

Docker build for Apache Spark
MIT License

"100M" Error reading csv file from S3 #66

Closed alanchn31 closed 4 years ago

alanchn31 commented 4 years ago

I am running the command `spark-submit load_ratings.py --conf "fs.s3a.multipart.size=104857600"`, and my Python script fails with the following error:

```
py4j.protocol.Py4JJavaError: An error occurred while calling o39.csv.
: java.lang.NumberFormatException: For input string: "100M"
```

Full error trace (the repeated Airflow `[timestamp] {bash_operator.py:126} INFO -` log prefixes have been stripped for readability):

```
Traceback (most recent call last):
  File "/usr/local/airflow/dags/python_scripts/load_ratings.py", line 52, in <module>
    schema=ratings_schema)
  File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 472, in csv
  File "/usr/spark-2.4.1/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/usr/spark-2.4.1/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o30.csv.
: java.lang.NumberFormatException: For input string: "100M"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Long.parseLong(Long.java:589)
    at java.lang.Long.parseLong(Long.java:631)
    at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1499)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:248)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:547)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:355)
    at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:615)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
```
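The `Configuration.getLong` frame in the trace seems to be the crux: if I understand the Hadoop code correctly, the hadoop-aws 2.7.x line parses `fs.s3a.multipart.size` with `Long.parseLong`, which only accepts a plain number, while newer Hadoop versions also accept human-readable size suffixes such as `100M`. A minimal sketch of the failing parse (my own illustration in Python, not actual Hadoop code):

```python
def parse_long(value: str) -> int:
    """Mimic java.lang.Long.parseLong: plain digits only, no size suffixes."""
    return int(value)  # raises ValueError for "100M", akin to NumberFormatException


print(parse_long("104857600"))  # a plain byte count parses fine

try:
    parse_long("100M")  # the human-readable form used by newer Hadoop defaults
except ValueError as exc:
    print("rejected:", exc)
```

So any configuration source that feeds the literal string `"100M"` to the old parser triggers exactly this `NumberFormatException`.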

I am not sure what happened here, but I am guessing `fs.s3a.multipart.size` should not be set to "100M". I already tried overriding it in my spark-submit command, but the override was somehow still not picked up.
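One possible reason the override never took effect (a guess based on the command as written): spark-submit treats everything after the application script as arguments to the script itself, and a Hadoop option set via `--conf` generally needs the `spark.hadoop.` prefix to be forwarded to the Hadoop configuration. A sketch of an invocation along those lines:

```shell
# Flags must come before the application script; anything after
# load_ratings.py is passed to the script as its own argv.
# The spark.hadoop. prefix forwards the option into the Hadoop Configuration.
spark-submit \
  --conf "spark.hadoop.fs.s3a.multipart.size=104857600" \
  load_ratings.py
```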

Link to my code repo: https://github.com/alanchn31/Udacity-DE-ND-Capstone

alanchn31 commented 4 years ago

Issue is resolved. I added the jar files in my Dockerfile:

```dockerfile
# Add hadoop-aws to access Amazon S3
ADD https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.7.5/hadoop-aws-2.7.5.jar $SPARK_HOME/jars
# Add postgres JDBC jar driver
ADD https://jdbc.postgresql.org/download/postgresql-42.2.12.jar $SPARK_HOME/jars
```

But these jar files were already present in my Hadoop lib folder at an older version, causing a versioning conflict: the environment was using the older jars.
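In case it helps anyone hitting the same conflict, here is a small helper (names and path are my own, purely illustrative) that scans a jars directory and reports any artifact present in more than one version:

```python
import re
from collections import defaultdict
from pathlib import Path


def find_version_conflicts(jar_dir):
    """Group *.jar files by artifact name and return the artifacts that
    appear in more than one version (e.g. hadoop-aws-2.7.2 vs 2.7.5)."""
    versions = defaultdict(set)
    for jar in Path(jar_dir).glob("*.jar"):
        # Split "name-1.2.3.jar" into artifact name and version string.
        match = re.match(r"(.+?)-(\d[\w.]*)\.jar$", jar.name)
        if match:
            artifact, version = match.groups()
            versions[artifact].add(version)
    return {name: sorted(v) for name, v in versions.items() if len(v) > 1}


# Example (path is illustrative): find_version_conflicts("/usr/spark-2.4.1/jars")
```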