Open littletiger123 opened 3 years ago
@littletiger123 sparkConf is for valid Spark configuration options, e.g.:

spec:
  ...
  sparkConf:
    spark.eventLog.dir: "s3a://my_bucket_name/eventLogFolder"
    spark.eventLog.enabled: "true"
    spark.sql.catalogImplementation: "hive"
    spark.hadoop.fs.s3a.connection.ssl.enabled: "true"
    spark.hadoop.fs.s3a.endpoint: https://s3.us-west-2.amazonaws.com
    spark.hadoop.fs.s3a.fast.upload: "true"
    spark.hadoop.fs.s3a.impl: org.apache.hadoop.fs.s3a.S3AFileSystem
    spark.hadoop.fs.s3a.path.style.access: "true"
    spark.hadoop.hive.input.format: io.delta.hive.HiveInputFormat
    spark.hadoop.hive.metastore.client.connect.retry.delay: "5"
    spark.hadoop.hive.metastore.client.socket.timeout: "1800"
    spark.hadoop.hive.metastore.uris: "{{params.hive_metastore_uri}}"
    spark.hadoop.hive.tez.input.format: io.delta.hive.HiveInputFormat
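To make the distinction concrete: entries under sparkConf end up as --conf key=value pairs on the spark-submit command line. A minimal Python sketch of that mapping (the conf_to_submit_args helper is hypothetical, for illustration only, not the operator's actual code):

```python
# Hypothetical sketch: how spec.sparkConf entries map onto
# spark-submit "--conf key=value" flags. Illustrative only.
def conf_to_submit_args(spark_conf):
    args = []
    for key, value in sorted(spark_conf.items()):
        args += ["--conf", f"{key}={value}"]
    return args

cmd = ["spark-submit"] + conf_to_submit_args({
    "spark.eventLog.enabled": "true",
    "spark.eventLog.dir": "s3a://my_bucket_name/eventLogFolder",
})
print(" ".join(cmd))
```

Anything that is not a recognized spark.* property does not belong here, which is why sparkConf is the wrong place for application-level parameters.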
If you want to pass custom args, you can use arguments:

spec:
  ...
  arguments:
    - "500000"
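Entries under arguments are handed to the application's main entry point as plain positional arguments, the same as trailing arguments to spark-submit. A small sketch, assuming a Python main (the batch_size name is illustrative, not from the original):

```python
import sys

# Illustrative sketch: values from spec.arguments arrive as ordinary
# positional argv entries; the application parses them itself.
def main(argv):
    batch_size = int(argv[0])  # e.g. the "500000" from spec.arguments
    return batch_size

if __name__ == "__main__":
    main(sys.argv[1:])
```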
@jdonnelly-apixio Hi, I found this explanation of sparkConf in the SparkApplication spec:

// SparkConf carries user-specified Spark configuration properties as they would use the "--conf" option in spark-submit.

As we know, in Spark we can specify custom parameters after --conf, so I tried passing custom args in spec.sparkConf. That is why I tried spec.sparkConf.
If I want to pass custom args as key-value pairs, what should I do? spec.arguments may not handle that well.
I was able to pass key-value pairs as follows:

arguments:
  - "arg_1=1111"
  - "arg_2=2222"
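Since spec.arguments delivers these as plain positional strings, the application has to split them itself. A minimal sketch (the parse_kv_args helper is hypothetical; Spark does no such parsing for application arguments):

```python
def parse_kv_args(argv):
    """Split "key=value" style arguments into a dict. Illustrative only."""
    kv = {}
    for arg in argv:
        key, sep, value = arg.partition("=")
        if sep:  # keep only well-formed key=value entries
            kv[key] = value
    return kv

print(parse_kv_args(["arg_1=1111", "arg_2=2222"]))
# -> {'arg_1': '1111', 'arg_2': '2222'}
```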
Hi, I want to use custom parameter configuration, so in my sparkApplication.yaml
Also, in the log of the Spark operator controller, I found that the custom parameter is present.
However, in the log of the Spark driver it encountered an exception.
I want to know: does the Spark operator support custom parameter configuration? And why can the custom parameter not be read in main.scala?