I get the following exception in the "df.groupby('partition_id').count().show()" cell when running the "Loading Data" notebook:
I tried to increase the memory by adding new config options to the SparkSession builder:
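Something along these lines (a minimal sketch; the exact snippet isn't preserved here, and spark.driver.memory / spark.executor.memory are assumed option names):

```python
# Hypothetical reconstruction of the attempt; these are the standard Spark
# memory settings, not necessarily the exact options from my notebook.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.driver.memory", "4g")     # launch-time setting
    .config("spark.executor.memory", "4g")   # launch-time setting
    .getOrCreate()  # returns the already-running session in the notebook
)
```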
But it seems that it reuses the existing Spark session, so these options do not take effect. There is also the following warning: "WARN SparkSession: Using an existing Spark session; only runtime SQL configurations will take effect."
Do you have any ideas on how to increase the memory allocation? I can see from docker container stats that the spark-iceberg container is only using under 2 GiB at the moment.

I found a solution. I added a new configuration line to /opt/spark/conf/spark-defaults.conf inside the spark-iceberg container. In my case, 4 GB was enough:

spark.driver.memory 4g
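After restarting the container so the driver JVM is relaunched (spark.driver.memory is a launch-time setting and cannot be changed on a running session, which is what the warning above is pointing at), the value can be read back from a notebook cell. A quick sanity check, assuming spark is the session object the notebook provides:

```python
# Read the driver memory back from the SparkConf to confirm it applied.
# This reflects the value the driver was launched with, so it only shows
# '4g' after the container (and thus the driver JVM) has been restarted.
print(spark.sparkContext.getConf().get("spark.driver.memory"))  # expect: 4g
```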