Closed PerilousApricot closed 4 years ago
Each `TTreeDataSourceV2PartitionReader` allocates a new batch of threads, which caused me to exhaust the available thread limit on my laptop by looping over `spark.read().load("rootfile").write().parquet("outfile")`.
https://github.com/spark-root/laurelin/blob/46e5e5064c67cc2f269969700bf0edc4008183c1/src/main/java/edu/vanderbilt/accre/laurelin/Root.java#L128-L134
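A minimal sketch of the alternative pattern, assuming hypothetical names (`SharedPoolReader`, `readBasket`): instead of each reader constructing its own pool in the linked code path, all readers submit work to one static JVM-wide executor, so the thread count stays bounded no matter how many readers the loop creates.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: readers share one static executor instead of each
// allocating its own thread pool (which leaks threads when readers are
// created repeatedly, as in the read/write loop above).
public class SharedPoolReader {
    // One JVM-wide pool, created once, regardless of how many readers exist.
    private static final ExecutorService SHARED_POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Each reader submits its work to the shared pool; the stand-in task
    // here just doubles its input.
    public int readBasket(int payload) throws Exception {
        Future<Integer> result = SHARED_POOL.submit(() -> payload * 2);
        return result.get();
    }

    public static void main(String[] args) throws Exception {
        // Simulate a loop that constructs many readers; thread usage stays
        // bounded by the single shared pool.
        int total = 0;
        for (int i = 0; i < 100; i++) {
            total += new SharedPoolReader().readBasket(i);
        }
        System.out.println(total);
        SHARED_POOL.shutdown();
    }
}
```

The key design point is that the executor's lifetime is tied to the JVM (or to the data source as a whole), not to any single partition reader.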
Fixed via a89c12219e1540849e821ecef3411484bda8b052.