Open goshaQ opened 4 years ago
I've noticed that writing a relationship table (here) containing tens of millions of rows with large batches (the configuration default, 100000) generates a lot of warnings like this. I wonder whether this can be minimized by re-partitioning the Spark DataFrame. Or is it perhaps not worth writing relationships in parallel at all, given how many transactions end up being retried?

What you see there is basically back-pressure from Neo4j not being able to handle that many concurrent relationship writes. One option would be to reduce concurrency; another would be to re-partition the relationship DataFrame on (source, target). However, depending on the degree distribution, multiple partitions / threads could still end up writing relationships between the same nodes, though probably less often than with plain random partitioning.
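For concreteness, here is a minimal Spark sketch of both suggestions, assuming a relationship DataFrame with hypothetical `source` and `target` id columns; `writeRelationships` is just a placeholder for whatever writer is actually invoked here:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// `rels` is the relationship DataFrame; `writeRelationships` stands in for the
// actual writer (names here are illustrative, not the connector's API).
def writeWithLessContention(rels: DataFrame, writeRelationships: DataFrame => Unit): Unit = {
  // Option 1: reduce concurrency by shrinking the number of partitions,
  // so fewer threads write to Neo4j at the same time.
  val fewerWriters = rels.coalesce(4)

  // Option 2: co-locate relationships that share endpoints, so concurrent
  // transactions are less likely to contend on locks for the same nodes.
  val byEndpoints = rels.repartition(col("source"), col("target"))

  // Pass whichever variant works better for your data; they can also be
  // combined, e.g. rels.repartition(8, col("source"), col("target")).
  writeRelationships(byEndpoints)
}
```

Coalescing trades write throughput for fewer concurrent transactions, while repartitioning by endpoints keeps the parallelism but lowers the chance that two threads lock the same nodes at once; which one helps more depends on the degree distribution of the graph.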