Closed: jeongyooneo closed this pull request 6 years ago.
@sanha Thanks! I've addressed the comment.
Thanks @wonook! As we've discussed offline, I've reverted to the good old style of calculating hash ranges. Distributing per-TaskGroup data evenly should be done elsewhere (for example, at Partitioner), since calculateHashRange simply divides at best effort given the already-partitioned hash ranges.
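To illustrate the "divide at best effort" behavior described above, here is a minimal hypothetical sketch (not the actual project code; the class and method names are assumptions): it splits an already-partitioned set of hash buckets into contiguous ranges, one per destination TaskGroup, letting the earlier ranges absorb any remainder.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of best-effort hash-range division.
// Given numHashBuckets already-partitioned hash buckets and
// dstParallelism destination TaskGroups, produce one contiguous
// [start, end) range per TaskGroup. Sizes differ by at most 1.
public class HashRangeSketch {
  public static List<int[]> calculateHashRanges(final int numHashBuckets,
                                                final int dstParallelism) {
    final List<int[]> ranges = new ArrayList<>();
    final int quotient = numHashBuckets / dstParallelism;
    final int remainder = numHashBuckets % dstParallelism;
    int start = 0;
    for (int i = 0; i < dstParallelism; i++) {
      // The first `remainder` TaskGroups each take one extra bucket.
      final int size = quotient + (i < remainder ? 1 : 0);
      ranges.add(new int[]{start, start + size});
      start += size;
    }
    return ranges;
  }
}
```

For example, 10 buckets across 3 TaskGroups yield the ranges [0, 4), [4, 7), [7, 10); the division is even only up to the remainder, which is why per-TaskGroup data balancing belongs in the Partitioner rather than here.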
Looks good. I'll merge once the tests pass.
This PR:
- dstParallelism * HashRangeMultipler
- idealSizePerTaskGroup, whose default value is set to 0.