We need a deterministic function, analogous to facilitator selection, that determines which portion of the data each node must download. Pending a more complete partitioning strategy, this can simply download the subset of checkpoint data corresponding to that node's intended address validation space (and eventually verify it with a Merkle proof). This is a requirement for horizontal scalability.
This ties into other partitioning concerns as well (e.g. the tx hash space, the CB tip hash space), but address-space partitioning is the fundamental validation bottleneck; the other partitioning issues are less urgent.
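As a rough illustration of the idea, the interim approach could be a pure hash-based mapping from address to partition index: every node computes the same mapping, so the checkpoint subset a node must download follows directly from which partitions it validates. This is only a sketch under assumed names (`partition_for_address`, `addresses_for_node` are hypothetical, not an existing API), and a real implementation would need to align with facilitator selection and add Merkle verification.

```python
import hashlib

def partition_for_address(address: str, num_partitions: int) -> int:
    """Deterministically map an address to a partition index.

    Because the mapping depends only on the address and the partition
    count, every node derives the same assignment independently.
    """
    digest = hashlib.sha256(address.encode("utf-8")).digest()
    # Use the first 8 bytes of the digest as an unsigned integer,
    # then reduce modulo the number of partitions.
    return int.from_bytes(digest[:8], "big") % num_partitions

def addresses_for_node(addresses, node_partitions, num_partitions):
    """Select the subset of addresses (and hence the checkpoint data)
    a node must download, given the partitions it is responsible for
    validating.
    """
    return [
        a for a in addresses
        if partition_for_address(a, num_partitions) in node_partitions
    ]
```

A node responsible for, say, partitions {0, 2} out of 4 would call `addresses_for_node(all_addresses, {0, 2}, 4)` and fetch only the corresponding checkpoint data; the union over all nodes' partitions covers the full address space.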