It occurred to me that the `scale` attribute might only work when the `data_source` is the full `ds.all_data()`. Here's the loop that re-scales when `BlockCollection.scale=True`:
```python
if self.scale:
    # find the bounding box of all blocks in the data_source
    left_min = np.ones(3, "f8") * np.inf
    right_max = np.ones(3, "f8") * -np.inf
    for block in self.data_source.tiles.traverse():
        np.minimum(left_min, block.LeftEdge, left_min)
        np.maximum(right_max, block.RightEdge, right_max)
    # a single isotropic scale factor from the overall extent
    scale = right_max.max() - left_min.min()
    # shift and rescale each block's edges into [0, 1]
    for block in self.data_source.tiles.traverse():
        block.LeftEdge -= left_min
        block.LeftEdge /= scale
        block.RightEdge -= left_min
        block.RightEdge /= scale
```
Since it only normalizes the edges by the blocks contained within the `data_source`, I suspect there would be some mismatch with the `unitary` scale of the dataset. Could/should we simply pass the full `ds.domain_width` in to `BlockCollection`?
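As a standalone illustration of the concern (plain numpy, not yt_idv code; the block edges are invented), normalizing by the selection's own bounds stretches the selection to fill [0, 1], which is not the same as the dataset's `unitary` coordinates:

```python
import numpy as np

# two invented blocks from a selection covering [0.25, 0.75] of a [0, 1] domain
left_edges = np.array([[0.25, 0.25, 0.25], [0.5, 0.5, 0.5]])
right_edges = np.array([[0.5, 0.5, 0.5], [0.75, 0.75, 0.75]])

# what the scale loop does: normalize by the selection's own bounds
left_min = left_edges.min(axis=0)
scale = right_edges.max() - left_edges.min()
print((left_edges - left_min) / scale)
# -> [[0. 0. 0.], [0.5 0.5 0.5]] : the selection is stretched to fill [0, 1]

# normalizing by the full domain width instead (unitary-like)
print(left_edges / 1.0)
# -> [[0.25 ...], [0.5 ...]] : blocks keep their position within the domain
```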
Thought some more about the last comment -- maybe scaling the `data_source` from 0 to 1 given the `data_source` bounds is actually the correct behavior, so that the selection always fills the screen coordinates. So maybe the scaling actually works as intended!
I'll look back at this soon to see if it's actually an issue or not...
For datasets with large `code_length`, the grid traversal in `BlockCollection` can have some scaling issues, leading to views that do not make sense. For example, loading such a dataset and rendering it with the default scene sets up a very confusing viewpoint.
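A minimal sketch of such a script, following the default-scene pattern from the yt_idv README (the dataset path is hypothetical; any dataset whose domain width is numerically large in `code_length` should do):

```python
import yt
import yt_idv

# hypothetical dataset whose domain width is huge in code_length
ds = yt.load("path/to/large_code_length_dataset")

rc = yt_idv.render_context(height=800, width=800, gui=True)
# the default scene builds a BlockCollection without setting scale=True
sg = rc.add_scene(ds, "density", no_ghost=True)
rc.run()
```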
This happens because:

- the camera defaults are set based on `unitary` values
- `BlockCollection` pulls `LeftEdge` and `RightEdge` directly from the blocks, and those are in `code_length`

So you end up with a view of a very small region of the dataset...
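To make the unit mismatch concrete, here is a small sketch using one of yt's fake in-memory datasets (the bounding box values are invented). yt defines `unitary` relative to the domain width, so it stays order-one no matter how large the `code_length` values get:

```python
import numpy as np
from yt.testing import fake_random_ds

# invented bounding box: the domain spans 1e22 in code units
ds = fake_random_ds(16, bbox=np.array([[0.0, 1e22]] * 3))

print(ds.domain_width.in_units("code_length"))  # [1e22, 1e22, 1e22]
print(ds.domain_width.in_units("unitary"))      # [1.0, 1.0, 1.0]
# camera defaults sized for the ~[0, 1] unitary domain end up looking
# at a tiny corner of block edges that sit near 1e22
```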
The fix might be as simple as using the `scale` attribute when instantiating the `BlockCollection`, but we should double check that it is working as expected and maybe add some logic for when to set `scale=True` (should it always be set to True?). Example successfully using `scale`:
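For reference, a sketch of what that might look like, manually wiring a `BlockCollection` with `scale=True` into the scene (the module paths follow yt_idv's package layout, but the dataset path and the exact scene wiring here are assumptions, not the confirmed example from this issue):

```python
import yt
import yt_idv
from yt_idv.scene_components.blocks import BlockRendering
from yt_idv.scene_data.block_collection import BlockCollection

ds = yt.load("path/to/large_code_length_dataset")  # hypothetical path
dd = ds.all_data()

rc = yt_idv.render_context(height=800, width=800, gui=True)
sg = rc.add_scene(ds, "density", no_ghost=True)

# swap in a BlockCollection with scale=True so block edges are
# re-mapped into [0, 1] before rendering
scaled = BlockCollection(data_source=dd, scale=True)
scaled.add_data("density")
sg.data_objects = [scaled]
sg.components = [BlockRendering(data=scaled)]
rc.run()
```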